---
license: apache-2.0
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
tags:
- finetuned
- lora
---

# HKT-vul-DeepSeek-R1-8b-it-v0.2

This is a LoRA fine-tuned version of [unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit).

## Training Details

- Base Model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
- Fine-tuning Method: LoRA
- Merge Method: merge_and_unload() (see the sketch below)
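
The merge step named above is a standard PEFT operation: the trained LoRA adapter is loaded on top of the base model and folded into its weights. The snippet below is only a minimal sketch of that idea, not the exact training script; the adapter path is a placeholder, and the base is assumed to load through the standard transformers/peft APIs.

```python
# Minimal sketch of the assumed merge step; the adapter path is a placeholder.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map = "auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load the trained LoRA adapter on top of the base, then merge it into the weights.
peft_model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # placeholder path
merged = peft_model.merge_and_unload()  # folds the LoRA deltas into the base layers

merged.save_pretrained("HKT-vul-DeepSeek-R1-8b-it-v0.2")
tokenizer.save_pretrained("HKT-vul-DeepSeek-R1-8b-it-v0.2")
```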

## Usage

### Install necessary libraries

```python
import os

if "COLAB_" not in "".join(os.environ.keys()):
    !pip install unsloth
else:
    # Do this only in Colab notebooks! Otherwise use pip install unsloth
    !pip install --no-deps bitsandbytes accelerate xformers==0.0.29.post3 peft trl==0.15.2 triton cut_cross_entropy unsloth_zoo
    !pip install sentencepiece protobuf datasets huggingface_hub hf_transfer
    !pip install --no-deps unsloth
```

### Load the model

```python
from unsloth import FastLanguageModel
import torch

max_seq_length = 5000 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.

model_name = "weifar/unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit" |

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = model_name,
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf
)

# get_peft_model attaches fresh LoRA adapters; it is only needed if you plan to
# fine-tune further. For inference alone, the model loaded above can be used as-is.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 16,
    lora_dropout = 0, # Supports any, but = 0 is optimized
    bias = "none", # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = False, # We support rank stabilized LoRA
    loftq_config = None, # And LoftQ
)
```

### Use the model

```python
FastLanguageModel.for_inference(model) # Enable native 2x faster inference

eval_prompt = "Your prompt here" # placeholder: replace with your own input

inputs = tokenizer(eval_prompt, return_tensors = "pt").to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 4000)
```
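
If you would rather drive the model through its chat template than a raw prompt string, the sketch below assumes the tokenizer shipped with this repo carries the DeepSeek-R1-Distill template and reuses `model`, `tokenizer`, and `text_streamer` from above; the user message is only an illustrative example.

```python
# Sketch: build the prompt with the tokenizer's chat template (example message only).
messages = [
    {"role": "user", "content": "Review the following C function for vulnerabilities:\n"
                                "int copy(char *s) { char buf[8]; strcpy(buf, s); return 0; }"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt = True,  # append the assistant-turn marker before generating
    return_tensors = "pt",
).to("cuda")

_ = model.generate(input_ids, streamer = text_streamer, max_new_tokens = 4000)
```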