---
library_name: peft
tags:
- code
- instruct
- gpt2
datasets:
- HuggingFaceH4/no_robots
base_model: gpt2
license: apache-2.0
---
## Training procedure


The following `bitsandbytes` quantization config was used during training (a loading sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
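
For reference, this config corresponds roughly to the following `BitsAndBytesConfig` from `transformers` (a minimal sketch, assuming `transformers`, `bitsandbytes`, `accelerate`, and `torch` are installed; `gpt2` is the base model listed on this card):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the settings listed above: 4-bit NF4 weights,
# double quantization, and bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)

# Load the base model with the same quantization used during training.
model = AutoModelForCausalLM.from_pretrained(
    "gpt2",
    quantization_config=bnb_config,
    device_map="auto",
)
```
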
### Framework versions


- PEFT 0.5.0


### Finetuning Overview:

**Model Used:** gpt2

**Dataset:** HuggingFaceH4/no_robots  

#### Dataset Insights:

[No Robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better.
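
As a quick sanity check, the dataset can be loaded and inspected with the `datasets` library (a minimal sketch; split and column names are whatever the dataset publishes):

```python
from datasets import load_dataset

# Download the instruction/demonstration pairs used for SFT.
ds = load_dataset("HuggingFaceH4/no_robots")

# Inspect the available splits and columns, then peek at one example.
print(ds)
first_split = next(iter(ds))
print(ds[first_split][0])
```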

#### Finetuning Details:

This model was finetuned with [MonsterAPI](https://monsterapi.ai)'s [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm). The run:

- Was notably cost-effective.
- Completed 1 epoch in 3 min 40 s on a single A6000 48 GB GPU.
- Cost `$0.101` for the entire epoch.

#### Hyperparameters & Additional Details:

- **Epochs:** 1
- **Cost Per Epoch:** $0.101
- **Total Finetuning Cost:** $0.101
- **Model Path:** gpt2
- **Learning Rate:** 0.0002
- **Data Split:** 100% train 
- **Gradient Accumulation Steps:** 4
- **LoRA r:** 32
- **LoRA alpha:** 64 (see the `LoraConfig` sketch below)
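
A minimal sketch of how these values might be expressed as a `peft.LoraConfig`; `lora_dropout` and `target_modules` are assumptions, since the card lists only `r` and `alpha`:

```python
from peft import LoraConfig, TaskType, get_peft_model

# r and lora_alpha match the card; the remaining fields are assumptions
# (gpt2 packs its attention projections into a single `c_attn` module).
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,          # assumed; not listed on the card
    target_modules=["c_attn"],  # assumed target for gpt2
)

# Applied to the quantized base model from the earlier sketch:
# model = get_peft_model(model, lora_config)
```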

#### Prompt Structure:
```
<|system|> <|endoftext|> <|user|> [USER PROMPT]<|endoftext|> <|assistant|> [ASSISTANT ANSWER] <|endoftext|>
```
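
A minimal inference sketch that follows this template, stopping before the assistant answer so the model generates it (the adapter repo ID below is a hypothetical placeholder; substitute the actual adapter path):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Hypothetical adapter repo ID -- replace with the real one.
model = PeftModel.from_pretrained(base, "your-username/gpt2-no-robots-lora")

prompt = "<|system|> <|endoftext|> <|user|> What is PEFT?<|endoftext|> <|assistant|>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
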
#### Training loss:

![training loss](https://cdn-uploads.huggingface.co/production/uploads/63ba46aa0a9866b28cb19a14/9bgb518kFwtDsFtrHzmTu.png)
