---
base_model: google/gemma-3-1b-it
library_name: transformers
model_name: Sahibsingh12/gemma3-1b-thinking
tags:
- generated_from_trainer
- trl
- grpo
- peft
- adapter
license: gemma
datasets:
- openai/gsm8k
---
## Model Information
This repository contains `Sahibsingh12/gemma3-1b-thinking`, a PEFT (Parameter-Efficient Fine-Tuning) adapter for `google/gemma-3-1b-it`. Rather than a full copy of the model weights, it ships only a lightweight adapter that is applied on top of the base model at load time, which makes it easy to distribute and to run on limited hardware.
The model was trained using [TRL](https://github.com/huggingface/trl) with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Training Approach
This adapter was fine-tuned with reinforcement learning to enhance reasoning capabilities:
- Used the step-by-step solutions in [OpenAI's GSM8K dataset](https://huggingface.co/datasets/openai/gsm8k)
- Implemented GRPO reward functions (a minimal sketch follows this list)
- Based on [Will Brown's approach](https://gist.github.com/willccbb/4676755236bb08cab5f4e54a0475d6fb)
- Training implementation from [Ben Burtenshaw's Colab](https://x.com/ben_burtenshaw/status/1900202583056068663)
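The exact reward functions are not published with this card, so the following is a minimal sketch of a GRPO correctness reward in the spirit of Will Brown's gist: the model is prompted to answer inside `<answer>` tags and is rewarded when the extracted answer matches the GSM8K gold answer. The tag format, function names, and reward values are illustrative assumptions.
```python
import re

def extract_xml_answer(text: str) -> str:
    """Pull the contents of an <answer>...</answer> block, if present."""
    match = re.search(r"<answer>\s*(.*?)\s*</answer>", text, re.DOTALL)
    return match.group(1).strip() if match else ""

def correctness_reward(prompts, completions, answer, **kwargs) -> list[float]:
    """Score 2.0 when the extracted answer matches the gold answer.

    TRL's GRPOTrainer passes extra dataset columns (here `answer`, the
    GSM8K solution reduced to its final number) as keyword arguments.
    """
    responses = [completion[0]["content"] for completion in completions]
    return [2.0 if extract_xml_answer(r) == a else 0.0
            for r, a in zip(responses, answer)]
```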
### Training Details
- **Base Model**: google/gemma-3-1b-it
- **Library**: transformers
- **Training Method**: GRPO (from DeepSeekMath paper)
- **PEFT Method**: LoRA (Low-Rank Adaptation)
- **Framework Versions**:
- TRL: 0.16.0.dev0
- Transformers: 4.50.0.dev0
- PEFT: 0.9.0
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
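The exact training script is not included here, but the sketch below shows how these components typically fit together with TRL's `GRPOTrainer`. The LoRA settings, hyperparameters, and prompt formatting are illustrative assumptions, and `correctness_reward` refers to the sketch above.
```python
from datasets import load_dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

def to_prompt(example):
    # GRPOTrainer expects a "prompt" column; GSM8K gold answers end in "#### <number>".
    return {
        "prompt": [{"role": "user", "content": example["question"]}],
        "answer": example["answer"].split("####")[-1].strip(),
    }

dataset = load_dataset("openai/gsm8k", "main", split="train").map(to_prompt)

# Illustrative LoRA settings; the rank and target modules actually used are not documented.
peft_config = LoraConfig(r=16, lora_alpha=32, target_modules="all-linear", task_type="CAUSAL_LM")

training_args = GRPOConfig(
    output_dir="gemma3-1b-thinking",
    learning_rate=5e-6,
    num_generations=8,           # completions sampled per prompt for the group-relative baseline
    max_completion_length=256,
)

trainer = GRPOTrainer(
    model="google/gemma-3-1b-it",
    reward_funcs=[correctness_reward],  # e.g. the sketch above
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```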
## Requirements
```
torch
transformers
peft
accelerate
```
## Installation
1. Clone this repository or download the script
2. Install the required packages (`accelerate` is needed for `device_map="auto"`):
```bash
pip install torch transformers peft accelerate
```
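3. Optionally, verify the environment with a quick import check:
```python
# Confirm the required libraries import and report their versions.
import torch
import transformers
import peft

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("peft:", peft.__version__)
print("CUDA available:", torch.cuda.is_available())
```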
## Usage
### Running with PEFT Adapter
Since this is a PEFT adapter, you need to load both the base model and the adapter:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load the base model and tokenizer
base_model_id = "google/gemma-3-1b-it"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
base_model_id,
device_map="auto", # Automatically determine the device
torch_dtype="auto" # Use the appropriate precision
)
# Load the PEFT adapter
adapter_model_id = "Sahibsingh12/gemma3-1b-thinking"
model = PeftModel.from_pretrained(model, adapter_model_id)
# Generate text
prompt = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
inputs,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.9,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
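To deploy the model without a runtime PEFT dependency, the LoRA weights can be folded into the base model with PEFT's `merge_and_unload()`; the output path below is illustrative:
```python
# Merge the adapter into the base weights so inference no longer needs PEFT.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("gemma3-1b-thinking-merged")
tokenizer.save_pretrained("gemma3-1b-thinking-merged")
```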
### Chat Format Example
For chat-formatted inputs:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load the base model and tokenizer
base_model_id = "google/gemma-3-1b-it"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
base_model_id,
device_map="auto",
torch_dtype="auto"
)
# Load the PEFT adapter
adapter_model_id = "Sahibsingh12/gemma3-1b-thinking"
model = PeftModel.from_pretrained(model, adapter_model_id)
# Prepare chat messages
messages = [
{"role": "user", "content": "Calculate the area of a circle with radius 5cm"}
]
# Format messages for the model
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# Generate response
inputs = tokenizer.encode(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)  # the chat template already adds <bos>
outputs = model.generate(
inputs,
max_new_tokens=256,
do_sample=True,
temperature=0.7,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
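Note that `outputs[0]` contains the prompt followed by the reply, so the code above prints the full transcript. To print only the newly generated text, slice off the prompt tokens before decoding:
```python
# Decode only the tokens generated after the prompt.
new_tokens = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```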
### Using the Pipeline API
For a simpler approach, the pipeline API can load the adapter repository directly; with `peft` installed, it resolves and downloads the base model alongside the adapter automatically:
```python
from transformers import pipeline
# Initialize the pipeline with the adapter model
generator = pipeline(
"text-generation",
model="Sahibsingh12/gemma3-1b-thinking",
model_kwargs={"device_map": "auto", "torch_dtype": "auto"}
)
# Generate text
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
output = generator(
[{"role": "user", "content": question}],
max_new_tokens=128,
do_sample=True,
temperature=0.7,
return_full_text=False
)[0]
print(output["generated_text"])
```
## Available Command-Line Arguments
If you use the command-line script, the following arguments are available:
| Argument | Description | Default |
|----------|-------------|---------|
| `--prompt` | Input text for generation | "If you had a time machine..." |
| `--base-model` | Hugging Face base model name | "google/gemma-3-1b-it" |
| `--adapter` | Hugging Face adapter model name | "Sahibsingh12/gemma3-1b-thinking" |
| `--device` | Computing device (cpu, cuda, mps, or auto) | "auto" |
| `--max-tokens` | Maximum number of new tokens to generate | 128 |
| `--temperature` | Sampling temperature | 0.7 |
| `--top-p` | Top-p sampling parameter | 0.9 |
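A typical invocation might look like the following; the script name `generate.py` is an assumption, so substitute the actual name of the script from this repository:
```bash
# Hypothetical script name; adjust to match the script you downloaded.
python generate.py \
  --prompt "Calculate the area of a circle with radius 5cm" \
  --adapter Sahibsingh12/gemma3-1b-thinking \
  --device auto \
  --max-tokens 256
```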
## Citations
### Implementation References
- **Will Brown's Approach**: [GitHub Gist](https://gist.github.com/willccbb/4676755236bb08cab5f4e54a0475d6fb)
- **Ben Burtenshaw's Implementation**: [Twitter/X Post](https://x.com/ben_burtenshaw/status/1900202583056068663)
### GRPO
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
### TRL
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
### PEFT
```bibtex
@misc{peft,
    title = {{PEFT: State-of-the-art Parameter-Efficient Fine-Tuning methods}},
    author = {Sourab Mangrulkar and Sylvain Gugger and Lysandre Debut and Younes Belkada and Sayak Paul and Benjamin Bossan},
    year = 2022,
    journal = {GitHub repository},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/huggingface/peft}}
```
## License
This adapter is distributed under the same license as the base model (the Gemma Terms of Use).
## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.