Model Introduction
First-generation Reasoning Coder LLM. We apply GRPO reward fine-tuning to a Llama 3.2 3B model with integrated agent capabilities.
DeepSeek’s GRPO (Group Relative Policy Optimization) is a reinforcement learning algorithm that trains reasoning models without needing a value function, reducing memory and computational costs compared to methods like PPO. Unsloth leverages GRPO to turn standard language models (up to 15B parameters) into reasoning models using as little as 5GB of VRAM, drastically cutting hardware requirements compared to earlier setups. GRPO is also compatible with efficient fine-tuning techniques such as QLoRA and LoRA, and in tests even minimal training (e.g., 100 steps on Phi-4) was enough for the model to exhibit enhanced reasoning behavior, such as generating a “thinking token” and producing correct answers. For details, see Unsloth’s RL fine-tuning documentation (Reasoning - GRPO & RL).
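The full training recipe is not reproduced here, but a minimal GRPO fine-tuning run with Unsloth and TRL looks roughly like the sketch below. The prompt set, reward function, and hyperparameters are illustrative assumptions, not the exact configuration used to train this model.
# Minimal GRPO fine-tuning sketch with Unsloth + TRL (illustrative only)
from unsloth import FastLanguageModel
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Load the base model in 4-bit and attach LoRA adapters (QLoRA-style)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="EpistemeAI/Llama-3.2-3B-Agent007-Coder",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Toy prompt dataset; GRPOTrainer expects a "prompt" column
dataset = Dataset.from_dict(
    {"prompt": ["Generate python code for snake <reasoning></reasoning>"] * 64}
)

# Hypothetical reward: favor completions that wrap their reasoning in <reasoning> tags
def format_reward(completions, **kwargs):
    return [
        1.0 if "<reasoning>" in c and "</reasoning>" in c else 0.0
        for c in completions
    ]

trainer = GRPOTrainer(
    model=model,
    processing_class=tokenizer,
    reward_funcs=[format_reward],
    args=GRPOConfig(
        output_dir="grpo-out",
        max_steps=100,                  # ~100 steps sufficed to elicit reasoning on Phi-4
        num_generations=4,              # group size for relative advantages
        per_device_train_batch_size=4,
    ),
    train_dataset=dataset,
)
trainer.train()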
How to use
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Generate python code for snake <reasoning></reasoning>"},
]
pipe = pipeline("text-generation", model="EpistemeAI/R01R-Llama-3.2-3B-Agent007-Coder")
print(pipe(messages))
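You can also load the model and tokenizer directly instead of using the pipeline helper; the generation settings below (dtype, max_new_tokens) are illustrative defaults, not values prescribed by the model authors.
# Load the model and tokenizer directly (alternative to the pipeline helper)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/R01R-Llama-3.2-3B-Agent007-Coder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Generate python code for snake <reasoning></reasoning>"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))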
Uploaded model
- Developed by: EpistemeAI
- License: apache-2.0
- Finetuned from model: EpistemeAI/Llama-3.2-3B-Agent007-Coder
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.