
Model Introduction

First-generation reasoning coder LLM. We apply GRPO reward fine-tuning to the Llama 3.2 3B model, with integrated agent capabilities.

DeepSeek’s GRPO (Group Relative Policy Optimization) is a reinforcement learning algorithm that trains reasoning models without needing a value function, thereby reducing memory and computational costs compared to methods like PPO. Unsloth leverages GRPO to transform standard language models (up to 15B parameters) into reasoning models, requiring as little as 5GB of VRAM, drastically cutting hardware needs compared to earlier setups. Notably, GRPO is compatible with efficient fine-tuning techniques like QLoRA and LoRA, and in tests, even minimal training (e.g., 100 steps on Phi-4) enabled the model to exhibit enhanced reasoning capabilities, such as generating a “thinking token” and producing correct answers. For details, see the Unsloth documentation on RL fine-tuning: Reasoning - GRPO & RL.
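
The reward fine-tuning loop follows the usual Unsloth + TRL GRPO recipe. The sketch below is illustrative only: the dataset, reward function, and hyperparameters are assumptions for demonstration, not the exact training configuration used for this model.

# Minimal GRPO fine-tuning sketch with Unsloth + TRL (illustrative; the actual
# dataset, reward function, and hyperparameters for this model may differ).
from unsloth import FastLanguageModel
from trl import GRPOConfig, GRPOTrainer
from datasets import load_dataset

# Load the base model in 4-bit and attach LoRA adapters (QLoRA-style).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="EpistemeAI/Llama-3.2-3B-Agent007-Coder",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Toy reward: favor completions that wrap their chain of thought in
# <reasoning></reasoning> tags. A real run would also score answer correctness.
def reasoning_format_reward(completions, **kwargs):
    return [1.0 if "<reasoning>" in c and "</reasoning>" in c else 0.0
            for c in completions]

# Example prompt source only; any dataset with a "prompt" column works.
dataset = load_dataset("openai/gsm8k", "main", split="train")
dataset = dataset.rename_column("question", "prompt")

trainer = GRPOTrainer(
    model=model,
    reward_funcs=reasoning_format_reward,
    args=GRPOConfig(output_dir="grpo-out", num_generations=4,
                    max_completion_length=256),
    train_dataset=dataset,
)
trainer.train()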

How to use

# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    # Prompt includes the <reasoning></reasoning> tags used by this reasoning model
    {"role": "user", "content": "Generate python code for snake <reasoning></reasoning>"},
]
pipe = pipeline("text-generation", model="EpistemeAI/R01R-Llama-3.2-3B-Agent007-Coder")
print(pipe(messages))
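
For finer control over sampling and output length, the model can also be loaded directly with transformers. This is a minimal sketch using the standard AutoModelForCausalLM and chat-template API; the generation parameters shown are example values, not recommended settings from the model authors.

# Lower-level loading for explicit control over generation parameters
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/R01R-Llama-3.2-3B-Agent007-Coder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Generate python code for snake <reasoning></reasoning>"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=1024, temperature=0.7, do_sample=True)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))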

Uploaded model

  • Developed by: EpistemeAI
  • License: apache-2.0
  • Finetuned from model: EpistemeAI/Llama-3.2-3B-Agent007-Coder

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
