# Llama-3.2-Gitara-1B
Gitara = git + ara (the parrot genus): your local stochastic parrot for git commands.
A 1B parameter function-calling model fine-tuned by Distil Labs to translate plain English into git commands. The smallest Gitara variant, optimized for resource-constrained environments while maintaining strong accuracy.
## Model Details

| | |
|---|---|
| Developed by | Distil Labs GmbH |
| Model type | Causal language model, fine-tuned for function calling |
| Language | English |
| License | Llama 3.2 Community License |
| Fine-tuned from | meta-llama/Llama-3.2-1B-Instruct |
## Use Case
Given a natural language description of a git operation, the model outputs a structured JSON tool call that can be converted to an executable git command.
Supported commands: status · add · commit · push · pull · branch · switch · restore · merge · stash · rebase · reset · log
### Example

Input:

```
push feature-x to origin, override any changes there and track it
```

Output:

```json
{"name": "git_push", "parameters": {"remote": "origin", "branch": "feature-x", "force": true, "set_upstream": true}}
```

Resulting command:

```bash
git push origin feature-x --force --set-upstream
```
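Translating the tool call into an executable command is a mechanical step. Below is a minimal sketch of such a renderer; the parameter-to-flag mapping is an assumption for illustration, not the repository's actual implementation:

```python
import json

# Assumed mapping from boolean parameters to git flags (illustrative only).
BOOL_FLAGS = {
    "force": "--force",
    "set_upstream": "--set-upstream",
    "patch": "--patch",
    "graph": "--graph",
}

def render(tool_call: str) -> str:
    """Convert a JSON tool call into a git command string."""
    call = json.loads(tool_call)
    subcommand = call["name"].removeprefix("git_")
    args = []
    for key, value in call["parameters"].items():
        if key in BOOL_FLAGS:
            if value:
                args.append(BOOL_FLAGS[key])
        elif key == "limit":
            args += ["-n", str(value)]          # git log -n <count>
        elif key == "strategy":
            args += ["--strategy", value]       # git merge --strategy <name>
        elif key == "mode":
            args.append(f"--{value}")           # git reset --soft / --hard
        else:
            args.append(str(value))             # positional: remote, branch, ref, ...
    return " ".join(["git", subcommand, *args])

print(render('{"name": "git_push", "parameters": {"remote": "origin", '
             '"branch": "feature-x", "force": true, "set_upstream": true}}'))
# git push origin feature-x --force --set-upstream
```

Because `json.loads` preserves key order, positional parameters come out in the order the model emitted them.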
### More Examples

| Query | Tool Call Output |
|---|---|
| what's in the latest stash, show diff | `{"name": "git_stash", "parameters": {"action": "show", "patch": true}}` |
| undo last commit but keep the changes | `{"name": "git_reset", "parameters": {"mode": "soft", "ref": "HEAD~1"}}` |
| show 8 commits for current branch with graph | `{"name": "git_log", "parameters": {"limit": 8, "graph": true}}` |
| merge vendor branch preferring ours | `{"name": "git_merge", "parameters": {"branch": "vendor", "strategy": "ours"}}` |
## Training

The model was trained using knowledge distillation from the teacher model GPT-OSS-120B.

| | |
|---|---|
| Training method | LoRA fine-tuning |
| Seed examples | ~100 (manually validated) |
| Synthetic examples | 10,000 |
| Teacher model | GPT-OSS-120B |
### Training Process
- Created ~100 seed examples covering all 13 git commands with realistic query phrasings
- Expanded seed data to 10,000 synthetic training examples using the Distil Labs platform
- Fine-tuned Llama 3.2 1B Instruct using LoRA
- Validated on held-out test set
Training data and configuration available in the GitHub repository.
## Evaluation
Evaluated on 50 held-out test examples. Accuracy is measured by parsing outputs into normalized Python dicts and comparing for structural equality.
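This comparison can be sketched as follows, assuming a simple normalization (parse to a dict, so key order and whitespace are ignored); the exact normalization used in the evaluation is not published here:

```python
import json

def normalize(output: str):
    """Parse a model output into a dict; None if it is not valid JSON."""
    try:
        return json.loads(output)
    except json.JSONDecodeError:
        return None

def is_correct(prediction: str, reference: str) -> bool:
    """Structural equality: same tool name and parameters, any key order."""
    pred = normalize(prediction)
    return pred is not None and pred == normalize(reference)

# Key order and formatting differences do not count as errors.
assert is_correct(
    '{"parameters": {"graph": true, "limit": 8}, "name": "git_log"}',
    '{"name": "git_log", "parameters": {"limit": 8, "graph": true}}',
)

# Accuracy over a test set is then the fraction of structurally equal pairs:
# accuracy = sum(is_correct(p, r) for p, r in pairs) / len(pairs)
```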
| Model | Parameters | Accuracy |
|---|---|---|
| GPT-OSS-120B (teacher) | 120B | 0.92 ± 0.02 |
| Llama 3.2 3B Instruct (tuned) | 3B | 0.92 ± 0.01 |
| Llama 3.2 1B Instruct (tuned) | 1B | 0.90 ± 0.01 |
| Llama 3.2 3B Instruct (base) | 3B | 0.12 ± 0.05 |
| Llama 3.2 1B Instruct (base) | 1B | 0.00 ± 0.01 |
The tuned 1B model achieves 0.90 accuracy while being 120x smaller than the teacher. The base 1B model completely fails (0.00 accuracy), confirming that fine-tuning is essential.
## When to Use 1B vs 3B
| Choose 1B | Choose 3B |
|---|---|
| Memory-constrained devices | Maximum accuracy needed |
| Faster inference required | Complex or ambiguous queries |
| 0.90 accuracy is acceptable | Edge cases matter |
## How to Use

### With Ollama (Recommended)

```bash
# Download model
huggingface-cli download distil-labs/Distil-gitara-v2-Llama-3.2-1B-Instruct --local-dir distil-model

# Build with Ollama
cd distil-model
ollama create gitara-1b -f Modelfile

# Run
ollama run gitara-1b "show staged changes with diffs"
```
### With Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "distil-labs/Distil-gitara-v2-Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# See the GitHub repo for the full tool-calling implementation
```
For complete usage instructions, see the GitHub repository.
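One piece such an implementation needs is robust extraction of the JSON call from raw generated text, since small models occasionally wrap the call in extra tokens or prose. A hedged sketch (the brace-matching heuristic is an assumption; see the repository for the actual parsing code):

```python
import json

def extract_tool_call(text: str):
    """Return the first balanced top-level {...} in `text` parsed as JSON,
    or None if no valid JSON object is found."""
    start = text.find("{")
    while start != -1:
        depth = 0
        for i in range(start, len(text)):
            if text[i] == "{":
                depth += 1
            elif text[i] == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(text[start : i + 1])
                    except json.JSONDecodeError:
                        break  # malformed candidate; try the next "{"
        start = text.find("{", start + 1)
    return None

raw = 'Tool call: {"name": "git_status", "parameters": {}}'
print(extract_tool_call(raw))
```

Scanning for a balanced span rather than calling `json.loads` on the whole output makes the parser tolerant of leading or trailing chatter around the call.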
## Inference Speed
Faster than the 3B variant due to smaller size. Suitable for on-device deployment on mobile or embedded systems.
## Limitations

- Accuracy is 0.90, meaning approximately 1 in 10 queries may produce incorrect output
- Limited to the 13 supported git commands and their common options
- Does not support `git checkout` (use `switch` and `restore` instead)
- Single-turn only; does not support multi-step workflows
- May struggle more with ambiguous or complex queries compared to the 3B variant
## Model Sources

| | |
|---|---|
| Homepage | https://distillabs.ai |
| Repository | https://github.com/distil-labs/distil-gitara |
| Blog post | https://distillabs.ai/blog/gitara |
| Contact | [email protected] |
## Related Models
- Llama-3.2-Gitara-3B — Larger variant (0.92 accuracy)
## Citation

```bibtex
@misc{gitara2025,
  author = {Distil Labs},
  title = {Gitara: A Function-Calling Git Agent},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/distil-labs/Distil-gitara-v2-Llama-3.2-1B-Instruct}
}
```