# DAGGER-12B-SFT
DAGGER-12B-SFT is a supervised fine-tuned model for computational-graph generation in Bangla mathematical reasoning. It is the SFT-only variant, which serves both as a standalone model and as the initialization for GRPO training.
## Model Details

| Attribute | Value |
|---|---|
| Base Model | Gemma-3-12B-Instruct |
| Training | Supervised Fine-Tuning |
| Parameters | 12B |
| LoRA Rank | 64 |
| Max Sequence Length | 4096 |
## Evaluation

Accuracy (%) on Bangla math benchmarks, on the original problems and on versions augmented with distractors:

| Dataset | Original | +Distractor | Drop |
|---|---|---|---|
| MGSM | 70.0 | 56.8 | 13.2 |
| MSVAMP | 76.8 | 65.4 | 11.5 |

Overall weighted average accuracy: **66.7**.
Compared against the GRPO-refined variant:

| Model | Weighted Avg Accuracy |
|---|---|
| dagger-12B_SFT | 66.7 |
| dagger-12B_SFT_GRPO | 69.4 (+2.7) |

GRPO training on top of this SFT checkpoint yields a +2.7-point improvement.
## Usage

````python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "dipta007/dagger-12B_SFT"

# Load the tokenizer and model; torch_dtype="auto" keeps the checkpoint's
# native precision and device_map="auto" places the 12B weights automatically.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
USER_PROMPT_TEMPLATE = """You are an expert Bengali Math Reasoner. Your task is to solve mathematical problems by constructing a "Computational Graph".
### Graph Rules:
- `id`: Unique identifier (e.g., "n1", "n2").
- `val`: The raw number extracted from text (for input nodes).
- `op`: The operation (`add`, `sub`, `mul`, `div`, `round`, `sqrt`, `floor`, `sum`, `mean`, `ratio_split`). Use `const` for input numbers.
- `args`: List of input node IDs.
- `distractor`: Boolean (`true` / `false`). Set to `true` if the node is NOT used in the final calculation path.
- `label`: Label for the node.
### Available Operations:
- Input: `const` (Use this for all numbers found in text or constants).
- Arithmetic: `add`, `sub`, `mul`, `div`, `abs` (absolute difference).
- Logic/Stats: `sum`, `mean`, `min` (minimum), `max` (maximum).
- Rounding: `round` (nearest int), `floor` (round down), `ceil` (round up).
- Advanced: `sqrt`, `pow`, `mod` (remainder), `gcd`, `lcm`.
- Output: `identity` ("final_result" points to the answer node)
Only output a JSON graph representing the solution, nothing else. Nodes must be topologically sorted, and there must be exactly one "final_result" node that represents the final answer. One example is provided below.
### Example:
Question:
মিনার কাছে ১২২১৯৫ টা কলম আছে। রাজুর কাছে ২৫০৮৪ টা কলম আছে। মিনা রাজুর কাছে ১১২৬ টি কলম চাইল। রাজু ১০০০ টি কলম দিতে রাজি হল, কিন্তু পরে আর দিলেনা। প্রতিটি কলমের দাম ৪৫.৬ টাকা। মিনা যদি কলমগুলো বিক্রি করতে চায়, সে কত টাকা পাবে?
Output:
```json
{{
  "nodes": [
    {{"id": "n1", "op": "const", "val": 122195, "distractor": false, "label": "মিনার কলম"}},
    {{"id": "n2", "op": "const", "val": 25084, "distractor": true, "label": "রাজুর কলম"}},
    {{"id": "n3", "op": "const", "val": 1126, "distractor": true, "label": "মিনা রাজুর কাছে চাইল"}},
    {{"id": "n4", "op": "const", "val": 1000, "distractor": true, "label": "রাজু দিতে রাজি হল"}},
    {{"id": "n5", "op": "const", "val": 45.6, "distractor": false, "label": "প্রতিটি কলমের দাম"}},
    {{"id": "total_money", "op": "mul", "args": ["n1", "n5"], "distractor": false, "label": "মিনার মোট টাকা"}},
    {{"id": "final_result", "op": "identity", "args": ["total_money"], "distractor": false, "label": "চূড়ান্ত উত্তর"}}
  ]
}}
```
### Your Task:
Question:
{question}
Output:
"""
# Question (in Bangla): "Roger has 5 tennis balls. He buys 2 more cans of
# tennis balls. Each can holds 3 tennis balls. How many tennis balls does he have now?"
question = "রজারের 5টি টেনিস বল আছে। সে আরও 2 ক্যান টেনিস বল কিনেছে। প্রতিটি ক্যানে 3টি করে টেনিস বল আছে। তার কাছে এখন কতগুলি টেনিস বল আছে?"
prompt = USER_PROMPT_TEMPLATE.format(question=question)
messages = [
{"role": "user", "content": prompt}
]
# Apply the chat template, tokenize, and move inputs to the model's device
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
# Generate; do_sample=True is required for temperature/top_p to take effect
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.7, top_p=0.8)
# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][len(inputs.input_ids[0]):], skip_special_tokens=True)
print(response)
````
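Because the model emits a JSON computational graph rather than a final number, the answer has to be recovered by executing the graph. Below is a minimal, illustrative executor for the operations listed in the prompt; `execute_graph` and the `OPS` table are hypothetical helpers written for this card, not the authors' released evaluation code.

```python
import json
import math

# Illustrative operator table for the ops listed in the prompt above.
# Note: `ratio_split` from the rules section is omitted, since its exact
# semantics are not specified on this card.
OPS = {
    "add":   lambda a: a[0] + a[1],
    "sub":   lambda a: a[0] - a[1],
    "mul":   lambda a: a[0] * a[1],
    "div":   lambda a: a[0] / a[1],
    "abs":   lambda a: abs(a[0] - a[1]),  # absolute difference, per the prompt
    "sum":   lambda a: sum(a),
    "mean":  lambda a: sum(a) / len(a),
    "min":   lambda a: min(a),
    "max":   lambda a: max(a),
    "round": lambda a: round(a[0]),
    "floor": lambda a: math.floor(a[0]),
    "ceil":  lambda a: math.ceil(a[0]),
    "sqrt":  lambda a: math.sqrt(a[0]),
    "pow":   lambda a: a[0] ** a[1],
    "mod":   lambda a: a[0] % a[1],
    "gcd":   lambda a: math.gcd(int(a[0]), int(a[1])),
    "lcm":   lambda a: math.lcm(int(a[0]), int(a[1])),
}

def execute_graph(graph_json: str):
    """Evaluate a computational graph and return the final_result value.

    Relies on the prompt's guarantee that nodes are topologically sorted,
    so a single forward pass suffices (no recursion needed).
    """
    values = {}
    for node in json.loads(graph_json)["nodes"]:
        if node["op"] == "const":
            values[node["id"]] = node["val"]
        elif node["op"] == "identity":
            values[node["id"]] = values[node["args"][0]]
        else:
            values[node["id"]] = OPS[node["op"]]([values[i] for i in node["args"]])
    return values["final_result"]

# For the tennis-ball question above, a correct graph evaluates to 11:
graph = """{"nodes": [
  {"id": "n1", "op": "const", "val": 5, "distractor": false, "label": "starting balls"},
  {"id": "n2", "op": "const", "val": 2, "distractor": false, "label": "cans"},
  {"id": "n3", "op": "const", "val": 3, "distractor": false, "label": "balls per can"},
  {"id": "new_balls", "op": "mul", "args": ["n2", "n3"], "distractor": false, "label": "new balls"},
  {"id": "total", "op": "add", "args": ["n1", "new_balls"], "distractor": false, "label": "total balls"},
  {"id": "final_result", "op": "identity", "args": ["total"], "distractor": false, "label": "answer"}
]}"""
print(execute_graph(graph))  # 11
```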
## Training Hyperparameters

| Parameter | Value |
|---|---|
| LoRA Rank / Alpha | 64 / 128 |
| Global Batch Size | 256 |
| Epochs | 4 |
| Learning Rate | 1e-5 → 1e-6 |
| Optimizer | AdamW |
| Weight Decay | 0.001 |
| Precision | BF16 |
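For anyone reproducing the setup, the rank and alpha above map onto a PEFT `LoraConfig` roughly as follows. This is a sketch under stated assumptions: the target modules and dropout are not specified on this card, so typical Gemma attention/MLP projections and zero dropout are assumed.

```python
from peft import LoraConfig

# Illustrative config only; r and lora_alpha come from the table above, while
# target_modules and lora_dropout are assumptions, not taken from the card.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.0,  # assumption: not stated on the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```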
## Model Variants

| Model | Training | Weighted Avg Accuracy |
|---|---|---|
| dagger-12B_SFT | SFT | 66.7 |
| dagger-12B_SFT_GRPO | SFT → GRPO | 69.4 |
| dagger-12B_GRPO | Base → GRPO | 69.4 |
## Citation

```bibtex
@misc{nazi2026dagdaggerdistractorawaregraphgeneration,
  title={{\dag}DAGGER: Distractor-Aware Graph Generation for Executable Reasoning in Math Problems},
  author={Zabir Al Nazi and Shubhashis Roy Dipta and Sudipta Kar},
  year={2026},
  eprint={2601.06853},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2601.06853},
}
```