Model Card for Router

This model is fine-tuned from Gemma 3 270M to serve as a router for reasoning tasks, classifying input queries into one of three categories:

no_reasoning – Direct factual lookup or simple recall (e.g., "What is the capital of France?").

low_reasoning – Requires light reasoning such as simple arithmetic, comparisons, or single logical steps (e.g., "If John has 5 apples and eats 2, how many are left?").

high_reasoning – Requires multi-step reasoning, deep logical chains, or complex problem-solving (e.g., "Prove that the sum of two even numbers is always even").
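
In practice, a caller uses the predicted label to pick a downstream model tier. The Python sketch below illustrates one way to do that; the tier names and the route helper are hypothetical, not part of this model card.

# Hypothetical routing table; the tier model names are placeholders.
TIERS = {
    "no_reasoning": "small-fast-model",
    "low_reasoning": "mid-size-model",
    "high_reasoning": "large-reasoning-model",
}

def route(label: str) -> str:
    # Unknown labels escalate to the strongest tier as a safe default.
    return TIERS.get(label, "large-reasoning-model")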

Quick start

from transformers import pipeline

# The router is a text-generation model; its reply is one of
# no_reasoning, low_reasoning, or high_reasoning.
pipe = pipeline("text-generation", model="d-s-b/Router")
messages = [
    {"role": "user", "content": "What is the capital of India?"}
]
print(pipe(messages))
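
The text-generation pipeline returns the full conversation, including the new assistant turn. Assuming the model replies with the bare label string (an assumption; the card does not document the output format), the prediction can be extracted like this:

result = pipe(messages, max_new_tokens=8)
# generated_text holds the chat history including the generated assistant
# turn (assumption: the reply is exactly one of the three label strings).
label = result[0]["generated_text"][-1]["content"].strip()
print(label)  # e.g. "no_reasoning"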

Training Details

Method: Supervised fine-tuning with TRL's SFTTrainer (see the sketch after this list)

Base model: Gemma 3 270M (268M parameters, BF16)

Objective: Multi-class classification with labels (no_reasoning, low_reasoning, high_reasoning)

Dataset: Custom dataset of queries annotated with reasoning levels.
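
The card does not publish the training script. A minimal sketch of an SFT run matching the library versions listed below might look like this; the data file, column layout, and hyperparameters are assumptions.

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumed layout: each JSONL record is a chat whose user turn is the query
# and whose assistant turn is the reasoning label.
dataset = load_dataset("json", data_files="router_data.jsonl", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-270m",  # base model named by this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="Router"),
)
trainer.train()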

Limitations & Bias

May misclassify borderline queries (e.g., between low_reasoning and high_reasoning).

Performance depends on the diversity of the training data.

Inherits any biases from the base Gemma 3 270M model.

Framework versions

  • TRL: 0.21.0
  • Transformers: 4.55.1
  • PyTorch: 2.6.0+cu124
  • Datasets: 4.0.0
  • Tokenizers: 0.21.4

Citations

@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
@misc{gemma_2025,
    title        = {{Gemma 3}},
    author       = {{Gemma Team}},
    year         = 2025,
    publisher    = {Google DeepMind},
    howpublished = {\url{https://arxiv.org/abs/2503.19786}}
}
