---
license: mit
datasets:
- nvidia/OpenMathInstruct-2
base_model:
- meta-llama/Meta-Llama-3-8B
library_name: transformers
---
#### MuToR: Multi-Token Prediction with Registers
Arxiv: [https://arxiv.org/abs/2505.10518](https://arxiv.org/abs/2505.10518)
**TL;DR**: **MuToR** is a simple, plug-and-play approach for multi-token prediction.
It interleaves dummy register tokens into the sequence to predict multiple future targets, enriching the supervisory signal and improving performance across diverse settings and modalities. The register tokens are discarded at inference, leaving generation speed unchanged.
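The mechanism can be illustrated with a toy sketch. This is only an illustration of the interleave-then-discard idea, not the actual implementation: the register-token id, placement (here, one register after every token), and prediction offsets are assumptions; see the paper and code for the real design.

```python
REGISTER_ID = -1  # hypothetical placeholder id for the dummy register token


def insert_registers(token_ids):
    """Training-time: interleave a register token after each input token.

    Each register token provides an extra position that can be supervised
    to predict a token further in the future, enriching the training signal.
    """
    out = []
    for tok in token_ids:
        out.append(tok)
        out.append(REGISTER_ID)
    return out


def strip_registers(token_ids):
    """Inference-time: drop all register tokens, so the generated
    sequence (and generation speed) is unchanged."""
    return [tok for tok in token_ids if tok != REGISTER_ID]


seq = [10, 11, 12]
train_seq = insert_registers(seq)        # registers interleaved for training
assert strip_registers(train_seq) == seq  # registers leave no trace at inference
```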
---
#### Model Description
This model is a fine-tuned version of **Llama 3 8B**, trained with the MuToR method for 5 epochs on the 1M-MATH training split.
Please refer to our [code](https://github.com/nasosger/MuToR) for guidelines on using the models and reproducing our results.