---
license: mit
datasets:
  - nvidia/OpenMathInstruct-2
base_model:
  - meta-llama/Meta-Llama-3-8B
library_name: transformers
---

# MuToR: Multi-Token Prediction with Registers

arXiv: https://arxiv.org/abs/2505.10518

**TL;DR:** MuToR is a simple, plug-and-play approach to multi-token prediction. It interleaves dummy register tokens into the input sequence and trains them to predict multiple future targets, enriching the supervisory signal and improving performance across diverse settings and modalities. The register tokens are discarded at inference, leaving generation speed unchanged.
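As a rough illustration of the core idea (not the paper's actual implementation; the placeholder register id and the interleaving stride below are hypothetical), register tokens can be inserted into the training sequence and simply dropped at inference, so decoding is unaffected:

```python
REGISTER_ID = -1  # hypothetical placeholder id for the dummy register token


def interleave_registers(token_ids, stride=2):
    """Insert a register token after every `stride` real tokens (illustrative only).

    During training, the positions holding REGISTER_ID would be supervised
    to predict tokens further in the future, enriching the training signal.
    """
    out = []
    for i, tok in enumerate(token_ids, start=1):
        out.append(tok)
        if i % stride == 0:
            out.append(REGISTER_ID)
    return out


def drop_registers(token_ids):
    """At inference, registers are discarded, so generation speed is unchanged."""
    return [t for t in token_ids if t != REGISTER_ID]


seq = [10, 11, 12, 13, 14]
train_seq = interleave_registers(seq)        # [10, 11, -1, 12, 13, -1, 14]
assert drop_registers(train_seq) == seq      # inference sees the original sequence
```

The interleaving pattern and loss targets in the actual method follow the paper; this sketch only shows why discarding the registers leaves the generated sequence, and hence the decoding cost, untouched.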


## Model Description

This model is a fine-tuned version of Meta-Llama-3-8B. It was fine-tuned with the MuToR method for 5 epochs on the 1M-MATH training split. Please refer to our code for guidance on using the models and reproducing our results.