TMLR-Group-HF/GT-Llama-3.2-3B-Instruct

This is the Llama-3.2-3B-Instruct model trained with the GRPO ground-truth (GT) reward method on the MATH training set.

If you are interested in Co-Reward, you can find more details in our GitHub repo: https://github.com/tmlr-group/Co-Reward.
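The model can be loaded with the Hugging Face `transformers` library like any Llama-3.2 instruct checkpoint. A minimal sketch is below; the example question and generation settings are illustrative, not part of the release:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/GT-Llama-3.2-3B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

# A sample MATH-style question; the phrasing is illustrative.
messages = [
    {"role": "user", "content": "Solve for x: 2x + 3 = 11. Show your reasoning."},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```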

Citation

@article{zhang2025coreward,
      title={Co-Reward: Self-supervised Reinforcement Learning for Large Language Model Reasoning via Contrastive Agreement},
      author={Zizhuo Zhang and Jianing Zhu and Xinmu Ge and Zihua Zhao and Zhanke Zhou and Xuan Li and Xiao Feng and Jiangchao Yao and Bo Han},
      journal={arXiv preprint arXiv:2508.00410},
      year={2025}
}
Model size: 3.61B parameters
Tensor type: BF16 (safetensors)