---
license: llama2
datasets:
- LoRID-Math/MATH
language:
- en
metrics:
- accuracy
base_model:
- meta-llama/Llama-2-7b-hf
pipeline_tag: text-generation
library_name: peft
tags:
- math
- reasoning
---
# LoRID: A Reasoning Distillation Method via Multi-LoRA Interaction
📃 [Paper](https://arxiv.org/abs/2508.13037) • 💻 [Code](https://github.com/Xinhe-Li/LoRID) • 🤗 [HF Repo](https://huggingface.co/LoRID-Math)
## Abstract
This repository hosts the models for "[Can Large Models Teach Student Models to Solve Mathematical Problems Like Human Beings? A Reasoning Distillation Method via Multi-LoRA Interaction](https://arxiv.org/abs/2508.13037)" (IJCAI 2025).
## Key Contributions
- We focus on the mathematical reasoning distillation task and propose a novel method, **LoRID**, inspired by the way human teachers and students interact.
- We introduce knowledge during data augmentation and propose multi-LoRA interaction during model distillation, both of which improve the student model's reasoning ability.
- Experimental results show that, through the interaction between System 1 and System 2, **LoRID** outperforms previous state-of-the-art approaches and can be easily and effectively integrated into any Chain-of-Thought distillation method.
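As a quick-start, the sketch below shows one way to load this card's LoRA adapter on top of the base model with `peft`. The adapter repo id `LoRID-Math/MATH-LLaMA-2-7B-KG` is inferred from this card and may differ; check the [HF Repo](https://huggingface.co/LoRID-Math) for the exact id.

```python
def load_math_llama(adapter_id="LoRID-Math/MATH-LLaMA-2-7B-KG",
                    base_id="meta-llama/Llama-2-7b-hf"):
    """Load the Llama-2-7B base model and attach the LoRA adapter.

    Note: the adapter id above is an assumption inferred from this card's
    filename, not a confirmed repo path. Access to the base model requires
    accepting the Llama 2 license on Hugging Face.
    """
    # Imports are local so the sketch can be read without the libraries installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    model = PeftModel.from_pretrained(base, adapter_id)  # attach LoRA weights
    return model, tokenizer
```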
## Citation
If you find this work helpful, please cite:
```bibtex
@misc{li2025largemodelsteachstudent,
  title={Can Large Models Teach Student Models to Solve Mathematical Problems Like Human Beings? A Reasoning Distillation Method via Multi-LoRA Interaction},
  author={Xinhe Li and Jiajun Liu and Peng Wang},
  year={2025},
  eprint={2508.13037},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2508.13037},
}
```