
Doula Isham Rashik Hasan

disham993

AI & ML interests

Machine Learning, Deep Learning, Natural Language Processing


Organizations

scikit-learn · Keras Dreambooth Event · Hugging Face Discord Community

disham993's activity

reacted to m-ric's post with 🚀🔥👍 3 days ago
Less is More for Reasoning (LIMO): a 32B model fine-tuned with 817 examples can beat o1-preview on math reasoning! 🤯

Do we really need o1's huge RL procedure to see reasoning emerge? It seems not.
Researchers from Shanghai Jiao Tong University just demonstrated that carefully selected examples can boost math performance in large language models using supervised fine-tuning (SFT): no huge datasets or RL procedures needed.

Their procedure allows Qwen2.5-32B-Instruct to jump from 6.5% to 57% on AIME and from 59% to 95% on MATH, while using only 1% of the data used in previous approaches.

⚡ The Less-is-More Reasoning Hypothesis:
‣ Minimal but precise examples that showcase optimal reasoning patterns matter more than sheer quantity
‣ Pre-trained knowledge combined with sufficient inference-time compute is what levels up math skills

➡️ Core techniques:
‣ High-quality reasoning chains with self-verification steps
‣ 817 handpicked problems that encourage deeper reasoning (a hypothetical record format is sketched after this list)
‣ Enough inference-time computation to allow extended reasoning
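
For concreteness, a single record in such a curated dataset might look like the following. This is a hypothetical illustration of the format (problem, self-verifying reasoning chain, answer), not an actual LIMO example:

```python
# Hypothetical shape of one curated training record: a hard problem paired
# with a long reasoning chain that checks its own intermediate steps.
example = {
    "problem": "Find the number of ordered pairs (a, b) of integers with a*b = 2024.",
    "reasoning": (
        "Factorize: 2024 = 2^3 * 11 * 23, so it has (3+1)(1+1)(1+1) = 16 "
        "positive divisors. Check: 4 * 2 * 2 = 16. Each positive pair (a, b) "
        "also yields a negative pair (-a, -b), giving 32 ordered pairs. "
        "Verify with a smaller case: 6 = 2 * 3 has 4 positive divisors and "
        "8 ordered integer pairs, matching the doubling rule."
    ),
    "answer": "32",
}
```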

💪 Efficiency gains:
‣ Only 817 examples instead of 100k+
‣ 40.5% absolute improvement across 10 diverse benchmarks, outperforming models trained on 100x more data

This really challenges the notion that SFT leads to memorization rather than generalization! And opens up reasoning to GPU-poor researchers 🚀

Read the full paper here 👉  LIMO: Less is More for Reasoning (2502.03387)
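
As a rough illustration of how a small-data SFT run like this could be set up with Hugging Face TRL; the file name, dataset format, and hyperparameters below are assumptions for the sketch, not the paper's actual recipe:

```python
# Minimal SFT sketch in the spirit of the post: fine-tune on a tiny curated
# dataset instead of 100k+ examples. File name and hyperparameters are
# illustrative assumptions, not the paper's configuration.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical JSONL file of ~800 curated problems, each with a "text" field
# concatenating problem, reasoning chain, and answer.
dataset = load_dataset("json", data_files="limo_817_examples.jsonl", split="train")

training_args = SFTConfig(
    output_dir="limo-sft",
    num_train_epochs=3,              # a few passes over the tiny dataset
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # effective batch size of 8
    learning_rate=1e-5,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-32B-Instruct",  # base model named in the post
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```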
upvoted an article 3 days ago

Introducing the Synthetic Data Generator - Build Datasets with Natural Language

upvoted an article 21 days ago