OLMo-2-7B-SFT-Intuitor-MATH-1EPOCH
This repository contains the OLMo-2-7B-SFT-Intuitor-MATH-1EPOCH model, an Intuitor-fine-tuned version of allenai/OLMo-2-1124-7B-SFT trained for one epoch on the MATH dataset, as presented in the paper Learning to Reason without External Rewards.
Intuitor is a reinforcement learning method that fine-tunes Large Language Models (LLMs) using self-certainty—the model’s own internal confidence—as the sole reward. It is built on a novel paradigm called Reinforcement Learning from Internal Feedback (RLIF), enabling models to learn without any external rewards, gold labels, or verifiers.
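As a concrete illustration, the sketch below scores a sampled response with a self-certainty-style signal. It assumes that self-certainty is the KL divergence from a uniform distribution over the vocabulary to the model's next-token distribution, averaged over the response tokens; the function name, tokenization details, and scoring loop are illustrative, and the official implementation lives in the GitHub repository linked in the Usage section.
import math
import torch
import torch.nn.functional as F

def self_certainty_score(model, tokenizer, prompt: str, response: str) -> float:
    """Mean KL(U || p) over response tokens, where U is uniform over the vocabulary.
    Higher values mean the next-token distributions are more peaked (more confident).
    Note: tokenizing prompt + response jointly is a simplification; token boundaries
    at the prompt/response seam may differ from those produced during sampling."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(full_ids).logits  # (1, seq_len, vocab_size)
    # Distributions that predict the response tokens (positions prompt_len .. seq_len - 1)
    log_probs = F.log_softmax(logits[:, prompt_len - 1 : -1, :].float(), dim=-1)
    vocab_size = log_probs.shape[-1]
    # KL(U || p) = -log|V| - (1/|V|) * sum_j log p_j, computed per position
    kl_per_token = -math.log(vocab_size) - log_probs.mean(dim=-1)
    return kl_per_token.mean().item()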
Usage
You can load this model using the Hugging Face transformers library. For detailed instructions on how to use, train, and evaluate the model, please refer to the official GitHub repository:
GitHub Repository: sunblaze-ucb/Intuitor
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the model and tokenizer
model_name = "sunblaze-ucb/OLMo-2-7B-SFT-Intuitor-MATH-1EPOCH"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
# Example for text generation
prompt = "Question: What is 2 + 2?
Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=50,
do_sample=True,
temperature=0.7,
top_p=0.9,
eos_token_id=tokenizer.eos_token_id
)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
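Since the base checkpoint is an SFT (chat) model, its tokenizer may ship a chat template. If so, formatting the input with apply_chat_template instead of a raw "Question: ... Answer:" string is usually closer to the training format. The snippet below is a sketch and has not been verified against this specific checkpoint:
# Optional: use the tokenizer's chat template if one is defined
messages = [{"role": "user", "content": "What is 2 + 2?"}]
if tokenizer.chat_template is not None:
    input_ids = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,  # append the assistant header so the model starts its reply
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(input_ids, max_new_tokens=50, do_sample=True, temperature=0.7, top_p=0.9)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))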
Citation
If you find our work helpful or inspiring, please feel free to cite it:
@article{zhao2025learning,
title={Learning to Reason without External Rewards},
author={Zhao, Xuandong and Kang, Zhewei and Feng, Aosong and Levine, Sergey and Song, Dawn},
journal={arXiv preprint arXiv:2505.19590},
year={2025}
}