Update README.md
README.md
CHANGED
---
license: mit
---
<div align="center">

# Open Reasoner Zero

<img src="figure/logo.jpg" width="300"/>

<div>
<!-- I want to use a tide emoji here -->

An Open Source Approach to Scaling Up Reinforcement Learning on the Base Model
</div>
</div>

<div align="center" style="line-height: 1;">
<a href="https://huggingface.co/Open-Reasoner-Zero" target="_blank"><img alt="Hugging Face"
src="https://img.shields.io/badge/HuggingFace-fcd022?style=for-the-badge&logo=huggingface&logoColor=000&labelColor"/></a>

<a href="https://yasminezhang.notion.site/Open-Reasoner-Zero-19e12cf72d418007b9cdebf44b0e7903" target="_blank">
<img alt="Notion Page"
src="https://img.shields.io/badge/Notion-%23000000.svg?style=for-the-badge&logo=notion&logoColor=white"/></a>

<br>
<a href="https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/ORZ_paper.pdf"><b>Paper PDF Link [WIP]</b></a>
</div>

<div>
<br>

</div>



*Figure 1 | Evaluation performance of Open-Reasoner-Zero-{7B, 32B}. We report the average accuracy on each benchmark, computed over 16 responses per question. Notably, Open-Reasoner-Zero-32B outperforms DeepSeek-R1-Zero-Qwen-32B on the GPQA Diamond benchmark while requiring only 1/30 of the training steps. We are continuing to scale up these RL settings until this preprint is released, as there is no sign of saturation.*


*Figure 2 | Training-time scale-up in both reward and response length for Open-Reasoner-Zero-{7B, 32B}.*

## Overview
🌊 We introduce **Open-Reasoner-Zero**, the first open-source implementation of large-scale reasoning-oriented RL training, focusing on scalability, simplicity, and accessibility.

To enable broader participation in this pivotal moment we are witnessing, and to accelerate research towards artificial general intelligence (AGI),
we release our source code, parameter settings, training data, and model weights.
Please refer to our [paper](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/ORZ_paper.pdf) for more insights.

**Let the Reasoner-Zero tide rise!**

## Releases 📦

<strong>[2025/02/18]</strong>
We release `Open-Reasoner-Zero`.

As part of this release, we open-source:
- 📄 [Paper](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/ORZ_paper.pdf) presenting our comprehensive analysis and insights into Reasoner-Zero training
- 🤗 HF Models [`Open-Reasoner-Zero-7B`](https://huggingface.co/Open-Reasoner-Zero/Open-Reasoner-Zero-7B) and [`Open-Reasoner-Zero-32B`](https://huggingface.co/Open-Reasoner-Zero/Open-Reasoner-Zero-32B)
- 🎁 [`Our curated 57k training data`](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/tree/main/data)
- 🚀 [Training Scripts](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/tree/main/playground) to start your own Reasoner-Zero journey!

## Key Features in Codebase 🗝️

- Adopts a single-controller trainer design that is flexible and researcher-friendly.
- Colocates training and generation on the same GPUs to maximize GPU utilization.

## Getting Started 🚀
### Installation & Training Scripts
We release our [Dockerfile](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/docker/Dockerfile) in the [docker](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/tree/main/docker) folder to facilitate reproducibility of our training.

To install the package, run:
```bash
pip install -e .
```

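If you would rather work inside the released Docker image, a minimal sketch of building and entering the container is below. The image tag `orz:latest`, the `/workspace` mount point, and GPU passthrough via the NVIDIA Container Toolkit are our assumptions, not part of the repo; adjust them to your setup.

```bash
# Build the image from the released Dockerfile (the tag "orz:latest" is arbitrary)
docker build -f docker/Dockerfile -t orz:latest .

# Start an interactive container with GPU access and the repo mounted at /workspace,
# then run `pip install -e .` inside it as above (requires the NVIDIA Container Toolkit)
docker run --gpus all -it --rm -v "$(pwd)":/workspace -w /workspace orz:latest bash
```
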
#### Start Orz-7B PPO Training
Debug run on a single node:
```bash
DEBUG_MODE=True python -m playground.orz_7b_ppo
```

Multi-node training:

First, on the master node, run:
```bash
ray start --head
```

Then, on each of the other nodes, run:
```bash
ray start --address='<master-node-ip>:<master-node-port>'
```

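Optionally, before launching the job you can check that every node has joined the Ray cluster; `ray status` is the standard Ray CLI command for this (the exact output depends on your cluster size and hardware).

```bash
# On the master node: list connected nodes and aggregate resources (CPUs, GPUs, memory)
ray status
```
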
Then, back on the master node, launch training:
```bash
python -m playground.orz_7b_ppo
```

Your training log will be shown in the master node terminal.

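If you also want to keep the log on disk, piping the launch command through `tee` works with any standard shell; the log file name below is only an example.

```bash
# Save the training log to a file while still streaming it to the terminal
python -m playground.orz_7b_ppo 2>&1 | tee orz_7b_ppo.log
```
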
#### Start Orz-32B PPO Training
Training runs on 8 nodes:

First, on the master node, run:
```bash
ray start --head
```

Then, on each of the other nodes, run:
```bash
ray start --address='<master-node-ip>:<master-node-port>'
```

Then, back on the master node, launch training:
```bash
python -m playground.orz_32b_ppo
```

Your training log will be shown in the master node terminal.

### Data

We release all 57k curated high-quality training samples in the [`data`](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/tree/main/data) folder.

The details of how the data was collected are described in our [paper](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/ORZ_paper.pdf).

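For a quick look at what is included, you can clone the repository and list the data files (standard git and shell commands; file names and formats are whatever ships in the `data` folder).

```bash
# Clone the repository and inspect the released training data
git clone https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero.git
ls -lh Open-Reasoner-Zero/data/
```
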
## Acknowledgements

- This work was supported by computing resources and valuable feedback provided by [StepFun](https://www.stepfun.com/) and Tsinghua University.
- Our training framework is built on [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF), [vLLM](https://github.com/vllm-project/vllm), [DeepSpeed](https://github.com/deepspeedai/DeepSpeed) and [Ray](https://github.com/ray-project/ray).
- Our models are based on [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) and [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B).
- We thank [Project Numina](https://projectnumina.ai/) and [Tulu3](https://allenai.org/blog/tulu-3-technical) for their open-sourced data collections.

## Advertisement Time 📣

We are hiring talented researchers and engineers to join our team. If you are interested in our project and would like to contribute to scaling up reasoners all the way to AGI, please feel free to reach out to us at [email protected].


[](https://star-history.com/#Open-Reasoner-Zero/Open-Reasoner-Zero&Timeline)

## Citation

```bibtex
@misc{OpenReasonerZero2025,
  title={Open-Reasoner-Zero: An Open Source Approach to Scaling Reinforcement Learning on the Base Model},
  author={Jingcheng Hu and Yinmin Zhang and Qi Han and Daxin Jiang and Xiangyu Zhang and Heung-Yeung Shum},
  year={2025},
  howpublished={\url{https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero}},
}
```