Geraldxm and nielsr (HF Staff) committed
Commit 1a56eb4 · verified · 1 Parent(s): 6874618

Improve model card: Add pipeline tag, library name, and paper link (#1)


- Improve model card: Add pipeline tag, library name, and paper link (45905a768e2c61da0ed905bb179dada7bd655449)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +5 -4
README.md CHANGED
@@ -1,17 +1,18 @@
  ---
  license: mit
+ pipeline_tag: text-generation
+ library_name: transformers
  ---
- ## TMLR-Group-HF/Self-Certainty-Qwen3-8B-Base

- This is the Qwen3-8B-Base model trained by Self Certainty method using MATH training set.
+ This repository contains the `Self-Certainty-Qwen3-8B-Base` model, which is a Qwen3-8B-Base model fine-tuned using the Self-Certainty method on the MATH training set, as described in the paper [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410).

- If you are interested in Co-Reward, you can find more details on our Github Repo [https://github.com/tmlr-group/Co-Reward].
+ If you are interested in the Co-rewarding framework, you can find more details and the full implementation on the GitHub repository: [https://github.com/tmlr-group/Co-rewarding](https://github.com/tmlr-group/Co-rewarding).

  ## Citation

  ```
  @article{zhang2025coreward,
- title={Co-Reward: Self-supervised Reinforcement Learning for Large Language Model Reasoning via Contrastive Agreement},
+ title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zizhuo Zhang and Jianing Zhu and Xinmu Ge and Zihua Zhao and Zhanke Zhou and Xuan Li and Xiao Feng and Jiangchao Yao and Bo Han},
  journal={arXiv preprint arXiv:2508.00410}
  year={2025},
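
For reference, a minimal usage sketch for the card this commit updates: the new `library_name: transformers` and `pipeline_tag: text-generation` metadata imply loading the checkpoint through the standard `transformers` text-generation path. The repo id below is taken from the card title; the `AutoModelForCausalLM`/`AutoTokenizer` calls and the math prompt are illustrative assumptions, not part of the commit.

```python
# Minimal sketch of loading the model behind this card with the
# `transformers` library declared in the new `library_name` field.
# Assumes the repo id from the card title and standard Qwen3 support
# in a recent transformers release; the prompt is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/Self-Certainty-Qwen3-8B-Base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # place layers on available GPU(s)/CPU (needs accelerate)
)

# Plain text-generation, matching the pipeline_tag added in this commit.
prompt = "Question: If 3x + 5 = 20, what is x? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```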