---
license: apache-2.0
---

<div align="center">

<h1> R1-Router: Learning to Route Queries across Knowledge Bases for Step-wise Retrieval-Augmented Reasoning </h1>

<h5 align="center">

<a href='https://arxiv.org/abs/2505.22095'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>
<a href='https://huggingface.co/hmhm1229/R1-Router'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue'></a>
<a href='https://huggingface.co/hmhm1229/R1-Router-3B'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue'></a>

Chunyi Peng<sup>1,3</sup>,
Zhipeng Xu<sup>1</sup>,
Zhenghao Liu<sup>1</sup>,
Yishan Li<sup>3</sup>,
Yukun Yan<sup>2</sup>,
Zhiyuan Liu<sup>2</sup>,
Yu Gu<sup>1</sup>,
Minghe Yu<sup>1</sup>,
Ge Yu<sup>1</sup>,
Maosong Sun<sup>2</sup>

<sup>1</sup>Northeastern University, <sup>2</sup>Tsinghua University, <sup>3</sup>ModelBest Inc.

</h5>

<h5 align="center"> If you find this project useful, please give us a star 🌟. </h5>

</div>

## News
- 8.22: We have uploaded [R1-Router-3B](https://huggingface.co/hmhm1229/R1-Router-3B).

## Environment
For the training, answer generation, and evaluation processes:
```bash
conda create -n router python=3.11
conda activate router
pip install -r requirements_router.txt
```
For the retriever and corpus construction processes:
```bash
conda create -n retriever python=3.11
conda activate retriever
pip install -r requirements_retriever.txt
```

## Corpora Construction
For the text corpus, you can download `enwiki-20241020` from [Hugging Face](https://huggingface.co/datasets/hmhm1229/enwiki-20241020), then preprocess and index it with the following commands:
```bash
7z x enwiki-20241020-pages-articles-multistream.xml.zip.001
conda activate retriever
wikiextractor enwiki-20241020-pages-articles-multistream.xml.bz2 -o wiki_extracted
python wiki_preprocess.py
```
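
For intuition, a preprocessing step like `wiki_preprocess.py` typically splits the extracted articles into passage-sized records for the retriever's index. The sketch below is a hypothetical illustration only, not the actual script: it parses the default `<doc ...>` blocks that wikiextractor emits and writes 100-word chunks as JSONL. The chunk size, field names, and output path are all assumptions.

```python
# Hypothetical passage-chunking sketch, NOT the repo's wiki_preprocess.py.
# It reads wikiextractor's default <doc ...>...</doc> output and writes
# fixed-size word chunks as JSONL; all names/paths here are illustrative.
import json
import re
from pathlib import Path

CHUNK_WORDS = 100  # assumed passage length; the real script may differ

doc_re = re.compile(
    r'<doc id="(?P<id>[^"]+)"[^>]*title="(?P<title>[^"]+)">\n(?P<text>.*?)\n</doc>',
    re.S,
)

with open("wiki_passages.jsonl", "w", encoding="utf-8") as out:
    for part in Path("wiki_extracted").rglob("wiki_*"):
        for m in doc_re.finditer(part.read_text(encoding="utf-8")):
            words = m["text"].split()
            for i in range(0, len(words), CHUNK_WORDS):
                out.write(json.dumps({
                    "id": f'{m["id"]}-{i // CHUNK_WORDS}',
                    "title": m["title"],
                    "text": " ".join(words[i:i + CHUNK_WORDS]),
                }) + "\n")
```
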
For the image corpus, you can directly download [M-BEIR](https://huggingface.co/datasets/TIGER-Lab/M-BEIR). To embed and index it, you can follow the [UniIR repository](https://github.com/TIGER-AI-Lab/UniIR).

For the table corpus, you can download, embed, and index Open-WikiTable following its [repository](https://github.com/sean0042/Open_WikiTable), or you can directly download the version we have already preprocessed from [here](https://huggingface.co/hmhm1229/table-retriever).

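Since both corpora are hosted on Hugging Face, one convenient way to fetch them is `huggingface_hub.snapshot_download`, as sketched below. The `local_dir` paths are arbitrary examples, not paths this repo requires.

```python
# Minimal download sketch using the huggingface_hub client;
# the local_dir paths are illustrative choices, not repo conventions.
from huggingface_hub import snapshot_download

# Image corpus (M-BEIR is a dataset repo).
snapshot_download(repo_id="TIGER-Lab/M-BEIR", repo_type="dataset",
                  local_dir="corpora/m-beir")

# Preprocessed table corpus / retriever checkpoint (a model repo).
snapshot_download(repo_id="hmhm1229/table-retriever",
                  local_dir="corpora/open-wikitable")
```
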
## Retrievers Preparation
For the Text-Image Retriever, you can directly download [UniIR](https://huggingface.co/TIGER-Lab/UniIR).

For the Table Retriever, you can train it with the help of the [Open-WikiTable repository](https://github.com/sean0042/Open_WikiTable), or you can download it directly from [here](https://huggingface.co/hmhm1229/table-retriever).

## Datasets
We have prepared all the text datasets in `./datasets`. The images need to be downloaded separately (a quick loading sketch follows the list):
- `InfoSeek:` InfoSeek images can be downloaded from [OVEN](https://github.com/open-vision-language/oven/tree/main/image_downloads)
- `Dyn-VQA:` Dyn-VQA images can be downloaded from [DynVQA_en.202412](https://github.com/Alibaba-NLP/OmniSearch/blob/main/dataset/DynVQA_en/DynVQA_en.202412.jsonl)
- `WebQA:` WebQA images can be downloaded from [Google Drive](https://drive.google.com/drive/folders/19ApkbD5w0I5sV1IeQ9EofJRyAjKnA7tb)

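The Dyn-VQA annotation file linked above is plain JSONL, so it can be inspected with a few lines of Python. This snippet only counts and prints records; we deliberately do not assume specific field names, so check the printed schema yourself.

```python
# Quick peek at the Dyn-VQA annotation file; the schema is whatever
# the printed record shows -- no field names are assumed here.
import json

with open("DynVQA_en.202412.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

print(f"{len(records)} examples")
print(json.dumps(records[0], indent=2, ensure_ascii=False))  # inspect one record
```
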
## Training
If you do not want to train the model, you can download [R1-Router](https://huggingface.co/hmhm1229/R1-Router) and skip this section to [Evaluation](#evaluation).

### Data Synthesis
If you want to use the ready-to-use synthetic data directly, you can skip this subsection to [Step-GRPO Training](#step-grpo-training).

First, we synthesize the training data step by step:
```bash
bash src/data_synthesis/data_synthesis.sh
```

### Step-GRPO Training
Our training framework is based on [EasyR1](https://github.com/hiyouga/EasyR1); all you need to do is download it and replace some of its files with the ones in `./Easy-R1`.
Then start training with the command:
```bash
conda activate router
bash examples/run_qwen2_5_vl_7b_stepgrpo.sh
```
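
Once a checkpoint is trained or downloaded, it can be loaded with `transformers` like any Qwen2.5-VL model (the base architecture suggested by the training script name above). The snippet below is a minimal text-only generation sketch under that assumption; it does not reproduce R1-Router's full step-wise retrieval loop.

```python
# Minimal generation sketch. ASSUMES the checkpoint follows the
# Qwen2.5-VL format (as run_qwen2_5_vl_7b_stepgrpo.sh suggests);
# the full R1-Router pipeline additionally routes sub-queries to
# retrievers, which this snippet does not do.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "hmhm1229/R1-Router", torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained("hmhm1229/R1-Router")

messages = [{"role": "user",
             "content": [{"type": "text",
                          "text": "Who wrote The Left Hand of Darkness?"}]}]
prompt = processor.apply_chat_template(messages, tokenize=False,
                                       add_generation_prompt=True)
inputs = processor(text=[prompt], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(processor.batch_decode(output[:, inputs.input_ids.shape[1]:],
                             skip_special_tokens=True)[0])
```
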
## Evaluation
We provide the evaluation pipeline for R1-Router:
```bash
bash evaluation.sh
```
Alternatively, you can evaluate the results we have already provided:
```bash
conda activate router
cd src
python evaluate.py --dataset_name all --method "r1-router3"
```

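For intuition on the scoring, open-domain QA benchmarks like these are usually reported with normalized exact match and token-level F1. The sketch below shows those standard metrics; it is illustrative and not necessarily the exact scoring that `src/evaluate.py` implements.

```python
# Illustrative QA metrics (normalized EM and token-level F1),
# NOT necessarily what src/evaluate.py computes.
import re
import string
from collections import Counter

def normalize(s: str) -> str:
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)   # drop English articles
    return " ".join(s.split())              # collapse whitespace

def exact_match(pred: str, gold: str) -> float:
    return float(normalize(pred) == normalize(gold))

def f1(pred: str, gold: str) -> float:
    p, g = normalize(pred).split(), normalize(gold).split()
    common = Counter(p) & Counter(g)        # per-token overlap counts
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))   # 1.0
print(round(f1("tower in Paris", "the Eiffel Tower"), 3))  # 0.4
```
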
## Acknowledgement
Our work is built on the following codebases, and we are deeply grateful for their contributions:
- [EasyR1](https://github.com/hiyouga/EasyR1)
- [UniIR](https://huggingface.co/TIGER-Lab/UniIR)
- [Open-WikiTable](https://github.com/sean0042/Open_WikiTable)
- [OmniSearch](https://github.com/Alibaba-NLP/OmniSearch)

## Citation
We would appreciate your citation if you find our paper useful for your research!
```
@article{peng2025r1,
  title={Learning to Route Queries across Knowledge Bases for Step-wise Retrieval-Augmented Reasoning},
  author={Peng, Chunyi and Xu, Zhipeng and Liu, Zhenghao and Li, Yishan and Yan, Yukun and Wang, Shuo and Liu, Zhiyuan and Gu, Yu and Yu, Minghe and Yu, Ge and Sun, Maosong},
  year={2025},
  url={https://arxiv.org/abs/2505.22095}
}
```

## Contact Us
If you have any questions, suggestions, or bug reports, please email us; we will do our best to help you.
```

```