
CycleResearcher: Automated Research via Reinforcement Learning with Iterative Feedback

Homepage: https://wengsyx.github.io/Researcher/

This model is the original version of CycleResearcher, and its results are consistent with those reported in the paper. However, because it carries some potential for misuse, downloads of this model require manual review. Unless you provide a detailed research plan and specific reasons for needing this model, we will not approve your request to download or redistribute it.

Alternatively, you can directly download the version of the model with additional safety alignment: https://huggingface.co/WestlakeNLP/CycleResearcher-72B

For the safe version, you only need to provide the required information, and your download will be approved automatically.

Model Specifications

| Model Name | Pre-training Language Model | HF Link |
|------------|-----------------------------|---------|
| CycleResearcher-ML-12B | Mistral-Nemo-Instruct-2407 | 🤗 link |
| CycleResearcher-ML-72B | Qwen2.5-72B-Instruct | 🤗 link |
| CycleResearcher-ML-123B | Mistral-Large-2 | 🤗 link |

Model Info

The CycleResearcher model series includes two main variants:

  1. ML Series: Specifically trained for machine learning research, including computer vision (CV), natural language processing (NLP), and multimedia (MM)
  2. Science Series: Extended to broader scientific domains (Coming soon)

All models have undergone extensive training on our Research-8k dataset and are optimized using the CycleReviewer feedback loop. Under our license, these models and their derivatives cannot be used to generate papers without proper disclosure of AI assistance. We also provide Fast-DetectGPT-based tools to detect potential misuse of these models.

Model Release Date: October 2024
Knowledge Cutoff Date: October 2024

Open Source License

The code in this repository is open-sourced under the Apache-2.0 license. The model weights are open-sourced under the CycleResearcher-License.

Model Performance

Results on research paper generation evaluated by CycleReviewer:

| Paper Type | Source | Avg Min Score ↑ | Avg Max Score ↑ | Avg Score ↑ | Accept Rate |
|------------|--------|-----------------|-----------------|-------------|-------------|
| Conference Accept Papers† | Human Expert | 3.91 | 6.98 | 5.69 | 100.00% |
| Preprint Papers | Human Expert | 3.24 | 6.62 | 5.24 | 29.63% |
| AI Scientist | AI | 2.20 | 5.70 | 4.31 | 0.00% |
| CycleResearcher-12B | AI | 3.47 | 6.75 | 5.36 | 35.13% |
| CycleResearcher-72B | AI | 3.65 | 6.58 | 5.38 | 33.64% |
| CycleResearcher-123B | AI | 3.31 | 6.42 | 5.13 | 21.19% |

Detecting misuse of CycleResearcher

To ensure responsible use of our models, we implemented the Fast-DetectGPT method to classify whether a paper is machine-generated. The table below compares detection performance across formats; the human-written samples come from the test sets of Research-8k and Review-5k.

| Model | Format | Accuracy | F1 Score |
|-------|--------|----------|----------|
| CycleResearcher-12B | Paper | 98.38% | 98.37 |
| CycleResearcher-72B | Paper | 97.52% | 97.49 |
| CycleResearcher-123B | Paper | 98.88% | 98.87 |
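
For illustration, here is a minimal sketch of the analytic Fast-DetectGPT score (conditional probability curvature) computed with a Hugging Face causal LM. The scoring model used here is an assumption for demonstration; the packaged detection tools may differ in details such as the scoring model and decision threshold.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def fast_detect_gpt_score(text, model, tokenizer):
    # Conditional probability curvature (analytic form): compare the observed
    # tokens' log-likelihood against the expected log-likelihood of tokens the
    # model itself would sample at each position.
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(ids).logits[:, :-1]  # next-token distributions
    labels = ids[:, 1:]
    log_probs = torch.log_softmax(logits.float(), dim=-1)
    ll = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)  # log p(x_t | x_<t)
    probs = log_probs.exp()
    mean = (probs * log_probs).sum(-1)                   # E[log p(x~_t | x_<t)]
    var = (probs * log_probs ** 2).sum(-1) - mean ** 2   # Var[log p(x~_t | x_<t)]
    return ((ll - mean).sum() / var.sum().sqrt()).item() # higher -> more likely AI

# Example usage (scoring model is an assumption; any causal LM works):
scorer = AutoModelForCausalLM.from_pretrained("gpt2")
scorer_tok = AutoTokenizer.from_pretrained("gpt2")
score = fast_detect_gpt_score("Some paper text to check...", scorer, scorer_tok)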

Installation

pip install cycleresearcher
pip install "torch>=2.0.0"          # quote version specifiers so the shell does not treat '>' as redirection
pip install "transformers>=4.44.0"
pip install vllm                    # Optional, for faster inference

Requirements

  • Python >= 3.8
  • PyTorch >= 2.0.0
  • Transformers >= 4.44.0
  • CUDA >= 11.8 (for GPU acceleration)
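
A quick way to confirm the environment meets these requirements (a minimal sanity-check sketch):

import torch
import transformers

# Report library versions and confirm a CUDA-capable GPU is visible.
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("CUDA runtime:", torch.version.cuda)
assert torch.cuda.is_available(), "A CUDA GPU is required for practical inference"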

System Requirements

Recommended configurations for different model sizes:

| Model | Recommended Config | Minimum Config |
|-------|--------------------|----------------|
| CycleResearcher-12B | 2x H100 80G | 1x H100 80G |
| CycleResearcher-72B | 8x H100 80G | 4x H100 80G |
| CycleResearcher-123B | 8x H100 80G | 8x H100 80G |
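
These recommendations track the BF16 weight footprint of roughly 2 bytes per parameter, before activations and the KV cache are counted. A back-of-the-envelope check (parameter counts here are approximate, not official figures):

# Approximate BF16 weight memory (2 bytes/param); real usage is higher once
# activations and the KV cache are included, hence the multi-GPU configs.
for name, billions in [("12B", 12.2), ("72B", 72.7), ("123B", 123.0)]:
    print(f"CycleResearcher-{name}: ~{billions * 2:.0f} GB of weights")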

Quick Start

Using Transformers

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Initialize model
model_name = "WestlakeNLP/CycleResearcher-12B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # weights are released in BF16
    device_map="auto",
    max_memory={i: "24GiB" for i in range(torch.cuda.device_count())},  # optional per-GPU cap
)

# Generation parameters
generation_config = {
    "max_length": 19000,  # full paper drafts are long; leave ample headroom
    "temperature": 0.1,
    "top_p": 0.95,
    "pad_token_id": tokenizer.eos_token_id,  # avoid pad-token warnings during generation
    "do_sample": True,
}

# Prepare system prompt and input
system_prompt = """You are a research assistant AI tasked with generating a scientific paper based on provided literature. Follow these steps:
1. Analyze the given References. 
2. Identify gaps in existing research to establish the motivation for a new study.
3. Propose a main idea for a new research work.
4. Write the paper's main content in LaTeX format, including:
 - Title
 - Abstract
 - Introduction
 - Related Work
 - Methods
5. Generate experimental setup details in JSON format to guide researchers.
6. After receiving experimental results in JSON format, analyze them.
7. Complete the paper by writing:
 - Results
 - Discussion
 - Conclusion
 - Contributions
Ensure all content is original, academically rigorous, and follows standard scientific writing conventions."""

# Reference input should be in BibTeX format
references = """@article{Qiu2020PretrainedMF,
  title={Pre-trained models for natural language processing: A survey},
  author={Xipeng Qiu and Tianxiang Sun and Yige Xu and Yunfan Shao and Ning Dai and Xuanjing Huang},
  journal={Science China Technological Sciences},
  year={2020},
  volume={63},
  pages={1872 - 1897}
}
@article{Long2022VisionandLanguagePM,
  title={Vision-and-Language Pretrained Models: A Survey},
  author={Siqu Long and Feiqi Cao and Soyeon Caren Han and Haiqing Yang},
  journal={IJCAI},
  year={2022},
}
@inproceedings{Klicpera2019DiffusionIG,
  title={Diffusion Improves Graph Learning},
  author={Johannes Klicpera and Stefan Wei{\ss}enberger and Stephan G{\"u}nnemann},
  booktitle={Neural Information Processing Systems},
  year={2019}
}

The above content represents the relevant literature in this field. Please analyze it and provide the motivation and main idea. Then, provide the Title, Abstract, Introduction, Related Work, and Methods sections in LaTeX format.
"""

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": references}
]

# Generate paper
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, **generation_config)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
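
The system prompt above describes a two-stage protocol: the model first writes the paper's front half plus an experimental setup in JSON, and finishes the paper only after results are returned. A sketch of that second round follows; the results JSON is a placeholder, not real data.

# Hypothetical second turn: feed experimental results (placeholder values)
# back so the model writes Results, Discussion, Conclusion, and Contributions.
experimental_results = '{"main_metric": 0.93, "baseline": 0.88}'

messages += [
    {"role": "assistant", "content": generated_text},
    {"role": "user", "content": "Experimental results in JSON format:\n"
                                + experimental_results
                                + "\nPlease analyze them and complete the paper."},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, **generation_config)
completion = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)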

Using VLLM (Recommended for faster inference)

from vllm import LLM, SamplingParams

# Initialize model with VLLM
model = LLM(
    model="WestlakeNLP/CycleResearcher-12B",
    tensor_parallel_size=8,        # set to the number of GPUs available
    max_model_len=15000,
    gpu_memory_utilization=0.95,
)

# Generation parameters
sampling_params = SamplingParams(
    temperature=0.4,
    top_p=0.95,
    max_tokens=4096
)

# Generate paper (reusing the chat-formatted prompt built in the Transformers example)
outputs = model.generate([prompt], sampling_params)
generated_text = outputs[0].outputs[0].text

Input Data Format

CycleResearcher expects reference input in BibTeX format with abstracts. An abstract can be supplied either as an abstract field inside the entry or as a plain Abstract: line immediately after it:

@article{example2023,
    title = {Sample Paper Title},
    author = {Author, A. and Author, B.},
    journal = {Journal Name},
    year = {2023},
    abstract = {This is a sample abstract that provides context...}
}

@article{example2024,
    title = {Sample Paper Title},
    author = {Author, A. and Author, B.},
    journal = {Journal Name},
    year = {2024},
}
Abstract: This is a sample abstract that provides context...
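
If your references start out as structured records, a small hypothetical helper (the function name and field layout are assumptions, not part of the package) can render them into this format:

# Hypothetical helper: render reference dicts into the BibTeX-with-abstract
# string CycleResearcher expects. Field names here are illustrative.
def format_references(refs):
    entries = []
    for ref in refs:
        fields = ",\n".join(f"    {k} = {{{v}}}" for k, v in ref["fields"].items())
        entries.append(f"@{ref['type']}{{{ref['key']},\n{fields}\n}}")
        if "abstract" in ref:  # abstract may also go on a plain trailing line
            entries.append(f"Abstract: {ref['abstract']}")
    return "\n".join(entries)

references = format_references([{
    "type": "article", "key": "example2023",
    "fields": {"title": "Sample Paper Title", "author": "Author, A.", "year": "2023"},
    "abstract": "This is a sample abstract that provides context...",
}])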

Output Format

The model generates output with the following structure:

{
    'title': 'Paper title',
    'abstract': 'Paper abstract',
    'latex': 'Main paper content in LaTeX format',
    'motivation': 'Research motivation',
    'idea': 'Main research idea',
    'Experimental_Setup': 'Experiment configuration (JSON/text)',
    'Experimental_results': 'Results and findings (JSON/text)',
    'generated_text': 'Complete raw generated text'
}
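
This structure comes from the packaged pipeline. If you only have the raw generated text, here is a best-effort sketch of recovering the title and abstract with regular expressions, assuming the output uses standard LaTeX markup:

import re

def extract_fields(generated_text):
    # Best-effort extraction from raw LaTeX; returns None for absent fields.
    title = re.search(r"\\title\{(.+?)\}", generated_text, re.DOTALL)
    abstract = re.search(r"\\begin\{abstract\}(.+?)\\end\{abstract\}",
                         generated_text, re.DOTALL)
    return {
        "title": title.group(1).strip() if title else None,
        "abstract": abstract.group(1).strip() if abstract else None,
        "generated_text": generated_text,
    }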

Training and Evaluation Datasets

  • Research-8k: Contains 12,696 training and 802 test samples
  • Review-5k: Contains 4,970 papers with over 16,000 reviewer comments

To request access to these datasets, please contact [email protected].


Citation

@inproceedings{cycleresearcher2024,
  title={CycleResearcher: Improving Automated Research via Automated Review},
  author={Anonymous Authors},
  booktitle={International Conference on Learning Representations},
  year={2025}
}

Contact

For questions and feedback, please visit the project homepage: https://wengsyx.github.io/Researcher/
Note: This is a research preview release. Features and capabilities may be updated frequently.
