---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- survey
size_categories:
- n<1K
extra_gated_prompt: "You agree to not use the dataset to conduct experiments that cause harm to human subjects."
extra_gated_fields:
  Company/Organization: text
  Country: country
---

# 🧠 SurveyScope

[![GitHub](https://img.shields.io/badge/code-GitHub-black?logo=github)](https://github.com/FlagOpen/SciSage) [![Dataset](https://img.shields.io/badge/dataset-HuggingFace-blue?logo=huggingface)](https://huggingface.co/datasets/BAAI/SurveyScope) [![Paper](https://img.shields.io/badge/paper-arXiv-red?logo=arxiv)](https://arxiv.org/abs/2506.12689)

---

## 🎉 News

- ✅ **2025.06.16** — We release the paper: [**SciSage: A Multi-Agent Framework for High-Quality Scientific Survey Generation**](https://arxiv.org/abs/2506.12689) → GitHub: [FlagOpen/SciSage](https://github.com/FlagOpen/SciSage)

---

## 📚 Overview

**SurveyScope** is a high-quality benchmark tailored for evaluating the content quality of scientific surveys generated by the **SciSage** framework. It provides reliable reference material, diverse topic coverage, and human-curated citation data.

---

## 🏗️ Dataset Construction

The construction pipeline of SurveyScope is illustrated in **Figure 1** and includes the following key stages:

- **Domain Identification from Existing Benchmarks**
  We began by mining open-source academic benchmarks and identifying the domains they cover using Qwen3-32B with structured prompting (see the sketch after Figure 1).
- **Topic Augmentation with Expert & LLM Input**
  To ensure domain completeness, we incorporated suggestions from domain experts and LLMs, filling topic gaps and addressing underrepresented fields.
- **Paper Selection per Domain**
  For each domain, we manually selected highly cited and recent papers from Google Scholar to ensure high quality and recency.
*Figure 1: Overview of the SurveyScope construction pipeline.*
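The domain-identification step relies on structured prompting of Qwen3-32B. The snippet below is a minimal, illustrative sketch rather than the exact pipeline code: it assumes an OpenAI-compatible endpoint (e.g. a local vLLM server) serving Qwen3-32B, and the prompt wording and JSON output format are hypothetical.

```python
# Minimal sketch of the domain-identification step (illustrative only).
# Assumes an OpenAI-compatible endpoint (e.g. a local vLLM server) is
# serving Qwen3-32B; the prompt wording and JSON output format are
# hypothetical and not the exact prompt used for SurveyScope.
import json

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")


def identify_domains(benchmark_description: str) -> list[str]:
    """Ask the model which research domains a benchmark covers."""
    prompt = (
        "You are given the description of an academic benchmark.\n"
        "Return the research domains it covers as a JSON array of strings, "
        'e.g. ["NLP", "LLMs Safety", "Multimodal"].\n\n'
        f"Benchmark description:\n{benchmark_description}"
    )
    response = client.chat.completions.create(
        model="Qwen/Qwen3-32B",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    # Assumes the model replies with bare JSON; a real pipeline would
    # validate the output and retry on malformed responses.
    return json.loads(response.choices[0].message.content)
```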
---

## Dataset Details

| Category | Research Topic | Paper Title | Citations (as of 2025-06-05) | Year | URL | Tokens (Qwen2.5 tokenizer) |
| --- | --- | --- | --- | --- | --- | --- |
| NLP | Speech-to-text Translation | Recent Advances in Direct Speech-to-text Translation | 26 | 2023 | http://arxiv.org/abs/2306.11646 | 17,611 |
| NLP | Contrastive Pretraining in Language Processing | A Primer on Contrastive Pretraining in Language Processing: Methods, Lessons Learned and Perspectives | 103 | 2023 | http://arxiv.org/abs/2102.12982v1 | 18,920 |
| Dialogue Systems | Task-oriented Dialogue Systems | End-to-end Task-oriented Dialogue: A Survey of Tasks, Methods, and Future Directions | 22 | 2023 | http://arxiv.org/abs/2311.09008v1 | 36,991 |
| Benchmarking / Evaluation | Question Answering Datasets and Benchmarks | Modern Question Answering Datasets and Benchmarks: A Survey | 34 | 2022 | http://arxiv.org/abs/2206.15030v1 | 20,066 |
| NLP | Reasoning Shortcuts in MRC | A Survey on Measuring and Mitigating Reasoning Shortcuts in Machine Reading Comprehension | 10 | 2022 | http://arxiv.org/abs/2209.01824v2 | 31,808 |
| LLMs (General) | Confidence Estimation in LLMs | A Survey of Confidence Estimation and Calibration in Large Language Models | 75 | 2023 | http://arxiv.org/abs/2311.08298v2 | 31,777 |
| LLMs (General) | Controllable Text Generation | A Survey of Controllable Text Generation using Transformer-based Pre-trained Language Models | 402 | 2023 | http://arxiv.org/abs/2201.05337v5 | 56,627 |
| NLP | Robustness in NLP Models | Measure and Improve Robustness in NLP Models: A Survey | 143 | 2021 | http://arxiv.org/abs/2112.08313v2 | 39,066 |
| NLP | Neural Entity Linking | Neural Entity Linking: A Survey of Models Based on Deep Learning | 204 | 2022 | http://arxiv.org/abs/2006.00575v4 | 108,546 |
| NLP | Non-Autoregressive Generation in NMT | A Survey on Non-Autoregressive Generation for Neural Machine Translation and Beyond | 110 | 2023 | http://arxiv.org/abs/2204.09269v2 | 77,863 |
| LLMs Safety | Bias and Fairness in LLMs | Bias and Fairness in Large Language Models: A Survey | 705 | 2024 | http://arxiv.org/abs/2309.00770v3 | 110,790 |
| LLMs Efficiency | NLP Efficiency | Efficient Methods for Natural Language Processing: A Survey | 134 | 2023 | http://arxiv.org/abs/2209.00099v2 | 63,709 |
| LLMs Efficiency | LLM Efficiency | The Efficiency Spectrum of Large Language Models: An Algorithmic Survey | 27 | 2023 | http://arxiv.org/abs/2312.00678v2 | 70,382 |
| Medical / Biomedical | Biomedical Language Models | Pre-trained Language Models in Biomedical Domain: A Systematic Survey | 213 | 2023 | http://arxiv.org/abs/2110.05006v4 | 103,620 |
| NLP | Code-Switching in NLP | The Decades Progress on Code-Switching Research in NLP: A Systematic Survey on Trends and Challenges | 56 | 2022 | http://arxiv.org/abs/2212.09660v2 | 93,129 |
| Dialogue Systems | Proactive Dialogue Systems | A Survey on Proactive Dialogue Systems: Problems, Methods, and Prospects | 56 | 2023 | http://arxiv.org/abs/2305.02750v2 | 19,064 |
| Dialogue Systems | Reinforcement Learning in Dialogue Policy | A Survey on Recent Advances and Challenges in Reinforcement Learning Methods for Task-Oriented Dialogue Policy Learning | 49 | 2023 | http://arxiv.org/abs/2202.13675v2 | 27,542 |
| NLP | Contextualized Language Models in Machine Reading Comprehension | Machine Reading Comprehension: The Role of Contextualized Language Models and Beyond | 78 | 2020 | http://arxiv.org/abs/2005.06249v1 | 71,397 |
| NLP | Explainability in Machine Reading Comprehension | A Survey on Explainability in Machine Reading Comprehension | 51 | 2020 | http://arxiv.org/abs/2010.00389v1 | 26,035 |
| LLMs (General) | Chain of Thought Reasoning in LLMs | Navigate through Enigmatic Labyrinth A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future | 228 | 2023 | http://arxiv.org/abs/2309.15402v3 | 59,776 |
| LLMs (General) | In-context Learning in LLMs | A Survey on In-context Learning | 1,892 | 2022 | https://arxiv.org/abs/2301.00234 | 35,769 |
| Finance / Domain-specific | LLMs in Recommendation Systems | A Survey on Large Language Models for Recommendation | 449 | 2024 | https://arxiv.org/abs/2305.19860 | 22,986 |
| LLMs Safety | LLM-Generated Content Detection | A Survey on Detection of LLMs-Generated Content | 57 | 2023 | https://arxiv.org/abs/2310.15654 | 41,035 |
| Medical / Biomedical | LLMs in Medical Applications | A Survey of Large Language Models in Medicine: Progress, Application, and Challenge | 158 | 2023 | https://arxiv.org/abs/2311.05112 | 96,881 |
| LLMs Safety | LLM Safety | Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements | 10 | 2023 | https://arxiv.org/abs/2302.09270 | 28,890 |
| LLMs Safety | Hallucination in LLMs | A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions | 1,557 | 2025 | https://arxiv.org/abs/2311.05232 | 92,219 |
| LLMs Safety | LLM Full Stack Safety | A Comprehensive Survey in LLM(-Agent) Full Stack Safety: Data, Training and Deployment | 13 | 2025 | https://arxiv.org/abs/2504.15585 | 161,502 |
| Other | LLM-based Autonomous Agents | A Survey on Large Language Model based Autonomous Agents | 1,446 | 2025 | https://arxiv.org/abs/2308.11432 | 55,603 |
| LLMs (General) | LLM Reasoning | Reasoning with Large Language Models, a Survey | 82 | 2024 | https://arxiv.org/abs/2407.11511 | 44,429 |
| Multimodal | Vision-Language Models in Vision Tasks | Vision-Language Models for Vision Tasks: A Survey | 696 | 2024 | https://arxiv.org/abs/2304.00685 | 75,611 |
| LLMs (General) | LLM Alignment Techniques | A Comprehensive Survey of LLM Alignment Techniques: RLHF, RLAIF, PPO, DPO and More | 24 | 2024 | https://arxiv.org/abs/2407.16216 | 73,556 |
| Robotics | Deep Reinforcement Learning in Robotics | Deep Reinforcement Learning for Robotics: A Survey of Real-World Successes | 67 | 2025 | https://arxiv.org/abs/2408.03539 | 102,954 |
| LLMs Safety | Hallucination in LVMs | A Survey on Hallucination in Large Vision-Language Models | 216 | 2024 | https://arxiv.org/abs/2402.00253 | 17,647 |
| LLMs Safety | LLM Security and Privacy | A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly | 870 | 2024 | https://arxiv.org/abs/2312.02003 | 47,825 |
| Medical / Biomedical | Medical LLMs, Trustworthiness in LLMs | A Survey on Medical Large Language Models: Technology, Application, Trustworthiness, and Future Directions | 43 | 2024 | https://arxiv.org/abs/2406.03712 | 61,934 |
| Benchmarking / Evaluation | LLM Evaluation Methods | A Survey on LLM-as-a-Judge | 163 | 2024 | https://arxiv.org/abs/2411.15594 | 48,451 |
| Finance / Domain-specific | LLMs in Finance Applications | Revolutionizing Finance with LLMs: An Overview of Applications and Insights | 135 | 2024 | https://arxiv.org/abs/2401.11641 | 29,116 |
| LLMs (General) | Retrieval-Augmented Generation | Retrieval-Augmented Generation for Large Language Models: A Survey | 2,184 | 2023 | https://arxiv.org/abs/2312.10997 | 9,966 |
| LLMs (General) | Mixture of Experts in LLMs | A Survey on Mixture of Experts in Large Language Models | 138 | 2023 | https://arxiv.org/abs/2407.06204 | 83,623 |
| LLMs (General) | Multilingual LLMs | Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers | 81 | 2024 | https://arxiv.org/abs/2404.04925 | 81,148 |
| Other | Continual Learning in AI | A Comprehensive Survey of Continual Learning: Theory, Method and Application | 1,025 | 2024 | https://arxiv.org/pdf/2302.00487 | 109,971 |
| LLMs Efficiency | Parameter-Efficient Fine-Tuning | Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey | 479 | 2024 | https://arxiv.org/abs/2403.14608 | 61,858 |
| Multimodal | Multimodal Reasoning in MLLMs | Exploring the Reasoning Abilities of Multimodal Large Language Models (MLLMs): A Comprehensive Survey on Emerging Trends in Multimodal Reasoning | 44 | 2024 | https://arxiv.org/abs/2401.06805 | 49,225 |
| Robotics | LLMs in Robotics | Large Language Models for Robotics: A Survey | 160 | 2024 | https://arxiv.org/abs/2311.07226 | 37,682 |
| Multimodal | Vision-Language-Action Models in Embodied AI | A Survey on Vision-Language-Action Models for Embodied AI | 77 | 2024 | https://arxiv.org/abs/2405.14093 | 93,748 |
| LLMs Safety | Red Teaming for Generative Models | Against The Achilles' Heel: A Survey on Red Teaming for Generative Models | 22 | 2025 | https://arxiv.org/abs/2404.00629 | 97,190 |

## 📊 Dataset Statistics

SurveyScope emphasizes **coverage**, **recency**, and **impact**, setting it apart from prior benchmarks. Below is a high-level summary:

- **📌 Diverse Topics** 11 active research areas, including NLP, LLMs, AI safety, robotics, and multimodal learning.
  *Distribution of topics in SurveyScope.*
- **🕒 Recent Publications** Focused on 2020–2025 publications to reflect the latest developments, especially in LLMs post-2022.
  *Publication year distribution.*
- **📈 High Citation Impact** Average: 322 citations/paper; 52% exceed 100 citations.
  *Citation distribution in SurveyScope.*
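To explore the benchmark programmatically (for example, to recompute summary statistics like those above), the dataset can be loaded from the Hugging Face Hub. The snippet below is a minimal sketch: the split name and the citation column name are assumptions based on the table above, so check the actual schema on the dataset page.

```python
# Minimal sketch: load SurveyScope and summarize citation counts.
# The split name ("train") and the citation column name are assumptions
# based on the table above; inspect `ds.column_names` for the real schema.
from datasets import load_dataset

ds = load_dataset("BAAI/SurveyScope", split="train")
print(ds.column_names)

# Hypothetical column holding the citation counts shown in the table.
citations = [int(str(c).replace(",", "")) for c in ds["citation_num"]]

avg = sum(citations) / len(citations)
share_over_100 = sum(c > 100 for c in citations) / len(citations)
print(f"average citations: {avg:.0f}, share with >100 citations: {share_over_100:.0%}")
```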
---

## 📐 Evaluation Results

We evaluated **SciSage** against strong baselines:

- [OpenScholar](https://github.com/AkariAsai/OpenScholar)
- [AutoSurvey](https://github.com/AutoSurveys/AutoSurvey)
- [LLM × MapReduce-V2](https://github.com/thunlp/LLMxMapReduce)

The evaluation covers content quality, structural coherence, and citation fidelity.
*Automatic evaluation metrics across systems.*
---

## 📎 Citation

If you find SurveyScope useful, please cite:

```bibtex
@misc{shi2025scisagemultiagentframeworkhighquality,
  title={SciSage: A Multi-Agent Framework for High-Quality Scientific Survey Generation},
  author={Xiaofeng Shi and Qian Kou and Yuduo Li and Ning Tang and Jinxin Xie and Longbin Yu and Songjing Wang and Hua Zhou},
  year={2025},
  eprint={2506.12689},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2506.12689},
}
```