Abstract
Large language models are used to automatically revise scientific papers based on citation-predictive rubrics while preserving core content, achieving improved citation prediction and revisions preferred by human evaluators.
Scientific discoveries must be communicated clearly to realize their full potential. Without effective communication, even the most groundbreaking findings risk being overlooked or misunderstood. The primary way scientists communicate their work and receive feedback from the community is through peer review. However, the current system often provides inconsistent feedback between reviewers, ultimately hindering the improvement of a manuscript and limiting its potential impact. In this paper, we introduce a novel method, APRES, powered by Large Language Models (LLMs) to update a scientific paper's text based on an evaluation rubric. Our automated method discovers a rubric that is highly predictive of future citation counts, and integrates it with APRES in an automated system that revises papers to enhance their quality and impact. Crucially, this objective should be met without altering the core scientific content. We demonstrate the success of APRES, which improves future citation prediction, reducing mean absolute error by 19.6% over the next best baseline, and show that our paper revision process yields papers that are preferred over the originals by human expert evaluators 79% of the time. Our findings provide strong empirical support for using LLMs as a tool to help authors stress-test their manuscripts before submission. Ultimately, our work seeks to augment, not replace, the essential role of human expert reviewers, for it should be humans who discern which discoveries truly matter, guiding science toward advancing knowledge and enriching lives.
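The abstract describes a score-then-revise loop: a paper is scored against a citation-predictive rubric, and an LLM rewrites the text to address weak rubric items without changing scientific content. A minimal sketch of that loop is below; the rubric items, the `call_llm` stub, and the scoring heuristic are all illustrative assumptions, not the paper's actual rubric or implementation (APRES discovers its rubric automatically from citation data).

```python
# Hypothetical sketch of a rubric-guided revision loop, assuming a generic
# LLM client. All names here are illustrative, not from the paper.

RUBRIC = [
    "Clarity of the problem statement",
    "Strength of empirical evidence",
    "Readability of the abstract",
]


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call; returns the prompt unchanged here."""
    return prompt


def score_paper(text: str, rubric: list[str]) -> dict[str, int]:
    """Assign a 0-10 score per rubric item.

    In a real system the LLM (or a learned citation predictor) would score
    each item; this stand-in uses a trivial deterministic proxy.
    """
    return {item: min(10, len(text) % 11) for item in rubric}


def revise(text: str, rubric: list[str], rounds: int = 2) -> str:
    """Iteratively rewrite the weakest rubric dimension, preserving content."""
    for _ in range(rounds):
        scores = score_paper(text, rubric)
        weakest = min(scores, key=scores.get)  # lowest-scoring rubric item
        prompt = (
            f"Revise the following paper to improve '{weakest}' "
            f"without altering its core scientific content:\n{text}"
        )
        text = call_llm(prompt)
    return text
```

In practice the loop would stop once predicted citations plateau, and the final revision would be checked against the original to verify that claims and results were preserved.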
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Paper2Rebuttal: A Multi-Agent Framework for Transparent Author Response Assistance (2026)
- EchoReview: Learning Peer Review from the Echoes of Scientific Citations (2026)
- ScholarPeer: A Context-Aware Multi-Agent Framework for Automated Peer Review (2026)
- PaperAudit-Bench: Benchmarking Error Detection in Research Papers for Critical Automated Peer Review (2026)
- DRPG (Decompose, Retrieve, Plan, Generate): An Agentic Framework for Academic Rebuttal (2026)
- ReportLogic: Evaluating Logical Quality in Deep Research Reports (2026)
- Dancing in Chains: Strategic Persuasion in Academic Rebuttal via Theory of Mind (2026)