On the Use of Agentic Coding: An Empirical Study of Pull Requests on GitHub
Abstract
Agent-assisted pull requests generated by Claude Code are largely accepted in open-source projects, with most requiring minimal human modification.
Large language models (LLMs) are increasingly being integrated into software development processes. The ability to generate code and submit pull requests with minimal human intervention, through the use of autonomous AI agents, is poised to become a standard practice. However, little is known about the practical usefulness of these pull requests and the extent to which their contributions are accepted in real-world projects. In this paper, we empirically study 567 GitHub pull requests (PRs) generated using Claude Code, an agentic coding tool, across 157 diverse open-source projects. Our analysis reveals that developers tend to rely on agents for tasks such as refactoring, documentation, and testing. The results indicate that 83.8% of these agent-assisted PRs are eventually accepted and merged by project maintainers, with 54.9% of the merged PRs integrated without further modification. The remaining 45.1% benefit from human revisions, especially for bug fixes, documentation, and adherence to project-specific standards. These findings suggest that while agent-assisted PRs are largely acceptable, they still benefit from human oversight and refinement.
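The headline figures (83.8% of PRs merged, 54.9% of merged PRs untouched) are simple ratios over the PR corpus. A minimal sketch of how such statistics can be computed from collected PR records; the `PullRequest` record and its fields are hypothetical, not the paper's actual data schema:

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    merged: bool
    modified_after_submission: bool  # did humans revise it before merge?

def acceptance_stats(prs):
    """Return (merge rate, share of merged PRs needing no further changes)."""
    merged = [p for p in prs if p.merged]
    merge_rate = len(merged) / len(prs)
    unmodified_share = sum(not p.modified_after_submission for p in merged) / len(merged)
    return merge_rate, unmodified_share

# Toy corpus: 4 of 5 PRs merged; 2 of the 4 merged without modification.
prs = [
    PullRequest(merged=True, modified_after_submission=False),
    PullRequest(merged=True, modified_after_submission=True),
    PullRequest(merged=True, modified_after_submission=False),
    PullRequest(merged=True, modified_after_submission=True),
    PullRequest(merged=False, modified_after_submission=False),
]
print(acceptance_stats(prs))  # (0.8, 0.5)
```

In practice, the `merged` flag maps directly onto the GitHub API's `merged_at` field, while detecting post-submission human modification requires inspecting commit authorship on the PR branch.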
Community
🚀 **Agentic PRs are shipping – 83.8% merge rate** 🚀
Not a demo, not a toy. We study Claude Code PRs on GitHub: agentic PRs are merged 83.8% of the time vs. 91.0% for human PRs, with similar median merge speeds (1.23 hrs vs. 1.04 hrs).
• **The story on the ground:** agents accelerate setup and routine improvements; humans carry context, enforce quality, and keep scope tight.
• **What agents do more:** refactoring, tests, and docs.
• **Why rejections happen:** alternative solutions, oversized PRs, or obsolescence – not simply "bad AI code".
• **What reviewers still fix:** bugs (45.1%), docs (27.4%), refactoring (25.7%), and style (22.1%) before merge.
• **Where they still stumble:** legacy-heavy codebases and cross-cutting PRs.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- The Impact of Large Language Models (LLMs) on Code Review Process (2025)
- Does AI Code Review Lead to Code Changes? A Case Study of GitHub Actions (2025)
- What Were You Thinking? An LLM-Driven Large-Scale Study of Refactoring Motivations in Open-Source Projects (2025)
- On the Use of Agentic Coding Manifests: An Empirical Study of Claude Code (2025)
- AutoCodeSherpa: Symbolic Explanations in AI Coding Agents (2025)
- Benchmarking and Studying the LLM-based Code Review (2025)
- An Empirical Study on the Amount of Changes Required for Merge Request Acceptance (2025)