Towards A Sustainable Future for Peer Review in Software Engineering
- URL: http://arxiv.org/abs/2601.21761v1
- Date: Thu, 29 Jan 2026 14:14:44 GMT
- Title: Towards A Sustainable Future for Peer Review in Software Engineering
- Authors: Esteban Parra, Sonia Haiduc, Preetha Chatterjee, Ramtin Ehsani, Polina Iaremchuk,
- Abstract summary: The rapid growth of paper submissions in software engineering venues has outpaced the availability of qualified reviewers. Our vision of the Future of the SE research landscape involves a more scalable, inclusive, and resilient peer review process.
- Score: 5.42073906150267
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Peer review is the main mechanism by which the software engineering community assesses the quality of scientific results. However, the rapid growth of paper submissions in software engineering venues has outpaced the availability of qualified reviewers, creating a growing imbalance that risks constraining and negatively impacting the long-term growth of the Software Engineering (SE) research community. Our vision of the Future of the SE research landscape involves a more scalable, inclusive, and resilient peer review process that incorporates additional mechanisms for: 1) attracting and training newcomers to serve as high-quality reviewers, 2) incentivizing more community members to serve as peer reviewers, and 3) cautiously integrating AI tools to support a high-quality review process.
Related papers
- The Story is Not the Science: Execution-Grounded Evaluation of Mechanistic Interpretability Research [56.80927148740585]
We address the challenges of scalability and rigor by flipping the dynamic and developing AI agents as research evaluators. We use mechanistic interpretability research as a testbed, build standardized research output, and develop MechEvalAgent. Our work demonstrates the potential of AI agents to transform research evaluation and pave the way for rigorous scientific practices.
arXiv Detail & Related papers (2026-02-05T19:00:02Z) - Reimagining Peer Review Process Through Multi-Agent Mechanism Design [2.5782420501870296]
The software engineering research community faces a systemic crisis: peer review is failing under growing submissions, misaligned incentives, and reviewer fatigue. This position paper argues that these dysfunctions are mechanism design failures amenable to computational solutions. We propose three interventions: a credit-based submission economy, MARL-optimized reviewer assignment, and hybrid verification of consistency.
arXiv Detail & Related papers (2026-01-27T16:43:11Z) - The Competence Crisis: A Design Fiction on AI-Assisted Research in Software Engineering [1.7892096882914865]
Rising publication pressure and the routine use of generative AI tools are reshaping how software engineering research is produced, assessed, and taught. This vision paper employs Design Fiction as a methodological lens to examine how such concerns might materialise if current practices persist.
arXiv Detail & Related papers (2026-01-27T14:07:19Z) - Peer Code Review in Research Software Development: The Research Software Engineer Perspective [0.6385006149689549]
While peer code review can improve software quality, its adoption by research software engineers (RSEs) remains unexplored. This study explores RSE perspectives on peer code review, focusing on their practices, challenges, and potential improvements.
arXiv Detail & Related papers (2025-11-13T20:07:10Z) - Triage in Software Engineering: A Systematic Review of Research and Practice [18.03124877437556]
Triage aims to efficiently prioritize, assign, and assess issues to ensure the reliability of complex environments. The vast amount of heterogeneous data generated by software systems has made effective triage indispensable. This survey provides a comprehensive review of 234 papers from 2004 to the present, offering an in-depth examination of the fundamental concepts, system architecture, and problem statement.
arXiv Detail & Related papers (2025-11-05T02:42:26Z) - Identity Theft in AI Conference Peer Review [50.18240135317708]
We discuss newly uncovered cases of identity theft in the scientific peer-review process within artificial intelligence (AI) research. We detail how dishonest researchers exploit the peer-review system by creating fraudulent reviewer profiles to manipulate paper evaluations.
arXiv Detail & Related papers (2025-08-06T02:36:52Z) - The AI Imperative: Scaling High-Quality Peer Review in Machine Learning [49.87236114682497]
We argue that AI-assisted peer review must become an urgent research and infrastructure priority. We propose specific roles for AI in enhancing factual verification, guiding reviewer performance, assisting authors in quality improvement, and supporting area chairs (ACs) in decision-making.
arXiv Detail & Related papers (2025-06-09T18:37:14Z) - Thinking Longer, Not Larger: Enhancing Software Engineering Agents via Scaling Test-Time Compute [61.00662702026523]
We propose a unified Test-Time Compute scaling framework that leverages increased inference-time compute instead of larger models. Our framework incorporates two complementary strategies: internal TTC and external TTC. We demonstrate our 32B model achieves a 46% issue resolution rate, surpassing significantly larger models such as DeepSeek R1 671B and OpenAI o1.
arXiv Detail & Related papers (2025-03-31T07:31:32Z) - Charting a Path to Efficient Onboarding: The Role of Software Visualization [49.1574468325115]
The present study aims to explore the familiarity of managers, leaders, and developers with software visualization tools.
This approach incorporated quantitative and qualitative analyses of data collected from practitioners using questionnaires and semi-structured interviews.
arXiv Detail & Related papers (2024-01-17T21:30:45Z) - Machine Learning for Software Engineering: A Systematic Mapping [73.30245214374027]
The software development industry is rapidly adopting machine learning for transitioning modern day software systems towards highly intelligent and self-learning systems.
No comprehensive study exists that explores the current state-of-the-art on the adoption of machine learning across software engineering life cycle stages.
This study introduces a machine learning for software engineering (MLSE) taxonomy classifying the state-of-the-art machine learning techniques according to their applicability to various software engineering life cycle stages.
arXiv Detail & Related papers (2020-05-27T11:56:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.