Ten Simple Rules for AI-Assisted Coding in Science
- URL: http://arxiv.org/abs/2510.22254v2
- Date: Fri, 31 Oct 2025 06:51:11 GMT
- Title: Ten Simple Rules for AI-Assisted Coding in Science
- Authors: Eric W. Bridgeford, Iain Campbell, Zijao Chen, Zhicheng Lin, Harrison Ritz, Joachim Vandekerckhove, Russell A. Poldrack
- Abstract summary: We provide ten practical rules for AI-assisted coding that balance leveraging capabilities of AI with maintaining scientific and methodological rigor. These principles serve to emphasize maintaining human agency in coding decisions, establishing robust validation procedures, and preserving the domain expertise essential for methodologically sound research.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While AI coding tools have demonstrated potential to accelerate software development, their use in scientific computing raises critical questions about code quality and scientific validity. In this paper, we provide ten practical rules for AI-assisted coding that balance leveraging capabilities of AI with maintaining scientific and methodological rigor. We address how AI can be leveraged strategically throughout the development cycle with four key themes: problem preparation and understanding, managing context and interaction, testing and validation, and code quality assurance and iterative improvement. These principles serve to emphasize maintaining human agency in coding decisions, establishing robust validation procedures, and preserving the domain expertise essential for methodologically sound research. These rules are intended to help researchers harness AI's transformative potential for faster software development while ensuring that their code meets the standards of reliability, reproducibility, and scientific validity that research integrity demands.
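To make the testing-and-validation theme concrete, here is a minimal sketch (our illustration, not an example from the paper) of one robust validation procedure: before trusting an AI-suggested function in an analysis pipeline, check it against inputs with known, hand-computed answers. The function name `ai_suggested_zscore` and the test values are hypothetical.

```python
# Illustrative only: validate an AI-suggested helper against cases with
# known answers before using it in a scientific analysis.

def ai_suggested_zscore(values):
    """Standardize values to zero mean and unit (population) std dev."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5
    return [(v - mean) / std for v in values]

def test_zscore_known_case():
    # For [1, 2, 3]: mean = 2, population variance = 2/3, so the
    # standardized values are [-1, 0, 1] divided by sqrt(2/3).
    out = ai_suggested_zscore([1.0, 2.0, 3.0])
    std = (2.0 / 3.0) ** 0.5
    expected = [-1.0 / std, 0.0, 1.0 / std]
    assert all(abs(a - b) < 1e-9 for a, b in zip(out, expected))
    # Property check: the output must always have zero mean.
    assert abs(sum(out) / len(out)) < 1e-9

test_zscore_known_case()
```

The point is that the validation is anchored in domain knowledge the researcher already holds (an analytically derived expected result), so a plausible-looking but wrong AI suggestion fails loudly rather than silently corrupting downstream results.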
Related papers
- Bridging Ethical Principles and Algorithmic Methods: An Alternative Approach for Assessing Trustworthiness in AI Systems [0.0]
This paper introduces an assessment method that combines the ethical components of Trustworthy AI with the algorithmic processes of PageRank and TrustRank. The goal is to establish an assessment framework that minimizes the subjectivity inherent in the self-assessment techniques prevalent in the field.
arXiv Detail & Related papers (2025-06-28T06:27:30Z) - Software Fairness Testing in Practice [0.21427777919040417]
This study investigates how software professionals test AI-powered systems for fairness through interviews with 22 practitioners working on AI and ML projects. Our findings highlight a significant gap between theoretical fairness concepts and industry practice. Key challenges include data quality and diversity, time constraints, defining effective metrics, and ensuring model interoperability.
arXiv Detail & Related papers (2025-06-20T16:03:02Z) - Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor [83.99510317617694]
We argue that a broader conception of what rigorous AI research and practice should entail is needed. We aim to provide useful language and a framework for much-needed dialogue about the AI community's work.
arXiv Detail & Related papers (2025-06-17T15:44:41Z) - The AI Imperative: Scaling High-Quality Peer Review in Machine Learning [49.87236114682497]
We argue that AI-assisted peer review must become an urgent research and infrastructure priority. We propose specific roles for AI in enhancing factual verification, guiding reviewer performance, assisting authors in quality improvement, and supporting ACs in decision-making.
arXiv Detail & Related papers (2025-06-09T18:37:14Z) - Enhancing Trust in Language Model-Based Code Optimization through RLHF: A Research Design [0.0]
This research aims to develop reliable, LM-powered methods for code optimization that effectively integrate human feedback. This work aligns with the broader objectives of advancing cooperative and human-centric aspects of software engineering.
arXiv Detail & Related papers (2025-02-10T18:48:45Z) - The why, what, and how of AI-based coding in scientific research [0.0]
Generative AI, particularly large language models (LLMs), has the potential to transform coding into intuitive conversations.
We dissect AI-based coding through three key lenses.
We address the limitations and future outlook of AI in coding.
arXiv Detail & Related papers (2024-10-03T02:36:30Z) - Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, so that they become better aligned with the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z) - FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare [73.78776682247187]
Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
arXiv Detail & Related papers (2023-08-11T10:49:05Z) - Comparing Software Developers with ChatGPT: An Empirical Investigation [0.0]
This paper conducts an empirical investigation, contrasting the performance of software engineers and AI systems, like ChatGPT, across different evaluation metrics.
The paper posits that a comprehensive comparison of software engineers and AI-based solutions, considering various evaluation criteria, is pivotal in fostering human-machine collaboration.
arXiv Detail & Related papers (2023-05-19T17:25:54Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts from different disciplines that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.