ACE: Automated Technical Debt Remediation with Validated Large Language Model Refactorings
- URL: http://arxiv.org/abs/2507.03536v1
- Date: Fri, 04 Jul 2025 12:39:27 GMT
- Title: ACE: Automated Technical Debt Remediation with Validated Large Language Model Refactorings
- Authors: Adam Tornhill, Markus Borg, Nadim Hagatulah, Emma Söderberg
- Abstract summary: This paper introduces Augmented Code Engineering (ACE), a tool that automates code improvements using validated LLM output. Early feedback from users suggests that AI-enabled refactoring helps mitigate code-level technical debt that otherwise rarely gets acted upon.
- Score: 8.0322025529523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The remarkable advances in AI and Large Language Models (LLMs) have enabled machines to write code, accelerating the growth of software systems. However, the bottleneck in software development is not writing code but understanding it; program understanding is the dominant activity, consuming approximately 70% of developers' time. This implies that improving existing code to make it easier to understand has a high payoff and - in the age of AI-assisted coding - is an essential activity to ensure that a limited pool of developers can keep up with ever-growing codebases. This paper introduces Augmented Code Engineering (ACE), a tool that automates code improvements using validated LLM output. Developed through a data-driven approach, ACE provides reliable refactoring suggestions by considering both objective code quality improvements and program correctness. Early feedback from users suggests that AI-enabled refactoring helps mitigate code-level technical debt that otherwise rarely gets acted upon.
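The abstract describes a generate-then-validate pipeline: an LLM proposes a refactoring, and the suggestion is only surfaced if it objectively improves code quality and preserves program behavior. The Python sketch below illustrates that control flow under stated assumptions; the helpers `propose_refactoring`, `code_health_score`, and `behavior_preserved` are hypothetical placeholders for illustration, not ACE's actual API.

```python
# Minimal sketch of a validate-before-suggest refactoring loop, assuming
# hypothetical helpers: propose_refactoring (an LLM call), code_health_score
# (an objective quality metric, higher is better), and behavior_preserved
# (e.g., the project's test suite or an equivalence check).

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Refactoring:
    original: str
    refactored: str
    quality_gain: float


def validated_refactoring(
    source: str,
    propose_refactoring: Callable[[str], str],
    code_health_score: Callable[[str], float],
    behavior_preserved: Callable[[str, str], bool],
    max_attempts: int = 3,
) -> Optional[Refactoring]:
    """Return a refactoring only if it improves quality and keeps behavior."""
    baseline = code_health_score(source)
    for _ in range(max_attempts):
        candidate = propose_refactoring(source)
        improved = code_health_score(candidate)
        if improved <= baseline:
            continue  # reject: no measurable quality improvement
        if not behavior_preserved(source, candidate):
            continue  # reject: correctness not validated
        return Refactoring(source, candidate, improved - baseline)
    return None  # nothing passed validation, so no suggestion is shown
```

The design point implied by the abstract is that rejected candidates are filtered out before a developer ever sees them, rather than relying on the user to catch faulty LLM output.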
Related papers
- Training Language Models to Generate Quality Code with Program Analysis Feedback [66.0854002147103]
Code generation with large language models (LLMs) is increasingly adopted in production but fails to ensure code quality. We propose REAL, a reinforcement learning framework that incentivizes LLMs to generate production-quality code.
arXiv Detail & Related papers (2025-05-28T17:57:47Z) - Identification and Optimization of Redundant Code Using Large Language Models [0.0]
Redundant code is a persistent challenge in software development that makes systems harder to maintain, scale, and update. This research aims to identify recurring patterns of redundancy and analyze their underlying causes, such as outdated practices or insufficient awareness of best coding principles.
arXiv Detail & Related papers (2025-05-07T00:44:32Z) - Code to Think, Think to Code: A Survey on Code-Enhanced Reasoning and Reasoning-Driven Code Intelligence in LLMs [53.00384299879513]
In large language models (LLMs), code and reasoning reinforce each other. Code provides verifiable execution paths, enforces logical decomposition, and enables runtime validation. We identify key challenges and propose future research directions to strengthen this synergy.
arXiv Detail & Related papers (2025-02-26T18:55:42Z) - Bridging LLM-Generated Code and Requirements: Reverse Generation technique and SBC Metric for Developer Insights [0.0]
This paper introduces a novel scoring mechanism called the SBC score. It is based on a reverse generation technique that leverages the natural language generation capabilities of Large Language Models. Unlike direct code analysis, our approach reconstructs system requirements from AI-generated code and compares them with the original specifications (a toy sketch of this reverse-generation comparison appears after this list).
arXiv Detail & Related papers (2025-02-11T01:12:11Z) - Leveraging Large Language Models for Code Translation and Software Development in Scientific Computing [0.9668407688201359]
Generative artificial intelligence (GenAI) is poised to transform productivity in scientific computing. We developed a tool, CodeScribe, which combines prompt engineering with user supervision to establish an efficient process for code conversion. We also address the challenges of AI-driven code translation and highlight its benefits for enhancing productivity in scientific computing.
arXiv Detail & Related papers (2024-10-31T16:48:41Z) - Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, becoming better aligned to the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z) - Investigating the Transferability of Code Repair for Low-Resource Programming Languages [57.62712191540067]
Large language models (LLMs) have shown remarkable performance on code generation tasks.
Recent works augment the code repair process by integrating modern techniques such as chain-of-thought reasoning or distillation.
We investigate the benefits of distilling code repair for both high and low resource languages.
arXiv Detail & Related papers (2024-06-21T05:05:39Z) - AutoCodeRover: Autonomous Program Improvement [8.66280420062806]
We propose an automated approach for solving GitHub issues to autonomously achieve program improvement.
In our approach called AutoCodeRover, LLMs are combined with sophisticated code search capabilities, ultimately leading to a program modification or patch.
Experiments on SWE-bench-lite (300 real-life GitHub issues) show an efficacy of 19% in resolving issues, higher than that of the recently reported SWE-agent.
arXiv Detail & Related papers (2024-04-08T11:55:09Z) - Generation Probabilities Are Not Enough: Uncertainty Highlighting in AI Code Completions [54.55334589363247]
We study whether conveying information about uncertainty enables programmers to more quickly and accurately produce code.
We find that highlighting tokens with the highest predicted likelihood of being edited leads to faster task completion and more targeted edits.
arXiv Detail & Related papers (2023-02-14T18:43:34Z) - Chatbots As Fluent Polyglots: Revisiting Breakthrough Code Snippets [0.0]
The research applies AI-driven code assistants to analyze a selection of influential computer code that has shaped modern technology.
The original contribution of this study was to examine half of the most significant code advances in the last 50 years.
arXiv Detail & Related papers (2023-01-05T23:17:17Z) - Competition-Level Code Generation with AlphaCode [74.87216298566942]
We introduce AlphaCode, a system for code generation that can create novel solutions to problems that require deeper reasoning.
In simulated evaluations on recent programming competitions on the Codeforces platform, AlphaCode achieved on average a ranking of top 54.3%.
arXiv Detail & Related papers (2022-02-08T23:16:31Z)
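For the reverse-generation idea referenced above (the SBC-score paper), the sketch below shows one way such a comparison could be wired up, purely as an illustration: `reverse_generate_requirements` is a hypothetical stand-in for an LLM call that describes what a piece of code does, and textual agreement is approximated with the standard library's `difflib`. The actual SBC score and its components are defined in that paper, not here.

```python
# Illustrative sketch of comparing reverse-generated requirements with the
# original specification. reverse_generate_requirements is a hypothetical
# stand-in for an LLM prompt such as "Describe the requirements this code
# fulfils"; the real SBC score is defined differently in the paper.

from difflib import SequenceMatcher
from typing import Callable


def requirement_agreement(
    original_spec: str,
    generated_code: str,
    reverse_generate_requirements: Callable[[str], str],
) -> float:
    """Rough agreement in [0, 1] between the spec and what the code implies."""
    reconstructed = reverse_generate_requirements(generated_code)
    return SequenceMatcher(None, original_spec.lower(), reconstructed.lower()).ratio()


# Example usage with a trivial stub in place of the LLM call:
score = requirement_agreement(
    "Sort a list of integers in ascending order.",
    "def f(xs): return sorted(xs)",
    lambda code: "Sorts integers in ascending order.",
)
```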