Code Improvement Practices at Meta
- URL: http://arxiv.org/abs/2504.12517v2
- Date: Wed, 23 Apr 2025 15:58:21 GMT
- Title: Code Improvement Practices at Meta
- Authors: Audris Mockus, Peter C Rigby, Rui Abreu, Anatoly Akkerman, Yogesh Bhootada, Payal Bhuptani, Gurnit Ghardhora, Lan Hoang Dao, Chris Hawley, Renzhi He, Sagar Krishnamoorthy, Sergei Krauze, Jianmin Li, Anton Lunov, Dragos Martac, Francois Morin, Neil Mitchell, Venus Montes, Maher Saba, Matt Steiner, Andrea Valori, Shanchao Wang, Nachiappan Nagappan
- Abstract summary: We investigate Meta's practices by collaborating with engineers on code quality. We analyze rich source code change history to reveal a range of practices used for continual improvement of the codebase. Our analysis of the impact of reengineering activities revealed substantial improvements in quality and speed.
- Score: 11.3591598115242
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The focus on rapid software delivery inevitably results in the accumulation of technical debt, which, in turn, affects quality and slows future development. Yet, companies with a long history of rapid delivery exist. Our primary aim is to discover how such companies manage to keep their codebases maintainable. Method: we investigate Meta's practices by collaborating with engineers on code quality and by analyzing rich source code change history to reveal a range of practices used for continual improvement of the codebase. In addition, we replicate several aspects of previous industry case studies investigating the impact of code reengineering. Results: Code improvements at Meta range from completely organic, grass-roots efforts undertaken at the initiative of individual engineers, to regularly blocked-off time and engagement via gamification of Better Engineering (BE) work, to major explicit initiatives aimed at reengineering the complex parts of the codebase or deleting accumulations of dead code. Over 14% of changes are explicitly devoted to code improvement, and developers are given "badges" to acknowledge the type of work and the amount of effort. Our investigation into prioritizing which parts of the codebase to improve led to the development of metrics to guide this decision making. Our analysis of the impact of reengineering activities revealed substantial improvements in quality and speed as well as a reduction in code complexity. Overall, such continual improvement is an effective way to develop software with rapid releases, while maintaining high quality.
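The abstract describes mining a rich change history to measure how much work goes into code improvement and to prioritize reengineering targets. As a hedged illustration only, the minimal Python sketch below shows the general shape of such mining on a git repository: it flags commits whose subject looks like improvement work and ranks files by a crude churn count as candidate reengineering targets. The tag keywords, the churn-based ranking, and the repository layout are hypothetical assumptions for illustration, not Meta's actual tooling or metrics.

```python
# Hedged sketch: mining a git history for "code improvement" changes and
# ranking files for reengineering. Keywords and metrics are hypothetical
# illustrations, not Meta's actual practice.
import re
import subprocess
from collections import Counter

# Hypothetical keywords marking improvement / Better Engineering work.
BE_TAG = re.compile(r"\b(better.?engineering|refactor|dead.?code|cleanup)\b", re.I)

def commit_subjects(repo="."):
    """Return one commit subject line per commit in the repository."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

def improvement_share(repo="."):
    """Fraction of commits whose subject looks like improvement work."""
    subjects = commit_subjects(repo)
    flagged = sum(1 for s in subjects if BE_TAG.search(s))
    return flagged / len(subjects) if subjects else 0.0

def churn_by_file(repo="."):
    """Count how often each file is touched (a crude churn proxy)."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(path for path in out.splitlines() if path)

def prioritize(repo=".", top=10):
    """Rank files by churn; high-churn files are candidate reengineering targets."""
    return churn_by_file(repo).most_common(top)

if __name__ == "__main__":
    print(f"Improvement share: {improvement_share():.1%}")
    for path, touches in prioritize():
        print(f"{touches:5d}  {path}")
```

The paper's actual prioritization metrics and badge-based acknowledgment are considerably richer; this sketch only conveys how a share of improvement changes and a simple per-file ranking could be computed from a change history.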
Related papers
- Code to Think, Think to Code: A Survey on Code-Enhanced Reasoning and Reasoning-Driven Code Intelligence in LLMs [53.00384299879513]
In large language models (LLMs), code and reasoning reinforce each other. Code provides verifiable execution paths, enforces logical decomposition, and enables runtime validation. We identify key challenges and propose future research directions to strengthen this synergy.
arXiv Detail & Related papers (2025-02-26T18:55:42Z) - LLMs as Continuous Learners: Improving the Reproduction of Defective Code in Software Issues [62.12404317786005]
EvoCoder is a continuous learning framework for issue code reproduction.
Our results show a 20% improvement in issue reproduction rates over existing SOTA methods.
arXiv Detail & Related papers (2024-11-21T08:49:23Z) - Understanding Code Understandability Improvements in Code Reviews [79.16476505761582]
We analyzed 2,401 code review comments from Java open-source projects on GitHub.
83.9% of suggestions for improvement were accepted and integrated, with fewer than 1% later reverted.
arXiv Detail & Related papers (2024-10-29T12:21:23Z) - Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion? [60.84912551069379]
We present the Code-Development Benchmark (Codev-Bench), a fine-grained, real-world, repository-level, and developer-centric evaluation framework.
Codev-Agent is an agent-based system that automates repository crawling, constructs execution environments, extracts dynamic calling chains from existing unit tests, and generates new test samples to avoid data leakage.
arXiv Detail & Related papers (2024-10-02T09:11:10Z) - Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, becoming better aligned to the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z) - A Study on Developer Behaviors for Validating and Repairing LLM-Generated Code Using Eye Tracking and IDE Actions [13.58143103712]
GitHub Copilot is a large language model (LLM)-powered code generation tool.
This paper investigates how developers validate and repair code generated by Copilot.
Being aware of the code's provenance led to improved performance, increased search efforts, more frequent Copilot usage, and higher cognitive workload.
arXiv Detail & Related papers (2024-05-25T06:20:01Z) - AI-powered Code Review with LLMs: Early Results [10.37036924997437]
We present a novel approach to improving software quality and efficiency through a Large Language Model (LLM)-based model.
Our proposed LLM-based AI agent model is trained on large code repositories.
It aims to detect code smells, identify potential bugs, provide suggestions for improvement, and optimize the code.
arXiv Detail & Related papers (2024-04-29T08:27:50Z) - Code Revert Prediction with Graph Neural Networks: A Case Study at J.P. Morgan Chase [10.961209762486684]
Code revert prediction aims to forecast the likelihood of code changes being reverted or rolled back in software development.
Previous methods for code defect detection relied on independent features but ignored relationships between code scripts.
This paper presents a systematic empirical study for code revert prediction that integrates the code import graph with code features.
arXiv Detail & Related papers (2024-03-14T15:54:29Z) - Does Code Review Speed Matter for Practitioners? [0.0]
Increasing code velocity is a common goal for a variety of software projects.
We conducted a survey to study the code velocity-related beliefs and practices in place.
arXiv Detail & Related papers (2023-11-04T19:22:23Z) - Learning to Improve Code Efficiency [27.768476489523163]
We analyze a large competitive programming dataset from the Google Code Jam competition.
We find that efficient code is indeed rare, with a 2x difference between the median runtime and the 90th percentile of solutions.
We propose using machine learning to automatically provide prescriptive feedback in the form of hints, to guide programmers towards writing high-performance code.
arXiv Detail & Related papers (2022-08-09T01:28:30Z) - ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework that leverages both lexical copying and references to semantically similar code obtained via retrieval.
We evaluate our approach in the code completion task in Python and Java programming languages, achieving a state-of-the-art performance on CodeXGLUE benchmark.
arXiv Detail & Related papers (2022-03-15T08:25:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.