Assessing the Impact of Refactoring Energy-Inefficient Code Patterns on Software Sustainability: An Industry Case Study
- URL: http://arxiv.org/abs/2506.09370v1
- Date: Wed, 11 Jun 2025 03:34:45 GMT
- Title: Assessing the Impact of Refactoring Energy-Inefficient Code Patterns on Software Sustainability: An Industry Case Study
- Authors: Rohit Mehra, Priyavanshi Pathania, Vibhu Saujanya Sharma, Vikrant Kaulgud, Sanjay Podder, Adam P. Burden
- Abstract summary: We present an industry case study that evaluates the sustainability impact of refactoring energy-inefficient code patterns identified by automated software sustainability assessment tools. Preliminary results highlight a positive impact on the application's sustainability post-refactoring, leading to a 29% decrease in per-user per-month energy consumption.
- Score: 9.521952718902973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in technologies like artificial intelligence and metaverse have led to a proliferation of software systems in business and everyday life. With this widespread penetration, the carbon emissions of software are rapidly growing as well, thereby negatively impacting the long-term sustainability of our environment. Hence, optimizing software from a sustainability standpoint becomes more crucial than ever. We believe that the adoption of automated tools that can identify energy-inefficient patterns in the code and guide appropriate refactoring can significantly assist in this optimization. In this extended abstract, we present an industry case study that evaluates the sustainability impact of refactoring energy-inefficient code patterns identified by automated software sustainability assessment tools for a large application. Preliminary results highlight a positive impact on the application's sustainability post-refactoring, leading to a 29% decrease in per-user per-month energy consumption.
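The extended abstract does not enumerate the specific energy-inefficient patterns that were refactored. As a purely hypothetical illustration of the kind of refactoring such assessment tools commonly suggest, the Python sketch below contrasts string concatenation inside a loop with a single join; the function names and the choice of pattern are assumptions for illustration, not details from the study.

```python
# Hypothetical illustration only: the paper does not disclose which patterns
# were refactored. This sketch shows one commonly cited energy-inefficient
# pattern (string concatenation inside a loop) and a refactoring that avoids
# repeated allocations, reducing CPU work and therefore energy use.

def build_report_inefficient(records: list[str]) -> str:
    # Each += builds a new string object, so total work grows roughly
    # quadratically with the number of records.
    report = ""
    for record in records:
        report += record + "\n"
    return report

def build_report_refactored(records: list[str]) -> str:
    # str.join assembles the result in a single linear pass.
    return "\n".join(records) + "\n" if records else ""

if __name__ == "__main__":
    sample = [f"record-{i}" for i in range(5)]
    # Both variants produce identical output; only the amount of work differs.
    assert build_report_inefficient(sample) == build_report_refactored(sample)
```

The behavior of the two functions is identical; the refactoring only removes redundant work, which is the general mechanism by which such pattern-level refactorings reduce energy consumption.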
Related papers
- Towards a Knowledge Base of Common Sustainability Weaknesses in Green Software Development [9.521952718902973]
In this paper, we motivate the need for the development of a standard knowledge base of commonly occurring sustainability weaknesses in code. We demonstrate why existing knowledge regarding software weaknesses cannot be re-tagged "as is" to sustainability without significant due diligence.
arXiv Detail & Related papers (2025-06-10T14:03:58Z)
- ORMind: A Cognitive-Inspired End-to-End Reasoning Framework for Operations Research [53.736407871322314]
We introduce ORMind, a cognitive-inspired framework that enhances optimization through counterfactual reasoning. Our approach emulates human cognition, implementing an end-to-end workflow that transforms requirements into mathematical models and executable code. It is currently being tested internally in Lenovo's AI Assistant, with plans to enhance optimization capabilities for both business and consumer customers.
arXiv Detail & Related papers (2025-06-02T05:11:21Z)
- Towards Effective Code-Integrated Reasoning [89.47213509714578]
We investigate code-integrated reasoning, where models generate code when necessary and integrate feedback by executing it through a code interpreter. Tool-augmented reinforcement learning can still suffer from potential instability in the learning dynamics. We develop enhanced training strategies that balance exploration and stability, progressively building tool-use capabilities while improving reasoning performance.
arXiv Detail & Related papers (2025-05-30T11:30:18Z)
- Optimizing Token Consumption in LLMs: A Nano Surge Approach for Code Reasoning Efficiency [5.044393644778693]
Chain of Thought (CoT) reasoning has become an essential approach for automated code repair. However, CoT leads to substantial increases in token consumption, reducing inference efficiency and raising computational costs. This paper proposes three targeted optimization strategies: Context Awareness, Responsibility Tuning, and Cost Sensitive.
arXiv Detail & Related papers (2025-04-22T15:51:00Z)
- Carbon Footprint Evaluation of Code Generation through LLM as a Service [5.782162597806532]
Green coding and claims that AI models can improve energy efficiency have grown in popularity. We present an overview of green coding and metrics to measure AI model sustainability awareness.
arXiv Detail & Related papers (2025-03-30T15:27:04Z)
- The Dual-use Dilemma in LLMs: Do Empowering Ethical Capacities Make a Degraded Utility? [54.18519360412294]
Large Language Models (LLMs) must balance rejecting harmful requests for safety against accommodating legitimate ones for utility. This paper presents a Direct Preference Optimization (DPO) based alignment framework that achieves better overall performance. We analyze experimental results obtained from testing DeepSeek-R1 on our benchmark and reveal the critical ethical concerns raised by this highly acclaimed model.
arXiv Detail & Related papers (2025-01-20T06:35:01Z)
- Addressing the sustainable AI trilemma: a case study on LLM agents and RAG [7.6212949300713015]
Large language models (LLMs) have demonstrated significant capabilities, but their widespread deployment and more advanced applications raise critical sustainability challenges. We propose the concept of the Sustainable AI Trilemma, highlighting the tensions between AI capability, digital equity, and environmental sustainability.
arXiv Detail & Related papers (2025-01-14T17:21:16Z)
- Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents so that they become better aligned with the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z)
- Learn to Code Sustainably: An Empirical Study on LLM-based Green Code Generation [7.8273713434806345]
We evaluate the sustainability of auto-generated code produced by generative commercial AI language models.
We compare the performance and green capacity of human-generated code and code generated by the three AI language models.
arXiv Detail & Related papers (2024-03-05T22:12:01Z)
- Efficiency Pentathlon: A Standardized Arena for Efficiency Evaluation [82.85015548989223]
Pentathlon is a benchmark for holistic and realistic evaluation of model efficiency.
Pentathlon focuses on inference, which accounts for a majority of the compute in a model's lifecycle.
It incorporates a suite of metrics that target different aspects of efficiency, including latency, throughput, memory overhead, and energy consumption.
arXiv Detail & Related papers (2023-07-19T01:05:33Z)
- A Comparative Study of Machine Learning Algorithms for Anomaly Detection in Industrial Environments: Performance and Environmental Impact [62.997667081978825]
This study seeks to balance the demands of high-performance machine learning models with environmental sustainability.
Traditional machine learning algorithms, such as Decision Trees and Random Forests, demonstrate robust efficiency and performance.
However, superior outcomes were obtained with optimised configurations, albeit with a commensurate increase in resource consumption.
arXiv Detail & Related papers (2023-07-01T15:18:00Z)