Efficiency is Not Enough: A Critical Perspective of Environmentally
Sustainable AI
- URL: http://arxiv.org/abs/2309.02065v1
- Date: Tue, 5 Sep 2023 09:07:24 GMT
- Title: Efficiency is Not Enough: A Critical Perspective of Environmentally
Sustainable AI
- Authors: Dustin Wright and Christian Igel and Gabrielle Samuel and Raghavendra
Selvan
- Abstract summary: We argue that efficiency alone is not enough to make ML as a technology environmentally sustainable.
We present and argue for systems thinking as a viable path towards improving the environmental sustainability of ML holistically.
- Score: 9.918392710009774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) is currently spearheaded by machine learning
(ML) methods such as deep learning (DL) which have accelerated progress on many
tasks thought to be out of reach of AI. These ML methods can often be compute
hungry, energy intensive, and result in significant carbon emissions, a known
driver of anthropogenic climate change. Additionally, the platforms on which ML
systems run are associated with environmental impacts including and beyond
carbon emissions. The solution lionized by both industry and the ML community
to improve the environmental sustainability of ML is to increase the efficiency
with which ML systems operate in terms of both compute and energy consumption.
In this perspective, we argue that efficiency alone is not enough to make ML as
a technology environmentally sustainable. We do so by presenting three high-level
discrepancies in how efficiency affects the environmental sustainability of ML
once the many variables it interacts with are taken into account. In doing so, we
comprehensively demonstrate, at multiple levels of granularity, both technical and
non-technical reasons why efficiency is not enough to fully remedy the
environmental impacts of ML. Based on this, we
present and argue for systems thinking as a viable path towards improving the
environmental sustainability of ML holistically.
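As a rough illustration of this argument, and of the rebound (Jevons'-style) effects discussed in the related papers below, the following sketch works through the arithmetic with assumed numbers; the workload size, per-inference energy, and grid carbon intensity are illustrative placeholders, not figures from the paper.

```python
# Back-of-the-envelope rebound-effect calculation (all numbers are assumed).
# Linear emissions model:
#   total_kg_co2 = inferences * kwh_per_inference * grid_kg_co2_per_kwh

def total_emissions_kg(inferences: float, kwh_per_inference: float,
                       grid_kg_co2_per_kwh: float) -> float:
    """Operational emissions of an inference workload under a linear model."""
    return inferences * kwh_per_inference * grid_kg_co2_per_kwh

GRID_INTENSITY = 0.4  # kg CO2 per kWh -- assumed average grid mix

# Baseline: 1 billion inferences at 0.2 Wh (2e-4 kWh) each.
baseline = total_emissions_kg(1e9, 2e-4, GRID_INTENSITY)

# An optimization halves the energy per inference, but cheaper and faster
# inference triples demand (a rebound effect): total emissions rise, not fall.
after_rebound = total_emissions_kg(3e9, 1e-4, GRID_INTENSITY)

print(f"baseline:            {baseline:,.0f} kg CO2")       # 80,000 kg CO2
print(f"efficient + rebound: {after_rebound:,.0f} kg CO2")   # 120,000 kg CO2
```

Under these assumed numbers, a 2x efficiency gain paired with a 3x increase in usage raises total emissions by 50%, which is the kind of interaction between efficiency and the variables surrounding it that, the authors argue, efficiency-focused fixes cannot resolve on their own.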
Related papers
- Optimizing Large Language Models: Metrics, Energy Efficiency, and Case Study Insights [2.1249213103048414]
The rapid adoption of large language models (LLMs) has led to significant energy consumption and carbon emissions.
This paper explores the integration of energy-efficient optimization techniques in the deployment of LLMs to address these concerns.
arXiv Detail & Related papers (2025-04-07T21:56:59Z)
- From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate [69.05573887799203]
Much of this debate has concentrated on direct impact without addressing the significant indirect effects.
This paper examines how the problem of Jevons' Paradox applies to AI, whereby efficiency gains may paradoxically spur increased consumption.
We argue that understanding these second-order impacts requires an interdisciplinary approach, combining lifecycle assessments with socio-economic analyses.
arXiv Detail & Related papers (2025-01-27T22:45:06Z)
- CEGI: Measuring the trade-off between efficiency and carbon emissions for SLMs and VLMs [0.0]
This paper analyzes the performance of Small Language Models (SLMs) and Vision Language Models (VLMs).
To quantify the trade-off between model performance and carbon emissions, we introduce a novel metric called CEGI (Carbon Efficient Gain Index).
Our findings suggest that the marginal gains in accuracy from larger models do not justify the substantial increase in carbon emissions.
arXiv Detail & Related papers (2024-12-03T17:32:47Z)
- Impact of ML Optimization Tactics on Greener Pre-Trained ML Models [46.78148962732881]
This study aims to (i) analyze image classification datasets and pre-trained models, (ii) improve inference efficiency by comparing optimized and non-optimized models, and (iii) assess the economic impact of the optimizations.
We conduct a controlled experiment to evaluate the impact of applying various PyTorch optimization techniques (dynamic quantization, torch.compile, local pruning, and global pruning) to 42 Hugging Face models for image classification; a minimal sketch of these tactics appears after this list.
Dynamic quantization demonstrates significant reductions in inference time and energy consumption, making it highly suitable for large-scale systems.
arXiv Detail & Related papers (2024-09-19T16:23:03Z)
- EcoMLS: A Self-Adaptation Approach for Architecting Green ML-Enabled Systems [1.0923877073891446]
Self-adaptation techniques, recognized for their potential in energy savings within software systems, have yet to be extensively explored in Machine Learning-Enabled Systems.
This research underscores the feasibility of enhancing MLS sustainability through intelligent runtime adaptations.
arXiv Detail & Related papers (2024-04-17T14:12:47Z)
- Towards Sustainable SecureML: Quantifying Carbon Footprint of Adversarial Machine Learning [0.0]
We pioneer the first investigation into adversarial ML's carbon footprint.
We introduce the Robustness Carbon Trade-off Index (RCTI).
This novel metric, inspired by economic elasticity principles, captures the sensitivity of carbon emissions to changes in adversarial robustness.
arXiv Detail & Related papers (2024-03-27T21:02:15Z)
- GAISSALabel: A tool for energy labeling of ML models [1.5899411215927992]
This paper introduces GAISSALabel, a web-based tool designed to evaluate and label the energy efficiency of Machine Learning models.
The tool's adaptability allows for customization in the proposed labeling system, ensuring its relevance in the rapidly evolving ML field.
arXiv Detail & Related papers (2024-01-30T16:31:48Z)
- A Synthesis of Green Architectural Tactics for ML-Enabled Systems [9.720968127923925]
We provide a catalog of 30 green architectural tactics for ML-enabled systems.
An architectural tactic is a high-level design technique to improve software quality.
To enhance transparency and facilitate their widespread use, we make the tactics available online in easily consumable formats.
arXiv Detail & Related papers (2023-12-15T08:53:45Z)
- Power Hungry Processing: Watts Driving the Cost of AI Deployment? [74.19749699665216]
Generative, multi-purpose AI systems promise a unified approach to building machine learning (ML) models into technology.
This ambition of "generality" comes at a steep cost to the environment, given the amount of energy these systems require and the amount of carbon that they emit.
We measure deployment cost as the amount of energy and carbon required to perform 1,000 inferences on a representative benchmark dataset using these models.
We conclude with a discussion around the current trend of deploying multi-purpose generative ML systems, and caution that their utility should be more intentionally weighed against increased costs in terms of energy and emissions.
arXiv Detail & Related papers (2023-11-28T15:09:36Z)
- Efficiency Pentathlon: A Standardized Arena for Efficiency Evaluation [82.85015548989223]
Pentathlon is a benchmark for holistic and realistic evaluation of model efficiency.
Pentathlon focuses on inference, which accounts for a majority of the compute in a model's lifecycle.
It incorporates a suite of metrics that target different aspects of efficiency, including latency, throughput, memory overhead, and energy consumption.
arXiv Detail & Related papers (2023-07-19T01:05:33Z)
- A Comparative Study of Machine Learning Algorithms for Anomaly Detection in Industrial Environments: Performance and Environmental Impact [62.997667081978825]
This study seeks to reconcile the demand for high-performance machine learning models with environmental sustainability.
Traditional machine learning algorithms, such as Decision Trees and Random Forests, demonstrate robust efficiency and performance.
However, superior outcomes were obtained with optimised configurations, albeit with a commensurate increase in resource consumption.
arXiv Detail & Related papers (2023-07-01T15:18:00Z)
- Counting Carbon: A Survey of Factors Influencing the Emissions of Machine Learning [77.62876532784759]
Machine learning (ML) requires energy to carry out computations during the model training process.
The generation of this energy comes with an environmental cost in terms of greenhouse gas emissions, which depends on the quantity of energy used and the energy source.
We present a survey of the carbon emissions of 95 ML models across time and different tasks in natural language processing and computer vision.
arXiv Detail & Related papers (2023-02-16T18:35:00Z)
- Is TinyML Sustainable? Assessing the Environmental Impacts of Machine Learning on Microcontrollers [11.038060631389273]
Tiny Machine Learning (TinyML) has the opportunity to help address environmental challenges through sustainable computing practices.
This article discusses the potential of these TinyML applications to address critical sustainability challenges, as well as the environmental footprint of this emerging technology.
We find that TinyML systems present opportunities to offset their carbon emissions by enabling applications that reduce the emissions of other sectors.
arXiv Detail & Related papers (2023-01-27T18:23:10Z)
- AI Maintenance: A Robustness Perspective [91.28724422822003]
We highlight robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- Towards Green Automated Machine Learning: Status Quo and Future Directions [71.86820260846369]
AutoML is being criticised for its high resource consumption.
This paper proposes Green AutoML, a paradigm to make the whole AutoML process more environmentally friendly.
arXiv Detail & Related papers (2021-11-10T18:57:27Z)
- Technology Readiness Levels for AI & ML [79.22051549519989]
Development of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
Engineering systems follow well-defined processes and testing standards to streamline development for high-quality, reliable results.
We propose a proven systems engineering approach for machine learning development and deployment.
arXiv Detail & Related papers (2020-06-21T17:14:34Z)
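As referenced in the "Impact of ML Optimization Tactics on Greener Pre-Trained ML Models" entry above, the sketch below shows what the four named PyTorch tactics (dynamic quantization, torch.compile, local pruning, and global pruning) look like when applied to a Hugging Face image-classification model. It is a minimal illustration under assumed choices; the checkpoint, pruning amounts, and targeted layers are not taken from that paper.

```python
# Minimal sketch of the four PyTorch optimization tactics named above, applied
# to a Hugging Face image-classification model. Checkpoint, pruning amounts,
# and targeted layers are illustrative assumptions, not the paper's setup.
import torch
import torch.nn.utils.prune as prune
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224"  # assumed example checkpoint
).eval()

# 1) Dynamic quantization: store and execute Linear-layer weights in int8.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# 2) torch.compile (PyTorch >= 2.0): graph capture and kernel fusion.
compiled_model = torch.compile(model)

# 3) Local pruning: L1 unstructured pruning of a single Linear layer's weights.
linear_layers = [m for m in model.modules() if isinstance(m, torch.nn.Linear)]
prune.l1_unstructured(linear_layers[0], name="weight", amount=0.3)

# 4) Global pruning: prune 30% of weights across all Linear layers at once.
prune.global_unstructured(
    [(m, "weight") for m in linear_layers],
    pruning_method=prune.L1Unstructured,
    amount=0.3,
)
```

In a controlled comparison like the one that paper describes, each tactic would be applied and measured separately (inference time, energy, accuracy) rather than stacked as shown here.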
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.