Efficiency is Not Enough: A Critical Perspective of Environmentally
Sustainable AI
- URL: http://arxiv.org/abs/2309.02065v1
- Date: Tue, 5 Sep 2023 09:07:24 GMT
- Title: Efficiency is Not Enough: A Critical Perspective of Environmentally
Sustainable AI
- Authors: Dustin Wright and Christian Igel and Gabrielle Samuel and Raghavendra
Selvan
- Abstract summary: We argue that efficiency alone is not enough to make ML as a technology environmentally sustainable.
We present and argue for systems thinking as a viable path towards improving the environmental sustainability of ML holistically.
- Score: 9.918392710009774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) is currently spearheaded by machine learning
(ML) methods such as deep learning (DL) which have accelerated progress on many
tasks thought to be out of reach of AI. These ML methods can often be compute
hungry, energy intensive, and result in significant carbon emissions, a known
driver of anthropogenic climate change. Additionally, the platforms on which ML
systems run are associated with environmental impacts including and beyond
carbon emissions. The solution lionized by both industry and the ML community
to improve the environmental sustainability of ML is to increase the efficiency
with which ML systems operate in terms of both compute and energy consumption.
In this perspective, we argue that efficiency alone is not enough to make ML as
a technology environmentally sustainable. We do so by presenting three
high-level discrepancies in the effect of efficiency on the environmental
sustainability of ML once the many variables it interacts with are
considered. In doing so, we demonstrate, at multiple levels of granularity,
both the technical and non-technical reasons why efficiency is not enough to
fully remedy the environmental impacts of ML. Based on this, we
present and argue for systems thinking as a viable path towards improving the
environmental sustainability of ML holistically.
Related papers
- From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate [69.05573887799203]
Much of this debate has concentrated on direct impact without addressing the significant indirect effects.
This paper examines how the problem of Jevons' Paradox applies to AI, whereby efficiency gains may paradoxically spur increased consumption.
We argue that understanding these second-order impacts requires an interdisciplinary approach, combining lifecycle assessments with socio-economic analyses.
arXiv Detail & Related papers (2025-01-27T22:45:06Z)
- The Dual-use Dilemma in LLMs: Do Empowering Ethical Capacities Make a Degraded Utility? [54.18519360412294]
Large Language Models (LLMs) must balance rejecting harmful requests for safety against accommodating legitimate ones for utility.
This paper presents a Direct Preference Optimization (DPO) based alignment framework that achieves better overall performance.
Our resulting model, LibraChem, outperforms leading LLMs including Claude-3, GPT-4o, and LLaMA-3 by margins of 13.44%, 7.16%, and 7.10% respectively.
arXiv Detail & Related papers (2025-01-20T06:35:01Z)
- Do Developers Adopt Green Architectural Tactics for ML-Enabled Systems? A Mining Software Repository Study [10.997873336451498]
Green AI advocates reducing computational demands while still maintaining accuracy, but whether practitioners actually adopt such tactics has remained unclear.
This paper addresses this gap by studying 168 open-source ML projects on GitHub.
It employs a novel large language model (LLM)-based mining mechanism to identify and analyze green strategies.
arXiv Detail & Related papers (2024-10-09T09:27:07Z)
- EcoMLS: A Self-Adaptation Approach for Architecting Green ML-Enabled Systems [1.0923877073891446]
Self-adaptation techniques, recognized for their potential in energy savings within software systems, have yet to be extensively explored in Machine Learning-Enabled Systems.
This research underscores the feasibility of enhancing MLS sustainability through intelligent runtime adaptations.
arXiv Detail & Related papers (2024-04-17T14:12:47Z)
- Towards Sustainable SecureML: Quantifying Carbon Footprint of Adversarial Machine Learning [0.0]
We pioneer the first investigation into adversarial ML's carbon footprint.
We introduce the Robustness Carbon Trade-off Index (RCTI), a novel metric inspired by economic elasticity principles that captures the sensitivity of carbon emissions to changes in adversarial robustness.
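The summary does not give the RCTI formula itself, but an elasticity-style trade-off metric can be sketched as the ratio of the relative change in emissions to the relative change in robustness (a hypothetical illustration with made-up numbers, not the paper's actual definition):

```python
def elasticity(carbon_before, carbon_after, robust_before, robust_after):
    """Elasticity-style ratio: relative change in carbon emissions per
    relative change in adversarial robustness (illustrative only)."""
    pct_carbon = (carbon_after - carbon_before) / carbon_before
    pct_robust = (robust_after - robust_before) / robust_before
    return pct_carbon / pct_robust

# Hypothetical scenario: adversarial training doubles emissions
# (100 -> 200 kgCO2e) while accuracy under attack rises 0.40 -> 0.50.
print(elasticity(100, 200, 0.40, 0.50))  # ~4.0: emissions grow 4x faster
                                         # (relatively) than robustness
```

A value above 1 would indicate that emissions are growing faster, in relative terms, than the robustness being bought.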
arXiv Detail & Related papers (2024-03-27T21:02:15Z)
- A Synthesis of Green Architectural Tactics for ML-Enabled Systems [9.720968127923925]
We provide a catalog of 30 green architectural tactics for ML-enabled systems.
An architectural tactic is a high-level design technique to improve software quality.
To enhance transparency and facilitate their widespread use, we make the tactics available online in easily consumable formats.
arXiv Detail & Related papers (2023-12-15T08:53:45Z)
- Power Hungry Processing: Watts Driving the Cost of AI Deployment? [74.19749699665216]
Generative, multi-purpose AI systems promise a unified approach to building machine learning (ML) models into technology.
This ambition of "generality" comes at a steep cost to the environment, given the amount of energy these systems require and the amount of carbon they emit.
We measure deployment cost as the amount of energy and carbon required to perform 1,000 inferences on a representative benchmark dataset using these models.
We conclude with a discussion of the current trend of deploying multi-purpose generative ML systems, and caution that their utility should be more intentionally weighed against the increased costs in terms of energy and emissions.
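The per-1,000-inference accounting described above reduces to a simple conversion from measured energy to carbon via the grid's emission intensity. A minimal sketch, with made-up numbers (the paper's actual measurement protocol and figures are not reproduced here):

```python
def deployment_carbon(energy_kwh_per_1000_inf, grid_gco2_per_kwh):
    """Carbon (gCO2e) emitted by 1,000 inferences, given measured energy
    consumption and the local grid's carbon intensity (illustrative)."""
    return energy_kwh_per_1000_inf * grid_gco2_per_kwh

# Hypothetical: 0.05 kWh per 1,000 inferences on a 400 gCO2e/kWh grid.
print(deployment_carbon(0.05, 400))  # 20.0 gCO2e per 1,000 inferences
```

The same energy draw can thus yield very different emissions depending on where the model is deployed, which is one reason efficiency alone does not determine environmental impact.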
arXiv Detail & Related papers (2023-11-28T15:09:36Z)
- A Comparative Study of Machine Learning Algorithms for Anomaly Detection in Industrial Environments: Performance and Environmental Impact [62.997667081978825]
This study seeks to balance the demands of high-performance machine learning models with environmental sustainability.
Traditional machine learning algorithms, such as Decision Trees and Random Forests, demonstrate robust efficiency and performance.
However, superior outcomes were obtained with optimised configurations, albeit with a commensurate increase in resource consumption.
arXiv Detail & Related papers (2023-07-01T15:18:00Z)
- Counting Carbon: A Survey of Factors Influencing the Emissions of Machine Learning [77.62876532784759]
Machine learning (ML) requires using energy to carry out computations during the model training process.
The generation of this energy comes with an environmental cost in terms of greenhouse gas emissions, depending on the quantity used and the energy source.
We present a survey of the carbon emissions of 95 ML models across time and different tasks in natural language processing and computer vision.
arXiv Detail & Related papers (2023-02-16T18:35:00Z)
- Is TinyML Sustainable? Assessing the Environmental Impacts of Machine Learning on Microcontrollers [11.038060631389273]
Tiny Machine Learning (TinyML) has the opportunity to help address environmental challenges through sustainable computing practices.
This article discusses the potential of these TinyML applications to address critical sustainability challenges, as well as the environmental footprint of this emerging technology.
We find that TinyML systems present opportunities to offset their carbon emissions by enabling applications that reduce the emissions of other sectors.
arXiv Detail & Related papers (2023-01-27T18:23:10Z)
- Towards Green Automated Machine Learning: Status Quo and Future Directions [71.86820260846369]
AutoML is being criticised for its high resource consumption.
This paper proposes Green AutoML, a paradigm to make the whole AutoML process more environmentally friendly.
arXiv Detail & Related papers (2021-11-10T18:57:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.