Efficiency is Not Enough: A Critical Perspective of Environmentally
Sustainable AI
- URL: http://arxiv.org/abs/2309.02065v1
- Date: Tue, 5 Sep 2023 09:07:24 GMT
- Title: Efficiency is Not Enough: A Critical Perspective of Environmentally
Sustainable AI
- Authors: Dustin Wright and Christian Igel and Gabrielle Samuel and Raghavendra
Selvan
- Abstract summary: We argue that efficiency alone is not enough to make ML as a technology environmentally sustainable.
We present and argue for systems thinking as a viable path towards improving the environmental sustainability of ML holistically.
- Score: 9.918392710009774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) is currently spearheaded by machine learning
(ML) methods such as deep learning (DL) which have accelerated progress on many
tasks thought to be out of reach of AI. These ML methods can often be compute
hungry, energy intensive, and result in significant carbon emissions, a known
driver of anthropogenic climate change. Additionally, the platforms on which ML
systems run are associated with environmental impacts including and beyond
carbon emissions. The solution lionized by both industry and the ML community
to improve the environmental sustainability of ML is to increase the efficiency
with which ML systems operate in terms of both compute and energy consumption.
In this perspective, we argue that efficiency alone is not enough to make ML as
a technology environmentally sustainable. We do so by presenting three
high-level discrepancies in the effect of efficiency on the environmental
sustainability of ML when considering the many variables with which it
interacts. In doing so, we comprehensively demonstrate, at multiple levels of
granularity, both technical and non-technical reasons why efficiency is not
enough to fully remedy the environmental impacts of ML. Based on this, we
present and argue for systems thinking as a viable path towards improving the
environmental sustainability of ML holistically.
Related papers
- EcoMLS: A Self-Adaptation Approach for Architecting Green ML-Enabled Systems [1.0923877073891446]
Self-adaptation techniques, recognized for their potential in energy savings within software systems, have yet to be extensively explored in Machine Learning-Enabled Systems.
This research underscores the feasibility of enhancing MLS sustainability through intelligent runtime adaptations.
arXiv Detail & Related papers (2024-04-17T14:12:47Z)
- Towards Sustainable SecureML: Quantifying Carbon Footprint of Adversarial Machine Learning [0.0]
We pioneer the first investigation into adversarial ML's carbon footprint.
We introduce the Robustness Carbon Trade-off Index (RCTI)
This novel metric, inspired by economic elasticity principles, captures the sensitivity of carbon emissions to changes in adversarial robustness.
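An elasticity-style sensitivity measure can be sketched as the ratio of percentage changes; this is a hypothetical illustration of the general idea behind an elasticity-inspired metric, not the RCTI as defined in the paper:

```python
def emissions_elasticity(pct_change_emissions: float,
                         pct_change_robustness: float) -> float:
    """Percent change in carbon emissions per percent change in
    adversarial robustness (hypothetical elasticity-style ratio)."""
    if pct_change_robustness == 0:
        raise ValueError("robustness change must be nonzero")
    return pct_change_emissions / pct_change_robustness

# Illustrative numbers: hardening a model raises robustness by 10%
# while raising training emissions by 25%.
print(emissions_elasticity(25.0, 10.0))  # 2.5
```

A ratio above 1 would indicate that emissions grow faster than robustness, i.e. robustness gains are becoming carbon-expensive.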
arXiv Detail & Related papers (2024-03-27T21:02:15Z)
- GAISSALabel: A tool for energy labeling of ML models [1.5899411215927992]
This paper introduces GAISSALabel, a web-based tool designed to evaluate and label the energy efficiency of Machine Learning models.
The tool's adaptability allows for customization in the proposed labeling system, ensuring its relevance in the rapidly evolving ML field.
arXiv Detail & Related papers (2024-01-30T16:31:48Z) - A Synthesis of Green Architectural Tactics for ML-Enabled Systems [9.720968127923925]
We provide a catalog of 30 green architectural tactics for ML-enabled systems.
An architectural tactic is a high-level design technique to improve software quality.
To enhance transparency and facilitate their widespread use, we make the tactics available online in easily consumable formats.
arXiv Detail & Related papers (2023-12-15T08:53:45Z) - Power Hungry Processing: Watts Driving the Cost of AI Deployment? [74.19749699665216]
Generative, multi-purpose AI systems promise a unified approach to building machine learning (ML) models into technology.
This ambition of "generality" comes at a steep cost to the environment, given the amount of energy these systems require and the amount of carbon that they emit.
We measure deployment cost as the amount of energy and carbon required to perform 1,000 inferences on a representative benchmark dataset using these models.
We conclude with a discussion around the current trend of deploying multi-purpose generative ML systems, and caution that their utility should be more intentionally weighed against increased costs in terms of energy and emissions.
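The deployment-cost measurement described above reduces to simple arithmetic: energy per inference scaled to a batch, then converted to carbon via a grid intensity factor. A minimal sketch, with all numeric values as illustrative assumptions rather than measurements from the paper:

```python
def deployment_cost(energy_per_inference_wh: float,
                    n_inferences: int = 1000,
                    grid_intensity_gco2_per_kwh: float = 400.0):
    """Energy (kWh) and carbon (g CO2e) for a batch of inferences.
    The grid intensity default is an illustrative assumption."""
    energy_kwh = energy_per_inference_wh * n_inferences / 1000.0
    carbon_g = energy_kwh * grid_intensity_gco2_per_kwh
    return energy_kwh, carbon_g

# A hypothetical generative model drawing 3 Wh per inference:
energy, carbon = deployment_cost(3.0)
print(energy, carbon)  # 3.0 kWh, 1200.0 g CO2e
```

The same skeleton makes it easy to compare a lightweight task-specific model against a multi-purpose one by swapping the per-inference energy figure.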
arXiv Detail & Related papers (2023-11-28T15:09:36Z)
- A Comparative Study of Machine Learning Algorithms for Anomaly Detection in Industrial Environments: Performance and Environmental Impact [62.997667081978825]
This study seeks to balance the demand for high-performance machine learning models with environmental sustainability.
Traditional machine learning algorithms, such as Decision Trees and Random Forests, demonstrate robust efficiency and performance.
However, superior outcomes were obtained with optimised configurations, albeit with a commensurate increase in resource consumption.
arXiv Detail & Related papers (2023-07-01T15:18:00Z)
- Counting Carbon: A Survey of Factors Influencing the Emissions of Machine Learning [77.62876532784759]
Machine learning (ML) requires using energy to carry out computations during the model training process.
The generation of this energy comes with an environmental cost in terms of greenhouse gas emissions, depending on the quantity used and the energy source.
We present a survey of the carbon emissions of 95 ML models across time and different tasks in natural language processing and computer vision.
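The dependence on both energy quantity and energy source noted above is commonly approximated as hardware energy draw, scaled by data-center overhead (PUE), multiplied by the carbon intensity of the local grid. A minimal sketch with illustrative assumed values:

```python
def training_emissions_kg(avg_power_kw: float, hours: float,
                          pue: float = 1.5,
                          intensity_kg_per_kwh: float = 0.4) -> float:
    """Approximate training carbon footprint (kg CO2e):
    hardware energy * data-center overhead (PUE) * grid carbon
    intensity. The PUE and intensity defaults are assumptions."""
    return avg_power_kw * hours * pue * intensity_kg_per_kwh

# E.g. a hypothetical 4 kW node training for 100 hours:
print(training_emissions_kg(4.0, 100.0))  # 240.0 kg CO2e
```

Holding energy fixed, moving the same job to a low-carbon grid (smaller intensity factor) cuts emissions proportionally, which is why the energy source matters as much as the quantity.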
arXiv Detail & Related papers (2023-02-16T18:35:00Z)
- Is TinyML Sustainable? Assessing the Environmental Impacts of Machine Learning on Microcontrollers [11.038060631389273]
Tiny Machine Learning (TinyML) has the opportunity to help address environmental challenges through sustainable computing practices.
This article discusses the potential of these TinyML applications to address critical sustainability challenges, as well as the environmental footprint of this emerging technology.
We find that TinyML systems present opportunities to offset their carbon emissions by enabling applications that reduce the emissions of other sectors.
arXiv Detail & Related papers (2023-01-27T18:23:10Z)
- AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- Towards Green Automated Machine Learning: Status Quo and Future Directions [71.86820260846369]
AutoML is being criticised for its high resource consumption.
This paper proposes Green AutoML, a paradigm to make the whole AutoML process more environmentally friendly.
arXiv Detail & Related papers (2021-11-10T18:57:27Z)
- Technology Readiness Levels for AI & ML [79.22051549519989]
Development of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
Engineering systems follow well-defined processes and testing standards to streamline development for high-quality, reliable results.
We propose a proven systems engineering approach for machine learning development and deployment.
arXiv Detail & Related papers (2020-06-21T17:14:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.