The Environmental Impacts of Machine Learning Training Keep Rising Evidencing Rebound Effect
- URL: http://arxiv.org/abs/2510.09022v1
- Date: Fri, 10 Oct 2025 05:49:37 GMT
- Title: The Environmental Impacts of Machine Learning Training Keep Rising Evidencing Rebound Effect
- Authors: Clément Morand, Anne-Laure Ligozat, Aurélie Névéol
- Abstract summary: We estimate the environmental impacts associated with training notable AI systems over the last decade. Our analysis reveals two critical trends: first, the impacts of graphics cards production have increased steadily over this period; second, we show that the impacts of hardware must be considered over the entire life cycle rather than the sole use phase in order to avoid impact shifting.
- Score: 3.255583064724235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent Machine Learning (ML) approaches have shown increased performance on benchmarks but at the cost of escalating computational demands. Hardware, algorithmic and carbon optimizations have been proposed to curb energy consumption and environmental impacts. Can these strategies lead to sustainable ML model training? Here, we estimate the environmental impacts associated with training notable AI systems over the last decade, including Large Language Models, with a focus on the life cycle of graphics cards. Our analysis reveals two critical trends: First, the impacts of graphics cards production have increased steadily over this period; Second, energy consumption and environmental impacts associated with training ML models have increased exponentially, even when considering reduction strategies such as location shifting to places with less carbon intensive electricity mixes. Optimization strategies do not mitigate the impacts induced by model training, evidencing rebound effect. We show that the impacts of hardware must be considered over the entire life cycle rather than the sole use phase in order to avoid impact shifting. Our study demonstrates that increasing efficiency alone cannot ensure sustainability in ML. Mitigating the environmental impact of AI also requires reducing AI activities and questioning the scale and frequency of resource-intensive training.
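The life-cycle accounting the abstract argues for can be sketched as simple arithmetic: use-phase emissions (energy drawn during training times grid carbon intensity) plus a share of each card's production ("embodied") emissions amortized over its service life. The sketch below is illustrative only; every figure (GPU count, power draw, embodied emissions per card, lifetime) is a hypothetical placeholder, not a value from the paper.

```python
# Illustrative life-cycle accounting for one training run.
# All numeric inputs below are hypothetical placeholders.

def training_footprint_kg(
    gpu_count: int,
    training_hours: float,
    gpu_power_kw: float,            # average draw per GPU, in kW
    grid_intensity_kg_per_kwh: float,
    embodied_kg_per_gpu: float,     # production ("embodied") CO2e per card
    gpu_lifetime_hours: float,      # service life over which production is amortized
) -> dict:
    """Return use-phase, amortized embodied, and total CO2e in kg."""
    energy_kwh = gpu_count * training_hours * gpu_power_kw
    use_phase = energy_kwh * grid_intensity_kg_per_kwh
    # Attribute production impacts proportionally to the fraction of the
    # card's lifetime consumed by this training run.
    embodied = gpu_count * embodied_kg_per_gpu * (training_hours / gpu_lifetime_hours)
    return {"use_phase": use_phase, "embodied": embodied, "total": use_phase + embodied}

fp = training_footprint_kg(
    gpu_count=1000, training_hours=720, gpu_power_kw=0.4,
    grid_intensity_kg_per_kwh=0.05,     # a low-carbon electricity mix
    embodied_kg_per_gpu=150.0, gpu_lifetime_hours=4 * 365 * 24,
)
```

Note that with these (invented) numbers the amortized production share is non-negligible even on a low-carbon grid, which is exactly the impact-shifting point the abstract makes: location shifting reduces the use-phase term but leaves the embodied term untouched.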
Related papers
- From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate [69.05573887799203]
We argue that understanding these second-order impacts requires an interdisciplinary approach, combining lifecycle assessments with socio-economic analyses. We contend that a narrow focus on direct emissions misrepresents AI's true climate footprint, limiting the scope for meaningful interventions.
arXiv Detail & Related papers (2025-01-27T22:45:06Z)
- How Green Can AI Be? A Study of Trends in Machine Learning Environmental Impacts [2.640490842167383]
Optimisation strategies aim to reduce the energy consumption and environmental impacts associated with AI. This paper investigates the evolution of individual graphics cards production impacts and of the environmental impacts associated with training Machine Learning (ML) models over time.
arXiv Detail & Related papers (2024-12-23T08:24:44Z)
- Impact of ML Optimization Tactics on Greener Pre-Trained ML Models [46.78148962732881]
This study aims to (i) analyze image classification datasets and pre-trained models, (ii) improve inference efficiency by comparing optimized and non-optimized models, and (iii) assess the economic impact of the optimizations.
We conduct a controlled experiment to evaluate the impact of various PyTorch optimization techniques (dynamic quantization, torch.compile, local pruning, and global pruning) applied to 42 Hugging Face models for image classification.
Dynamic quantization demonstrates significant reductions in inference time and energy consumption, making it highly suitable for large-scale systems.
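As a rough intuition for why dynamic quantization reduces resource use, the sketch below implements the affine int8 quantization arithmetic in pure Python: weights are mapped to 8-bit integers via a scale and zero point, then dequantized on the fly. This is a toy model of the numerics only, not the paper's experimental setup and not the PyTorch API (which exposes this as `torch.ao.quantization.quantize_dynamic`).

```python
# Toy affine int8 quantization: the arithmetic underlying dynamic quantization.
# Assumes the weight range is non-degenerate (max(weights) > min(weights)).

def quantize(weights, num_bits=8):
    """Map floats to signed integers using a per-tensor scale and zero point."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / (qmax - qmin)
    zero_point = round(qmin - w_min / scale)
    # Round to the nearest integer level and clamp to the representable range.
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the integer codes."""
    return [(qi - zero_point) * scale for qi in q]

w = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zp = quantize(w)
w_hat = dequantize(q, scale, zp)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
```

The memory saving is direct (one int8 per weight instead of a float32), and the round-trip error stays bounded by roughly half a quantization step, which is why accuracy loss is often small.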
arXiv Detail & Related papers (2024-09-19T16:23:03Z)
- Modeling of New Energy Vehicles' Impact on Urban Ecology Focusing on Behavior [0.0]
The surging demand for new energy vehicles is driven by the imperative to conserve energy, reduce emissions, and enhance the urban ecological environment.
Through behavioral analysis, usage patterns of new energy vehicles can be identified.
An environmental computational modeling method is proposed to simulate the interaction between new energy vehicles and the environment.
arXiv Detail & Related papers (2024-06-06T14:03:52Z)
- Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages [56.98243487769916]
Plasticity, the ability of a neural network to evolve with new data, is crucial for high-performance and sample-efficient visual reinforcement learning.
We propose Adaptive RR, which dynamically adjusts the replay ratio based on the critic's plasticity level.
arXiv Detail & Related papers (2023-10-11T12:05:34Z)
- Estimating Deep Learning energy consumption based on model architecture and training environment [5.465797591588829]
We investigate how model architecture and training environment affect energy consumption. We find that selecting the right model-training environment combination can reduce training energy consumption by up to 80.68%. We propose the Stable Training Epoch Projection (STEP) and the Pre-training Regression-based Estimation (PRE) methods.
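The projection idea behind such estimation methods can be illustrated with a generic sketch: measure energy for the first few stable epochs, fit a line to cumulative consumption, and extrapolate to the planned epoch count. This is a plain least-squares extrapolation under the assumption of roughly constant per-epoch energy; it is not the STEP or PRE algorithm itself, and the sample readings below are invented.

```python
# Hedged sketch: extrapolate full-training energy from a few measured epochs.
# Generic linear extrapolation, not the paper's STEP/PRE algorithms.

def project_energy(epoch_energies_kwh, total_epochs):
    """Fit cumulative energy = a + b * epoch by least squares, then extrapolate."""
    n = len(epoch_energies_kwh)
    xs = list(range(1, n + 1))
    cumulative, running = [], 0.0
    for e in epoch_energies_kwh:
        running += e
        cumulative.append(running)
    mean_x = sum(xs) / n
    mean_y = sum(cumulative) / n
    # Ordinary least squares slope and intercept.
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, cumulative)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a + b * total_epochs

# Invented per-epoch readings (kWh) from five early epochs, projected to 100.
est = project_energy([2.1, 2.0, 2.05, 1.95, 2.0], total_epochs=100)
```

The fitted slope is simply the average energy per epoch, so the projection degrades gracefully when per-epoch consumption is noisy but stationary; it would underestimate runs whose per-epoch cost grows over time.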
arXiv Detail & Related papers (2023-07-07T12:07:59Z)
- A Comparative Study of Machine Learning Algorithms for Anomaly Detection in Industrial Environments: Performance and Environmental Impact [62.997667081978825]
This study seeks to reconcile the demand for high-performance machine learning models with environmental sustainability.
Traditional machine learning algorithms, such as Decision Trees and Random Forests, demonstrate robust efficiency and performance.
However, superior outcomes were obtained with optimised configurations, albeit with a commensurate increase in resource consumption.
arXiv Detail & Related papers (2023-07-01T15:18:00Z)
- PLASTIC: Improving Input and Label Plasticity for Sample Efficient Reinforcement Learning [54.409634256153154]
In Reinforcement Learning (RL), enhancing sample efficiency is crucial.
In principle, off-policy RL algorithms can improve sample efficiency by allowing multiple updates per environment interaction.
Our study investigates the underlying causes of this phenomenon by dividing plasticity into two aspects.
arXiv Detail & Related papers (2023-06-19T06:14:51Z)
- Compute and Energy Consumption Trends in Deep Learning Inference [67.32875669386488]
We study relevant models in the areas of computer vision and natural language processing.
For a sustained increase in performance, we see much softer growth in energy consumption than previously anticipated.
arXiv Detail & Related papers (2021-09-12T09:40:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.