A Beginner's Guide to Power and Energy Measurement and Estimation for Computing and Machine Learning
- URL: http://arxiv.org/abs/2412.17830v1
- Date: Wed, 11 Dec 2024 19:00:00 GMT
- Title: A Beginner's Guide to Power and Energy Measurement and Estimation for Computing and Machine Learning
- Authors: Akshaya Jagannadharao, Nicole Beckage, Sovan Biswas, Hilary Egan, Jamil Gafur, Thijs Metsch, Dawn Nafus, Giuseppe Raffa, Charles Tripp
- Abstract summary: This paper introduces the main considerations necessary for making sound use of energy measurement tools.
It includes the use of at-the-wall versus on-device measurements, sampling strategies and best practices, common sources of error, and proxy measures.
It concludes with a call to action for improving the state of the art of measurement methods.
- Abstract: Concerns about the environmental footprint of machine learning are increasing. While studies of energy use and emissions of ML models are a growing subfield, most ML researchers and developers still do not incorporate energy measurement as part of their work practices. While measuring energy is a crucial step towards reducing carbon footprint, it is also not straightforward. This paper introduces the main considerations necessary for making sound use of energy measurement tools and interpreting energy estimates, including the use of at-the-wall versus on-device measurements, sampling strategies and best practices, common sources of error, and proxy measures. It also contains practical tips and real-world scenarios that illustrate how these considerations come into play. It concludes with a call to action for improving the state of the art of measurement methods and standards for facilitating robust comparisons between diverse hardware and software environments.
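The abstract's point about sampling strategies can be made concrete: software-based energy estimation typically polls a power sensor at a fixed interval and integrates the readings over time. The sketch below is illustrative only (not from the paper); `read_power_watts` is a hypothetical stand-in for a real sensor such as RAPL, NVML, or a wall meter, and the integration shows why the sampling interval is a common source of error, since spikes between samples are missed.

```python
import random

def read_power_watts():
    """Hypothetical placeholder for a real power sensor (e.g. RAPL
    counters, NVML, or an at-the-wall meter). Returns a synthetic
    reading in watts for illustration."""
    return 60.0 + random.uniform(-5.0, 5.0)

def estimate_energy_joules(duration_s, interval_s=0.1, read_power=read_power_watts):
    """Estimate energy by sampling power at a fixed interval and
    integrating with the trapezoidal rule. A coarser interval lowers
    overhead but misses short power spikes between samples."""
    samples = []
    t = 0.0
    while t <= duration_s:
        samples.append(read_power())
        t += interval_s
    # Trapezoid per interval: interval * mean of the two adjacent samples
    return sum(
        interval_s * (a + b) / 2.0
        for a, b in zip(samples, samples[1:])
    )
```

With a constant 100 W draw over one second, the estimate converges to 100 J regardless of interval; with bursty workloads, the choice of `interval_s` materially changes the answer, which is one reason the paper stresses sampling best practices.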
Related papers
- Impact of ML Optimization Tactics on Greener Pre-Trained ML Models [46.78148962732881]
This study aims to (i) analyze image classification datasets and pre-trained models, (ii) improve inference efficiency by comparing optimized and non-optimized models, and (iii) assess the economic impact of the optimizations.
We conduct a controlled experiment to evaluate the impact of various PyTorch optimization techniques (dynamic quantization, torch.compile, local pruning, and global pruning) on 42 Hugging Face models for image classification.
Dynamic quantization demonstrates significant reductions in inference time and energy consumption, making it highly suitable for large-scale systems.
arXiv Detail & Related papers (2024-09-19T16:23:03Z)
- Normalizing Energy Consumption for Hardware-Independent Evaluation [9.658615045493734]
We present a novel methodology for normalizing energy consumption across different hardware platforms.
Our approach shows that the number of reference points, the type of regression and the inclusion of computational metrics significantly influences the normalization process.
arXiv Detail & Related papers (2024-09-09T13:38:00Z)
- Evaluating the Energy Consumption of Machine Learning: Systematic Literature Review and Experiments [16.62572282726245]
Monitoring, understanding, and optimizing the energy consumption of Machine Learning (ML) are among the reasons why evaluating its energy usage is necessary.
There exists no universal tool that can answer this question for all use cases, and there may even be disagreement on how to evaluate energy consumption for a specific use case.
arXiv Detail & Related papers (2024-08-27T15:08:06Z)
- Computing Within Limits: An Empirical Study of Energy Consumption in ML Training and Inference [2.553456266022126]
Machine learning (ML) has seen tremendous advancements, but its environmental footprint remains a concern.
Acknowledging the growing environmental impact of ML, this paper investigates Green ML.
arXiv Detail & Related papers (2024-06-20T13:59:34Z)
- Enhancing Energy-Awareness in Deep Learning through Fine-Grained Energy Measurement [11.37120215795946]
This paper introduces FECoM (Fine-grained Energy Consumption Meter), a framework for fine-grained Deep Learning energy consumption measurement.
FECoM addresses the challenges of measuring energy consumption at fine-grained level by using static instrumentation and considering various factors, including computational load stability and temperature.
arXiv Detail & Related papers (2023-08-23T17:32:06Z)
- A Comparative Study of Machine Learning Algorithms for Anomaly Detection in Industrial Environments: Performance and Environmental Impact [62.997667081978825]
This study seeks to address the demands of high-performance machine learning models with environmental sustainability.
Traditional machine learning algorithms, such as Decision Trees and Random Forests, demonstrate robust efficiency and performance.
However, superior outcomes were obtained with optimised configurations, albeit with a commensurate increase in resource consumption.
arXiv Detail & Related papers (2023-07-01T15:18:00Z)
- Counting Carbon: A Survey of Factors Influencing the Emissions of Machine Learning [77.62876532784759]
Machine learning (ML) requires using energy to carry out computations during the model training process.
The generation of this energy comes with an environmental cost in terms of greenhouse gas emissions, depending on the quantity used and the energy source.
We present a survey of the carbon emissions of 95 ML models across time and different tasks in natural language processing and computer vision.
arXiv Detail & Related papers (2023-02-16T18:35:00Z)
- Energy Efficiency of Training Neural Network Architectures: An Empirical Study [11.325530936177493]
The evaluation of Deep Learning models has traditionally focused on criteria such as accuracy, F1 score, and related measures.
The computations needed to train such models entail a large carbon footprint.
We study the relations between DL model architectures and their environmental impact in terms of energy consumed and CO$_2$ emissions produced during training.
arXiv Detail & Related papers (2023-02-02T09:20:54Z)
- Guaranteed efficient energy estimation of quantum many-body Hamiltonians using ShadowGrouping [49.36226952764697]
Estimation of the energy of quantum many-body systems is a paradigmatic task in various research fields.
We aim to find the optimal strategy with single-qubit measurements that yields the highest provable accuracy given a total measurement budget.
We develop a practical, efficient estimation strategy, which we call ShadowGrouping.
arXiv Detail & Related papers (2023-01-09T14:41:07Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
- Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning [68.37641996188133]
We introduce a framework for tracking real-time energy consumption and carbon emissions.
We create a leaderboard for energy efficient reinforcement learning algorithms.
We propose strategies for mitigation of carbon emissions and reduction of energy consumption.
arXiv Detail & Related papers (2020-01-31T05:12:59Z)
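Several of the papers above convert measured energy into carbon emissions. The conversion itself is simple; a minimal sketch follows, where the grid carbon-intensity value is an illustrative assumption (intensity varies widely by region and time of day, which is why these papers stress reporting it alongside raw energy numbers):

```python
def joules_to_kwh(energy_j):
    """Convert joules to kilowatt-hours (1 kWh = 3.6e6 J)."""
    return energy_j / 3.6e6

def carbon_grams(energy_j, grid_intensity_g_per_kwh):
    """Estimate emissions: energy in joules -> kWh, multiplied by the
    grid carbon intensity in grams of CO2 per kWh."""
    return joules_to_kwh(energy_j) * grid_intensity_g_per_kwh

# Example: a 2-hour training run drawing 300 W on average,
# on a grid assumed (hypothetically) to emit 400 gCO2/kWh.
energy_j = 300.0 * 2 * 3600   # 2,160,000 J = 0.6 kWh
print(carbon_grams(energy_j, 400.0))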
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.