GAISSALabel: A tool for energy labeling of ML models
- URL: http://arxiv.org/abs/2401.17150v1
- Date: Tue, 30 Jan 2024 16:31:48 GMT
- Title: GAISSALabel: A tool for energy labeling of ML models
- Authors: Pau Duran, Joel Castaño, Cristina Gómez, Silverio Martínez-Fernández
- Abstract summary: This paper introduces GAISSALabel, a web-based tool designed to evaluate and label the energy efficiency of Machine Learning models.
The tool's adaptability allows for customization in the proposed labeling system, ensuring its relevance in the rapidly evolving ML field.
- Score: 1.5899411215927992
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Background: The increasing environmental impact of Information Technologies,
particularly in Machine Learning (ML), highlights the need for sustainable
practices in software engineering. The escalating complexity and energy
consumption of ML models call for tools to assess and improve their energy
efficiency. Goal: This paper introduces GAISSALabel, a web-based tool designed
to evaluate and label the energy efficiency of ML models. Method: GAISSALabel
is a technology-transfer development building on prior research on
energy-efficiency classification of ML models: a holistic tool for assessing
both the training and inference phases of ML models, considering metrics
such as power draw, model size efficiency, CO2e emissions, and more.
Results: GAISSALabel offers a labeling system for energy efficiency, akin to
labels on consumer appliances, making it accessible to ML stakeholders of
varying backgrounds. The tool's adaptability allows for customization in the
proposed labeling system, ensuring its relevance in the rapidly evolving ML
field. Conclusions: GAISSALabel represents a significant step forward in
sustainable software engineering, offering a solution for balancing
high-performance ML models with environmental impacts. The tool's effectiveness
and market relevance will be further assessed through planned evaluations using
the Technology Acceptance Model.
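The abstract does not reproduce the grading scheme itself; as a rough illustration of an appliance-style label, the Python sketch below maps per-metric measurements to letter grades A-E. All metric names and boundary values are assumptions for demonstration, not GAISSALabel's actual configuration.

```python
# Illustrative appliance-style energy label for an ML model. Metric names
# and grade boundaries are assumptions; GAISSALabel's scheme may differ.
from bisect import bisect_left

# Per-metric boundaries in ascending order (lower measurements are better).
BOUNDARIES = {
    "power_draw_w":  [50, 150, 300, 600],    # average watts during training
    "co2e_kg":       [1, 10, 100, 1000],     # emissions for a full run
    "model_size_mb": [50, 250, 1000, 5000],  # parameter footprint on disk
}
GRADES = "ABCDE"

def grade(metric: str, value: float) -> str:
    """Map a raw measurement to a letter grade via its boundary list."""
    return GRADES[bisect_left(BOUNDARIES[metric], value)]

def label(measurements: dict) -> dict:
    """Grade each metric and derive a naive overall grade (worst of all)."""
    per_metric = {m: grade(m, v) for m, v in measurements.items()}
    return {"per_metric": per_metric, "overall": max(per_metric.values())}

print(label({"power_draw_w": 120, "co2e_kg": 42, "model_size_mb": 480}))
# -> {'per_metric': {'power_draw_w': 'B', 'co2e_kg': 'C',
#     'model_size_mb': 'C'}, 'overall': 'C'}
```

Since the paper emphasizes that the labeling system is customizable, a real deployment would presumably let users edit the boundary lists rather than hard-code them.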
Related papers
- MLPerf Power: Benchmarking the Energy Efficiency of Machine Learning Systems from μWatts to MWatts for Sustainable AI [5.50579824344998]
Machine learning (ML) technologies have led to a surge in power consumption across diverse systems.
This paper introduces MLPerf Power, a comprehensive benchmarking methodology to evaluate the energy efficiency of ML systems at power levels ranging from microwatts to megawatts.
arXiv Detail & Related papers (2024-10-15T20:06:33Z)
- Impact of ML Optimization Tactics on Greener Pre-Trained ML Models [46.78148962732881]
This study aims to (i) analyze image classification datasets and pre-trained models, (ii) improve inference efficiency by comparing optimized and non-optimized models, and (iii) assess the economic impact of the optimizations.
We conduct a controlled experiment to evaluate the impact of applying various PyTorch optimization techniques (dynamic quantization, torch.compile, local pruning, and global pruning) to 42 Hugging Face models for image classification.
Dynamic quantization demonstrates significant reductions in inference time and energy consumption, making it highly suitable for large-scale systems.
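As a concrete illustration of the tactic highlighted above, here is a minimal dynamic-quantization sketch in PyTorch. The model choice is an assumption for demonstration; the study itself covers 42 Hugging Face image-classification models.

```python
# Minimal sketch of PyTorch dynamic quantization. Model choice is an
# illustrative assumption, not the paper's exact experimental setup.
import torch
from transformers import ViTForImageClassification

model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224"
).eval()

# Rewrite nn.Linear modules to int8 kernels; activations are quantized
# on the fly at inference time, so no calibration data is needed.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

pixel_values = torch.randn(1, 3, 224, 224)  # one dummy image
with torch.no_grad():
    logits = quantized(pixel_values).logits
print(logits.shape)  # torch.Size([1, 1000])
```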
arXiv Detail & Related papers (2024-09-19T16:23:03Z)
- Computing Within Limits: An Empirical Study of Energy Consumption in ML Training and Inference [2.553456266022126]
Machine learning (ML) has seen tremendous advancements, but its environmental footprint remains a concern.
Acknowledging the growing environmental impact of ML, this paper investigates Green ML.
arXiv Detail & Related papers (2024-06-20T13:59:34Z)
- EcoMLS: A Self-Adaptation Approach for Architecting Green ML-Enabled Systems [1.0923877073891446]
Self-adaptation techniques, recognized for their potential in energy savings within software systems, have yet to be extensively explored in Machine Learning-Enabled Systems.
This research underscores the feasibility of enhancing MLS sustainability through intelligent runtime adaptations.
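The summary does not detail EcoMLS's architecture; the sketch below only illustrates the general idea of an energy-aware runtime adaptation: switch to the most accurate model variant that fits a given energy budget. All names and numbers are hypothetical.

```python
# Hypothetical illustration of energy-aware runtime adaptation: pick the
# most accurate model variant that fits an energy budget per request.
# This is a generic sketch, not EcoMLS's actual design.
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    accuracy: float   # offline-estimated quality
    joules: float     # measured energy per inference

VARIANTS = sorted(
    [Variant("large", 0.91, 8.0), Variant("medium", 0.88, 3.5),
     Variant("small", 0.84, 1.2)],
    key=lambda v: -v.accuracy,   # most accurate first
)

def choose(budget_j: float) -> Variant:
    """Return the most accurate variant whose energy fits the budget."""
    for v in VARIANTS:
        if v.joules <= budget_j:
            return v
    return VARIANTS[-1]  # fall back to the cheapest variant

print(choose(4.0).name)  # medium
```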
arXiv Detail & Related papers (2024-04-17T14:12:47Z)
- EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning [84.6451394629312]
We introduce EgoPlan-Bench, a benchmark to evaluate the planning abilities of MLLMs in real-world scenarios.
We show that EgoPlan-Bench poses significant challenges, highlighting a substantial scope for improvement in MLLMs to achieve human-level task planning.
We also present EgoPlan-IT, a specialized instruction-tuning dataset that effectively enhances model performance on EgoPlan-Bench.
arXiv Detail & Related papers (2023-12-11T03:35:58Z)
- Power Hungry Processing: Watts Driving the Cost of AI Deployment? [74.19749699665216]
Generative, multi-purpose AI systems promise a unified approach to building machine learning (ML) models into technology.
This ambition of "generality" comes at a steep cost to the environment, given the amount of energy these systems require and the amount of carbon that they emit.
We measure deployment cost as the amount of energy and carbon required to perform 1,000 inferences on a representative benchmark dataset using these models.
We conclude with a discussion around the current trend of deploying multi-purpose generative ML systems, and caution that their utility should be more intentionally weighed against increased costs in terms of energy and emissions.
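A minimal sketch of how such a per-1,000-inference measurement could be replicated with the codecarbon package (the package, task, and model are assumptions here, not necessarily the authors' exact setup):

```python
# Sketch of measuring CO2e for 1,000 inferences with the codecarbon
# package; the task and model are illustrative stand-ins.
from codecarbon import EmissionsTracker
from transformers import pipeline

clf = pipeline("text-classification",
               model="distilbert-base-uncased-finetuned-sst-2-english")

tracker = EmissionsTracker(measure_power_secs=1)
tracker.start()
for _ in range(1000):
    clf("A short, representative input sentence.")
emissions_kg = tracker.stop()  # kg CO2e measured over the tracked span

print(f"CO2e for 1,000 inferences: {emissions_kg:.6f} kg")
```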
arXiv Detail & Related papers (2023-11-28T15:09:36Z)
- From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning [52.257422715393574]
We introduce a self-guided methodology for Large Language Models (LLMs) to autonomously discern and select cherry samples from open-source datasets.
Our key innovation, the Instruction-Following Difficulty (IFD) metric, identifies discrepancies between a model's expected responses and its intrinsic generation capability.
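One way to read the IFD metric is as a ratio of losses: how hard the answer is to generate given the instruction versus without it. The sketch below implements that reading with a small causal LM; the model choice is an assumption, and the paper should be consulted for the exact formulation.

```python
# Sketch of an Instruction-Following Difficulty (IFD) style score: the
# model's loss on the answer conditioned on the instruction, divided by
# its loss on the answer alone. Model choice (GPT-2) is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def answer_loss(prompt: str, answer: str) -> float:
    """Mean cross-entropy over the answer tokens, optionally conditioned."""
    answer_ids = tok(answer, return_tensors="pt").input_ids
    if prompt:
        prompt_ids = tok(prompt, return_tensors="pt").input_ids
        ids = torch.cat([prompt_ids, answer_ids], dim=1)
        labels = ids.clone()
        labels[:, : prompt_ids.shape[1]] = -100  # score answer tokens only
    else:
        ids, labels = answer_ids, answer_ids
    with torch.no_grad():
        return lm(ids, labels=labels).loss.item()

def ifd(instruction: str, answer: str) -> float:
    return answer_loss(instruction, answer) / answer_loss("", answer)

print(round(ifd("Translate to French: Hello.", " Bonjour."), 3))
```

An IFD near or above 1 suggests the instruction barely helps the model produce the answer, marking the sample as difficult and potentially informative for tuning.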
arXiv Detail & Related papers (2023-08-23T09:45:29Z)
- Benchmarking Automated Machine Learning Methods for Price Forecasting Applications [58.720142291102135]
We show the possibility of substituting manually created ML pipelines with automated machine learning (AutoML) solutions.
Based on the CRISP-DM process, we split the manual ML pipeline into a machine learning and non-machine learning part.
We show in a case study for the industrial use case of price forecasting, that domain knowledge combined with AutoML can weaken the dependence on ML experts.
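As a hedged illustration of replacing a hand-built pipeline with an AutoML search, the sketch below uses FLAML on synthetic regression data; the library and data are assumptions for demonstration, not the paper's benchmark setup.

```python
# Sketch of replacing a hand-tuned regressor with an AutoML search using
# FLAML; synthetic data stands in for the paper's price-forecasting sets.
import numpy as np
from flaml import AutoML
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                      # stand-in features
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

automl = AutoML()
# time_budget bounds the search in seconds; FLAML picks the estimator
# family and hyperparameters automatically.
automl.fit(X_tr, y_tr, task="regression", time_budget=30)

print(automl.best_estimator, r2_score(y_te, automl.predict(X_te)))
```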
arXiv Detail & Related papers (2023-04-28T10:27:38Z)
- Machine Learning for a Sustainable Energy Future [8.421378169245827]
We review recent advances in machine learning-driven energy research.
We discuss and evaluate the latest advances in applying ML to the development of energy harvesting.
We offer an outlook of potential research areas in the energy field that stand to further benefit from the application of ML.
arXiv Detail & Related papers (2022-10-19T08:59:53Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
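At the core of BAR is a zeroth-order gradient estimate: the reprogramming parameters are updated from input-output queries alone, with no access to the model's gradients. The following toy sketch shows that estimator on a stand-in quadratic loss; all dimensions and hyperparameters are illustrative.

```python
# Toy zeroth-order (ZO) gradient estimator of the kind BAR relies on:
# update parameters using only black-box loss queries. The quadratic
# stand-in loss and all hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
D, Q = 16, 10          # parameter dimension, queries per estimate
MU, LR = 0.01, 0.1     # smoothing radius, step size

def black_box_loss(theta: np.ndarray) -> float:
    """Stand-in for the target model's loss on reprogrammed inputs."""
    return float(np.sum((theta - 0.3) ** 2))

theta = np.zeros(D)
for step in range(300):
    f0 = black_box_loss(theta)
    grad = np.zeros(D)
    for _ in range(Q):
        u = rng.normal(size=D)
        u /= np.linalg.norm(u)            # random unit direction
        # One-sided finite difference along u approximates (grad . u).
        grad += (black_box_loss(theta + MU * u) - f0) / MU * u
    theta -= LR * (D / Q) * grad          # dimension-scaled ZO update
print(round(black_box_loss(theta), 6))    # should be close to 0
```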
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.