EC-NAS: Energy Consumption Aware Tabular Benchmarks for Neural Architecture Search
- URL: http://arxiv.org/abs/2210.06015v4
- Date: Fri, 22 Mar 2024 10:59:43 GMT
- Title: EC-NAS: Energy Consumption Aware Tabular Benchmarks for Neural Architecture Search
- Authors: Pedram Bakhtiarifard, Christian Igel, Raghavendra Selvan
- Abstract summary: Energy consumption from the selection, training, and deployment of deep learning models has seen a significant uptick recently.
This work aims to facilitate the design of energy-efficient deep learning models that require fewer computational resources.
- Score: 7.178157652947453
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Energy consumption from the selection, training, and deployment of deep learning models has seen a significant uptick recently. This work aims to facilitate the design of energy-efficient deep learning models that require fewer computational resources and prioritize environmental sustainability by focusing on energy consumption. Neural architecture search (NAS) benefits from tabular benchmarks, which evaluate NAS strategies cost-effectively through precomputed performance statistics. We advocate for including energy efficiency as an additional performance criterion in NAS. To this end, we introduce an enhanced tabular benchmark encompassing data on energy consumption for varied architectures. The benchmark, designated as EC-NAS, has been made available in an open-source format to advance research in energy-conscious NAS. EC-NAS incorporates a surrogate model to predict energy consumption, which helps reduce the energy expended in creating the dataset. Our findings emphasize the potential of EC-NAS by leveraging multi-objective optimization algorithms, revealing the trade-off between energy usage and accuracy. This suggests the feasibility of identifying energy-lean architectures with little or no compromise in performance.
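As a rough illustration of what multi-objective optimization over such a tabular benchmark looks like, the sketch below computes the Pareto front over precomputed (accuracy, energy) statistics. The records and field names are hypothetical; EC-NAS's actual query API may differ.

```python
# Minimal Pareto-front computation over a tabular benchmark.
# Records are illustrative stand-ins, not actual EC-NAS entries.

def pareto_front(archs):
    """Keep architectures not dominated on (lower energy, higher accuracy)."""
    front = []
    for a in archs:
        dominated = any(
            b["energy_kwh"] <= a["energy_kwh"]
            and b["accuracy"] >= a["accuracy"]
            and (b["energy_kwh"] < a["energy_kwh"] or b["accuracy"] > a["accuracy"])
            for b in archs
        )
        if not dominated:
            front.append(a)
    return front

archs = [  # hypothetical precomputed statistics
    {"id": "arch-a", "accuracy": 0.943, "energy_kwh": 1.8},
    {"id": "arch-b", "accuracy": 0.941, "energy_kwh": 0.6},
    {"id": "arch-c", "accuracy": 0.902, "energy_kwh": 0.9},
]
print(pareto_front(archs))  # arch-a and arch-b survive; arch-c is dominated by arch-b
```

Because every entry is precomputed, a search strategy can sweep thousands of candidates this way without training anything, which is what makes energy-aware tabular benchmarks cheap to use.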
Related papers
- Kernel-Level Energy-Efficient Neural Architecture Search for Tabular Dataset [0.7136205674624813]
This paper introduces an energy-efficient Neural Architecture Search (NAS) method that focuses on identifying architectures that minimize energy consumption while maintaining acceptable accuracy.
Remarkably, the optimal architecture suggested by this method can reduce energy consumption by up to 92% compared to architectures recommended by conventional NAS.
arXiv Detail & Related papers (2025-04-11T08:48:54Z)
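The constrained-selection idea above (minimize energy subject to an accuracy floor) fits in a few lines. This is a hypothetical sketch with made-up field names, not the paper's method:

```python
def least_energy_within_budget(archs, min_accuracy=0.92):
    """Among candidates meeting the accuracy floor, pick the least energy-hungry."""
    feasible = [a for a in archs if a["accuracy"] >= min_accuracy]
    return min(feasible, key=lambda a: a["energy_kwh"], default=None)
```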
- THOR: A Generic Energy Estimation Approach for On-Device Training [34.57867978862375]
THOR is a generic approach for energy consumption estimation in deep neural network (DNN) training.
We conduct extensive experiments with various types of models across different real-world platforms.
The results demonstrate that THOR reduces the Mean Absolute Percentage Error (MAPE) of energy estimates by up to 30%.
arXiv Detail & Related papers (2025-01-27T03:29:02Z)
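MAPE, the metric THOR is evaluated on, is standard; for reference (this is the textbook definition, not code from the paper):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

# e.g. measured vs. estimated training energy in kWh (made-up numbers)
print(mape([1.20, 0.75, 2.10], [1.05, 0.80, 2.40]))  # ~11.1
```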
- Just In Time Transformers [2.7350304370706797]
JITtrans is a novel transformer deep learning model that significantly improves energy consumption forecasting accuracy.
Our findings highlight the potential of advanced predictive technologies to revolutionize energy management and advance sustainable power systems.
arXiv Detail & Related papers (2024-10-22T10:33:00Z)
- Revisiting DNN Training for Intermittently Powered Energy Harvesting Micro Computers [0.6721767679705013]
This study introduces and evaluates a novel training methodology tailored for Deep Neural Networks in energy-constrained environments.
We propose a dynamic dropout technique that adapts to both the architecture of the device and the variability in energy availability.
Preliminary results demonstrate that this strategy provides 6 to 22 percent accuracy improvements compared to the state of the art with less than 5 percent additional compute.
arXiv Detail & Related papers (2024-08-25T01:13:00Z)
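One plausible reading of an energy-adaptive dropout is sketched below: the drop rate rises as the harvested-energy budget falls. This is illustrative only; note that plain dropout merely zeroes activations, so the paper's method would need structured dropping to actually skip computation.

```python
import torch
import torch.nn.functional as F

def energy_adaptive_dropout(x, energy_level, p_min=0.1, p_max=0.6, training=True):
    """energy_level in [0, 1]: 1.0 = full budget, 0.0 = nearly depleted."""
    level = max(0.0, min(1.0, energy_level))
    p = p_max - (p_max - p_min) * level  # scarce energy -> higher drop rate
    return F.dropout(x, p=p, training=training)

x = torch.randn(8, 32)
y = energy_adaptive_dropout(x, energy_level=0.25)  # p = 0.475
```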
- Measuring the Energy Consumption and Efficiency of Deep Neural Networks: An Empirical Analysis and Design Recommendations [0.49478969093606673]
The BUTTER-E dataset is an augmentation of the BUTTER Empirical Deep Learning dataset.
This dataset reveals the complex relationship between dataset size, network structure, and energy use.
We propose a straightforward and effective energy model that accounts for network size, computing, and memory hierarchy.
arXiv Detail & Related papers (2024-03-13T00:27:19Z)
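A model of the kind described above can be as simple as a linear fit of energy against size, compute, and memory traffic. The numbers below are synthetic, not BUTTER-E data, and the paper's actual model may differ.

```python
import numpy as np

X = np.array([
    # params (M), GFLOPs, memory traffic (GB)
    [1.2,  0.5,  0.8],
    [5.0,  2.1,  2.4],
    [11.7, 4.9,  5.1],
    [25.6, 11.3, 9.8],
])
y = np.array([14.0, 55.0, 121.0, 266.0])  # measured energy per batch (J)

A = np.c_[X, np.ones(len(X))]  # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b, c, d = coef
print(f"E ~ {a:.2f}*params + {b:.2f}*GFLOPs + {c:.2f}*mem + {d:.2f}")
```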
- Robustifying and Boosting Training-Free Neural Architecture Search [49.828875134088904]
We propose a robustifying and boosting training-free NAS (RoBoT) algorithm to develop a robust and consistently better-performing metric on diverse tasks.
Remarkably, the expected performance of RoBoT can be theoretically guaranteed, improving over existing training-free NAS methods.
arXiv Detail & Related papers (2024-03-12T12:24:11Z)
- EASRec: Elastic Architecture Search for Efficient Long-term Sequential Recommender Systems [82.76483989905961]
Current Sequential Recommender Systems (SRSs) suffer from computational and resource inefficiencies.
We develop the Elastic Architecture Search for Efficient Long-term Sequential Recommender Systems (EASRec).
EASRec introduces data-aware gates that leverage historical information from the input data batch to improve the performance of the recommendation network.
arXiv Detail & Related papers (2024-02-01T07:22:52Z)
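One way to read "data-aware gates" is a gate computed from batch-level statistics that rescales a feature pathway. The module below is a speculative sketch of that idea, not EASRec's actual architecture.

```python
import torch
import torch.nn as nn

class DataAwareGate(nn.Module):
    """Gate features using statistics of the incoming batch (illustrative)."""

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)  # gate from batch mean and std

    def forward(self, x):  # x: (batch, dim)
        stats = torch.cat([x.mean(dim=0), x.std(dim=0)])  # batch-level statistics
        gate = torch.sigmoid(self.proj(stats))            # (dim,)
        return x * gate                                   # rescale features
```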
- Carbon-Efficient Neural Architecture Search [6.734886271130468]
This work presents a novel approach to neural architecture search (NAS) that aims to reduce energy costs and increase carbon efficiency during the model design process.
The proposed framework, called carbon-efficient NAS (CE-NAS), consists of NAS evaluation algorithms with different energy requirements and a multi-objective sampling strategy.
Using a recent NAS benchmark dataset and two carbon traces, trace-driven simulations demonstrate that CE-NAS achieves better carbon and search efficiency than three baselines.
arXiv Detail & Related papers (2023-07-09T09:03:10Z)
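The carbon-trace-driven idea can be caricatured as: run cheap proxy evaluations when grid carbon intensity is high and expensive full trainings when it is low. The trace and threshold below are made up, and CE-NAS's actual sampling strategy is multi-objective rather than a hard threshold.

```python
def choose_evaluator(carbon_gco2_per_kwh, threshold=300.0):
    """Pick the evaluation mode for the current scheduling slot."""
    return "zero-cost proxy" if carbon_gco2_per_kwh > threshold else "full training"

trace = [420, 380, 290, 210, 260, 350]  # hypothetical hourly gCO2/kWh
print([choose_evaluator(c) for c in trace])
```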
- Energy Efficiency of Training Neural Network Architectures: An Empirical Study [11.325530936177493]
The evaluation of Deep Learning models has traditionally focused on criteria such as accuracy, F1 score, and related measures.
The computations needed to train such models entail a large carbon footprint.
We study the relations between DL model architectures and their environmental impact in terms of energy consumed and CO$_2$ emissions produced during training.
arXiv Detail & Related papers (2023-02-02T09:20:54Z)
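Converting measured energy into CO2 emissions is a one-liner once a grid carbon intensity is fixed; the intensity below is an assumed grid average, not a value from the paper.

```python
energy_kwh = 12.5               # measured training energy
carbon_intensity = 0.233        # kg CO2e per kWh (assumed grid mix)
co2_kg = energy_kwh * carbon_intensity
print(f"{co2_kg:.2f} kg CO2e")  # 2.91 kg CO2e
```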
- U-Boost NAS: Utilization-Boosted Differentiable Neural Architecture Search [50.33956216274694]
Optimizing resource utilization in target platforms is key to achieving high performance during DNN inference.
We propose a novel hardware-aware NAS framework that optimizes not only for task accuracy and inference latency but also for resource utilization.
We achieve a 2.8-4x speedup for DNN inference compared to prior hardware-aware NAS methods.
arXiv Detail & Related papers (2022-03-23T13:44:15Z)
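For intuition, utilization is just achieved over peak throughput, and the simplest way to trade it off against accuracy and latency is a weighted sum. U-Boost itself optimizes these objectives jointly and differentiably; the weighting below is purely illustrative.

```python
def utilization(achieved_gflops_s, peak_gflops_s):
    """Fraction of the accelerator's peak throughput actually used."""
    return achieved_gflops_s / peak_gflops_s

def nas_score(accuracy, latency_ms, util, w_lat=0.01, w_util=0.2):
    """Scalarized trade-off (illustrative weights, not U-Boost's formulation)."""
    return accuracy - w_lat * latency_ms + w_util * util

print(nas_score(0.76, 12.0, utilization(3.1, 10.0)))  # 0.702
```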
- Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics [117.4281417428145]
This work targets designing a principled and unified training-free framework for Neural Architecture Search (NAS).
NAS has been studied intensively to automate the discovery of top-performing neural networks, but it suffers from heavy resource consumption and often incurs search bias due to truncated training or approximations.
We present a unified framework to understand and accelerate NAS by disentangling the "TEG" characteristics of searched networks.
arXiv Detail & Related papers (2021-08-26T17:52:07Z)
- Binarized Neural Architecture Search for Efficient Object Recognition [120.23378346337311]
Binarized neural architecture search (BNAS) produces extremely compressed models to reduce the heavy computational cost of edge computing on embedded devices.
An accuracy of 96.53% vs. 97.22% is achieved on the CIFAR-10 dataset, but with a significantly compressed model and a 40% faster search than the state-of-the-art PC-DARTS.
arXiv Detail & Related papers (2020-09-08T15:51:23Z)
- Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning [68.37641996188133]
We introduce a framework for tracking real-time energy consumption and carbon emissions.
We create a leaderboard for energy-efficient reinforcement learning algorithms.
We propose strategies for mitigation of carbon emissions and reduction of energy consumption.
arXiv Detail & Related papers (2020-01-31T05:12:59Z)
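The core of real-time energy tracking on a GPU is integrating sampled power draw over time. The sketch below uses NVIDIA's NVML bindings to do just that; it is a bare-bones stand-in, as the cited framework also accounts for CPU and DRAM energy and for carbon emissions.

```python
import time
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

energy_j, last = 0.0, time.time()
for _ in range(60):  # sample once per second for ~a minute
    time.sleep(1.0)
    now = time.time()
    power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
    energy_j += power_w * (now - last)  # rectangle-rule integration
    last = now

print(f"{energy_j / 3.6e6:.6f} kWh")  # joules -> kilowatt-hours
pynvml.nvmlShutdown()
```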
- NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)