Kernel-Level Energy-Efficient Neural Architecture Search for Tabular Dataset
- URL: http://arxiv.org/abs/2504.08359v1
- Date: Fri, 11 Apr 2025 08:48:54 GMT
- Title: Kernel-Level Energy-Efficient Neural Architecture Search for Tabular Dataset
- Authors: Hoang-Loc La, Phuong Hoai Ha
- Abstract summary: This paper introduces an energy-efficient Neural Architecture Search (NAS) method that focuses on identifying architectures that minimize energy consumption while maintaining acceptable accuracy. Remarkably, the optimal architecture suggested by this method can reduce energy consumption by up to 92% compared to architectures recommended by conventional NAS.
- Score: 0.7136205674624813
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Many studies estimate energy consumption using proxy metrics like memory usage, FLOPs, and inference latency, with the assumption that reducing these metrics will also lower energy consumption in neural networks. This paper, however, takes a different approach by introducing an energy-efficient Neural Architecture Search (NAS) method that directly focuses on identifying architectures that minimize energy consumption while maintaining acceptable accuracy. Unlike previous methods that primarily target vision and language tasks, the approach proposed here specifically addresses tabular datasets. Remarkably, the optimal architecture suggested by this method can reduce energy consumption by up to 92% compared to architectures recommended by conventional NAS.
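The abstract does not describe the search procedure itself; as an illustration only, the sketch below shows a minimal accuracy-constrained, energy-minimizing search over small MLP architectures for tabular data. The candidate space, the accuracy floor, and the `measure_energy_joules` probe are hypothetical placeholders, not the paper's kernel-level method.

```python
# Illustrative sketch: pick the lowest-energy architecture whose accuracy stays
# above a floor. Candidate space, threshold, and energy probe are assumptions.
import time

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def measure_energy_joules(fn):
    """Placeholder energy probe. On real hardware this would read RAPL/NVML
    counters around fn(); here wall-clock time is used as a crude stand-in."""
    start = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - start

# Synthetic tabular dataset standing in for a real one.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ACCURACY_FLOOR = 0.85  # "acceptable accuracy" constraint (assumed value)
candidate_layers = [(64,), (128,), (64, 64), (128, 64), (256, 128)]

best = None
for layers in candidate_layers:
    model = MLPClassifier(hidden_layer_sizes=layers, max_iter=300, random_state=0)
    model.fit(X_tr, y_tr)
    acc = model.score(X_te, y_te)
    # The deployed cost is dominated by inference, so measure that phase.
    _, energy = measure_energy_joules(lambda: model.predict(X_te))
    if acc >= ACCURACY_FLOOR and (best is None or energy < best[2]):
        best = (layers, acc, energy)

print("selected architecture:", best)
```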
Related papers
- Reconsidering the energy efficiency of spiking neural networks [4.37952937111446]
Spiking neural networks (SNNs) are generally regarded as more energy-efficient because they do not use multiplications.
We present a comparison of the energy consumption of artificial neural networks (ANNs) and SNNs from a hardware perspective.
arXiv Detail & Related papers (2024-08-29T07:00:35Z) - Revisiting DNN Training for Intermittently-Powered Energy-Harvesting Micro-Computers [0.6721767679705013]
This study introduces and evaluates a novel training methodology tailored for Deep Neural Networks in energy-constrained environments.
We propose a dynamic dropout technique that adapts to both the architecture of the device and the variability in energy availability.
Preliminary results demonstrate that this strategy provides 6 to 22 percent accuracy improvements compared to the state of the art with less than 5 percent additional compute.
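The summary does not say how the dropout rate tracks the energy supply; a minimal sketch, assuming the drop probability is simply scaled by the fraction of the energy budget currently available, could look like the following. The linear scaling rule, bounds, and `energy_fraction` signal are illustrative assumptions, not the paper's technique.

```python
# Illustrative energy-adaptive dropout: drop more units when harvested energy
# is scarce so fewer activations are computed and updated. The scaling rule
# and the bounds below are assumptions for illustration only.
import torch
import torch.nn.functional as F

def energy_adaptive_dropout(x: torch.Tensor,
                            energy_fraction: float,
                            p_min: float = 0.1,
                            p_max: float = 0.7,
                            training: bool = True) -> torch.Tensor:
    """energy_fraction in [0, 1]: share of the energy budget currently available."""
    p = p_max - (p_max - p_min) * max(0.0, min(1.0, energy_fraction))
    return F.dropout(x, p=p, training=training)

# Example: plenty of energy -> low dropout; nearly depleted -> high dropout.
h = torch.randn(32, 128)
out_full = energy_adaptive_dropout(h, energy_fraction=0.9)
out_low = energy_adaptive_dropout(h, energy_fraction=0.1)
```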
arXiv Detail & Related papers (2024-08-25T01:13:00Z) - Measuring the Energy Consumption and Efficiency of Deep Neural Networks: An Empirical Analysis and Design Recommendations [0.49478969093606673]
BUTTER-E dataset is an augmentation to the BUTTER Empirical Deep Learning dataset.
This dataset reveals the complex relationship between dataset size, network structure, and energy use.
We propose a straightforward and effective energy model that accounts for network size, computing, and memory hierarchy.
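The summary only names the ingredients of that energy model; as a rough illustration, an additive model of this kind could be written as below, splitting energy into compute work, traffic at each level of the memory hierarchy, and a static term. The coefficient values are hypothetical placeholders that would be fitted to measured data such as BUTTER-E.

```python
# Rough additive energy model in the spirit described above. All coefficient
# values are hypothetical and would be fitted to hardware measurements.
def estimate_energy_joules(flops: float,
                           cache_bytes: float,
                           dram_bytes: float,
                           runtime_s: float,
                           pj_per_flop: float = 4.0,
                           pj_per_cache_byte: float = 10.0,
                           pj_per_dram_byte: float = 600.0,
                           static_watts: float = 2.0) -> float:
    dynamic_pj = (pj_per_flop * flops
                  + pj_per_cache_byte * cache_bytes
                  + pj_per_dram_byte * dram_bytes)
    return dynamic_pj * 1e-12 + static_watts * runtime_s

# Example: a small fully connected layer's forward pass (made-up workload numbers).
print(estimate_energy_joules(flops=2e6, cache_bytes=5e5, dram_bytes=1e5, runtime_s=1e-4))
```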
arXiv Detail & Related papers (2024-03-13T00:27:19Z) - LitE-SNN: Designing Lightweight and Efficient Spiking Neural Network through Spatial-Temporal Compressive Network Search and Joint Optimization [48.41286573672824]
Spiking Neural Networks (SNNs) mimic the information-processing mechanisms of the human brain and are highly energy-efficient.
We propose a new approach named LitE-SNN that incorporates both spatial and temporal compression into the automated network design process.
arXiv Detail & Related papers (2024-01-26T05:23:11Z) - EC-NAS: Energy Consumption Aware Tabular Benchmarks for Neural Architecture Search [7.178157652947453]
Energy consumption from the selection, training, and deployment of deep learning models has seen a significant uptick recently.
This work aims to facilitate the design of energy-efficient deep learning models that require less computational resources.
arXiv Detail & Related papers (2022-10-12T08:39:35Z) - Energy Consumption of Neural Networks on NVIDIA Edge Boards: an Empirical Model [6.809944967863927]
Recently, there has been a trend of shifting the execution of deep learning inference tasks toward the edge of the network, closer to the user, to reduce latency and preserve data privacy.
In this work, we aim at profiling the energetic consumption of inference tasks for some modern edge nodes.
We have then distilled a simple, practical model that can provide an estimate of the energy consumption of a certain inference task on the considered boards.
arXiv Detail & Related papers (2022-10-04T14:12:59Z) - Neural Architecture Search for Speech Emotion Recognition [72.1966266171951]
We propose to apply neural architecture search (NAS) techniques to automatically configure the SER models.
We show that NAS can improve SER performance (54.89% to 56.28%) while maintaining model parameter sizes.
arXiv Detail & Related papers (2022-03-31T10:16:10Z) - Neural Architecture Search for Efficient Uncalibrated Deep Photometric Stereo [105.05232615226602]
We leverage differentiable neural architecture search (NAS) strategy to find uncalibrated PS architecture automatically.
Experiments on the DiLiGenT dataset show that the performance of the automatically searched neural architectures compares favorably with state-of-the-art uncalibrated PS methods.
arXiv Detail & Related papers (2021-10-11T21:22:17Z) - D-DARTS: Distributed Differentiable Architecture Search [75.12821786565318]
Differentiable ARchiTecture Search (DARTS) is one of the most trending Neural Architecture Search (NAS) methods.
We propose D-DARTS, a novel solution that addresses this problem by nesting several neural networks at cell-level.
arXiv Detail & Related papers (2021-08-20T09:07:01Z) - iDARTS: Differentiable Architecture Search with Stochastic Implicit Gradients [75.41173109807735]
Differentiable ARchiTecture Search (DARTS) has recently become the mainstream of neural architecture search (NAS).
We tackle the hypergradient computation in DARTS based on the implicit function theorem.
We show that the architecture optimisation with the proposed method, named iDARTS, is expected to converge to a stationary point.
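The summary does not reproduce the derivation; for context, the standard implicit-function-theorem expression for the architecture hypergradient, which iDARTS and related methods build on, is sketched below. Here w*(α) denotes the weights converged on the training loss, and in practice the inverse Hessian is approximated rather than formed exactly.

```latex
\nabla_{\alpha}\mathcal{L}_{\mathrm{val}}
  = \frac{\partial \mathcal{L}_{\mathrm{val}}}{\partial \alpha}
  - \frac{\partial \mathcal{L}_{\mathrm{val}}}{\partial w}
    \left(\frac{\partial^{2} \mathcal{L}_{\mathrm{train}}}{\partial w\,\partial w^{\top}}\right)^{-1}
    \frac{\partial^{2} \mathcal{L}_{\mathrm{train}}}{\partial w\,\partial \alpha},
  \qquad \text{evaluated at } w = w^{*}(\alpha).
```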
arXiv Detail & Related papers (2021-06-21T00:44:11Z) - Energy-Efficient Model Compression and Splitting for Collaborative Inference Over Time-Varying Channels [52.60092598312894]
We propose a technique to reduce the total energy bill at the edge device by utilizing model compression and time-varying model split between the edge and remote nodes.
Our proposed solution results in minimal energy consumption and CO2 emission compared to the considered baselines.
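The summary describes splitting a model between an edge device and a remote node; a minimal illustration of the underlying selection problem is to choose the split point that minimizes the edge device's compute energy plus the energy to transmit the intermediate activations. All per-layer numbers and the channel cost below are hypothetical.

```python
# Illustrative split-point selection: run layers [0, split) on the edge device,
# transmit the last local activation, run the rest remotely. All energies,
# activation sizes, and the channel cost are made-up illustrative values.
layer_compute_energy_j = [0.02, 0.05, 0.08, 0.12, 0.15]  # edge energy per layer
activation_bytes = [8e4, 4e4, 2e4, 1e4, 1e3]              # output size of each layer
raw_input_bytes = 3e5                                      # sent if nothing runs locally
tx_energy_per_byte_j = 5e-7                                # varies with channel state

def total_edge_energy(split: int) -> float:
    """Energy paid by the edge device if it executes layers [0, split) locally."""
    compute = sum(layer_compute_energy_j[:split])
    tx_bytes = activation_bytes[split - 1] if split > 0 else raw_input_bytes
    return compute + tx_energy_per_byte_j * tx_bytes

best_split = min(range(len(layer_compute_energy_j) + 1), key=total_edge_energy)
print("best split layer:", best_split, "edge energy (J):", total_edge_energy(best_split))
```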
arXiv Detail & Related papers (2021-06-02T07:36:27Z) - MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS).
We employ a one-shot architecture search approach in order to obtain a reduced search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)