Carbon-Efficient Neural Architecture Search
- URL: http://arxiv.org/abs/2307.04131v1
- Date: Sun, 9 Jul 2023 09:03:10 GMT
- Title: Carbon-Efficient Neural Architecture Search
- Authors: Yiyang Zhao and Tian Guo
- Abstract summary: This work presents a novel approach to neural architecture search (NAS) that aims to reduce energy costs and increase carbon efficiency during the model design process.
The proposed framework, called carbon-efficient NAS (CE-NAS), consists of NAS evaluation algorithms with different energy requirements, a multi-objective optimizer, and a heuristic GPU allocation strategy.
Using a recent NAS benchmark dataset and two carbon traces, our trace-driven simulations demonstrate that CE-NAS achieves better carbon and search efficiency than the three baselines.
- Score: 6.734886271130468
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This work presents a novel approach to neural architecture search (NAS) that
aims to reduce energy costs and increase carbon efficiency during the model
design process. The proposed framework, called carbon-efficient NAS (CE-NAS),
consists of NAS evaluation algorithms with different energy requirements, a
multi-objective optimizer, and a heuristic GPU allocation strategy. CE-NAS
dynamically balances energy-efficient sampling and energy-consuming evaluation
tasks based on current carbon emissions. Using a recent NAS benchmark dataset
and two carbon traces, our trace-driven simulations demonstrate that CE-NAS
achieves better carbon and search efficiency than the three baselines.
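Below is a minimal Python sketch of the balancing idea described in the abstract: when the grid's carbon intensity is high, GPUs shift toward low-energy architecture sampling; when it is low, toward energy-hungry candidate evaluation. The thresholds, the linear mapping, and all names here are illustrative assumptions, not the paper's actual interface.

```python
from dataclasses import dataclass

@dataclass
class GpuAllocation:
    sampling: int    # GPUs doing low-energy sampling (e.g., predictor-based NAS)
    evaluation: int  # GPUs doing energy-hungry candidate training/evaluation

def allocate_gpus(total_gpus: int, carbon_intensity: float,
                  low: float = 100.0, high: float = 400.0) -> GpuAllocation:
    """Heuristically split GPUs by current grid carbon intensity (gCO2/kWh).

    `low` and `high` are illustrative thresholds, not values from the paper.
    """
    # Map intensity into [0, 1]: 0 = cleanest grid (favor evaluation),
    # 1 = dirtiest grid (favor low-energy sampling).
    frac = min(max((carbon_intensity - low) / (high - low), 0.0), 1.0)
    sampling = round(total_gpus * frac)
    return GpuAllocation(sampling=sampling, evaluation=total_gpus - sampling)

# Clean grid: mostly evaluation. Dirty grid: mostly sampling.
print(allocate_gpus(8, carbon_intensity=120.0))
print(allocate_gpus(8, carbon_intensity=380.0))
```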
Related papers
- Towards Sustainable Large Language Model Serving [3.085867867565808]
We study LLMs from a carbon emission perspective, addressing both operational and embodied emissions.
We characterize the performance and energy consumption of LLaMA models with 1B, 3B, and 7B parameters using two NVIDIA GPU types.
We analytically model operational carbon emissions based on energy consumption and carbon intensities from three grid regions (a sketch of this accounting appears after this list).
arXiv Detail & Related papers (2024-12-31T03:18:10Z)
- ILASH: A Predictive Neural Architecture Search Framework for Multi-Task Applications [2.141170708560114]
We propose a new neural network architecture paradigm (ILASH) that leverages layer sharing to minimize power utilization, increase frame rate, and reduce model size.
We also propose a novel neural network architecture search framework (ILASH-NAS) for efficient construction of these neural network models for a given set of tasks and device constraints.
We observe significant improvements in both generated-model performance and neural search efficiency, with up to 16x less energy utilization, CO2 emission, and training/search time.
arXiv Detail & Related papers (2024-12-03T03:12:16Z)
- CE-NAS: An End-to-End Carbon-Efficient Neural Architecture Search Framework [8.301481000995757]
This work presents a novel approach to neural architecture search (NAS) that aims to increase carbon efficiency for the model design process.
The proposed framework, CE-NAS, addresses the key challenge of NAS's high carbon cost by exploiting variations in the carbon intensity of the energy supply and the differing energy demands of different NAS algorithms.
We demonstrate the efficacy of CE-NAS in lowering carbon emissions while achieving SOTA results on both NAS datasets and open-domain NAS tasks.
arXiv Detail & Related papers (2024-06-03T15:13:21Z)
- Generative AI for Low-Carbon Artificial Intelligence of Things with Large Language Models [67.0243099823109]
Generative AI (GAI) holds immense potential to reduce the carbon emissions of the Artificial Intelligence of Things (AIoT).
In this article, we explore the potential of GAI for carbon emissions reduction and propose a novel GAI-enabled solution for low-carbon AIoT.
We propose a Large Language Model (LLM)-enabled carbon emission optimization framework, in which we design pluggable LLM and Retrieval Augmented Generation (RAG) modules.
arXiv Detail & Related papers (2024-04-28T05:46:28Z)
- EC-NAS: Energy Consumption Aware Tabular Benchmarks for Neural Architecture Search [7.178157652947453]
Energy consumption from the selection, training, and deployment of deep learning models has seen a significant uptick recently.
This work aims to facilitate the design of energy-efficient deep learning models that require fewer computational resources.
arXiv Detail & Related papers (2022-10-12T08:39:35Z)
- NAS-FCOS: Efficient Search for Object Detection Architectures [113.47766862146389]
We propose an efficient method to obtain better object detectors by searching for the feature pyramid network (FPN) and the prediction head of a simple anchor-free object detector.
With a carefully designed search space, search algorithms, and strategies for evaluating network quality, we are able to find top-performing detection architectures within 4 days using 8 V100 GPUs.
arXiv Detail & Related papers (2021-10-24T12:20:04Z)
- Searching Efficient Model-guided Deep Network for Image Denoising [61.65776576769698]
We present a novel approach by connecting model-guided design with NAS (MoD-NAS).
MoD-NAS employs a highly reusable width search strategy and a densely connected search block to automatically select the operations of each layer.
Experimental results on several popular datasets show that our MoD-NAS has achieved even better PSNR performance than current state-of-the-art methods.
arXiv Detail & Related papers (2021-04-06T14:03:01Z)
- Trilevel Neural Architecture Search for Efficient Single Image Super-Resolution [127.92235484598811]
This paper proposes a trilevel neural architecture search (NAS) method for efficient single image super-resolution (SR).
To model the discrete search space, we apply a new continuous relaxation that builds a hierarchical mixture of network paths, cell operations, and kernel widths.
An efficient search algorithm is proposed to perform optimization in a hierarchical supernet manner.
arXiv Detail & Related papers (2021-01-17T12:19:49Z)
- Effective, Efficient and Robust Neural Architecture Search [4.273005643715522]
Recent advances in adversarial attacks show the vulnerability of deep neural networks searched by Neural Architecture Search (NAS).
We propose an Effective, Efficient, and Robust Neural Architecture Search (E2RNAS) method to search for a neural network architecture by taking performance, robustness, and resource constraints into consideration.
Experiments on benchmark datasets show that the proposed E2RNAS method can find adversarially robust architectures with optimized model size and comparable classification accuracy.
arXiv Detail & Related papers (2020-11-19T13:46:23Z)
- Neural Network Design: Learning from Neural Architecture Search [3.9430294028981763]
Neural Architecture Search (NAS) aims to optimize deep neural networks' architecture for better accuracy or smaller computational cost.
Despite various successful approaches proposed to solve the NAS task, its landscape and properties are rarely investigated.
arXiv Detail & Related papers (2020-11-01T15:02:02Z)
- Binarized Neural Architecture Search for Efficient Object Recognition [120.23378346337311]
Binarized neural architecture search (BNAS) produces extremely compressed models to reduce huge computational cost on embedded devices for edge computing.
An accuracy of 96.53% vs. 97.22% is achieved on the CIFAR-10 dataset, but with a significantly compressed model and a 40% faster search than the state-of-the-art PC-DARTS.
arXiv Detail & Related papers (2020-09-08T15:51:23Z)
- Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning [68.37641996188133]
We introduce a framework for tracking real-time energy consumption and carbon emissions.
We create a leaderboard for energy efficient reinforcement learning algorithms.
We propose strategies for mitigation of carbon emissions and reduction of energy consumption.
arXiv Detail & Related papers (2020-01-31T05:12:59Z)
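Two entries above (the sustainable LLM-serving study and the systematic-reporting framework) rest on the same accounting identity: operational emissions equal energy drawn multiplied by the grid's carbon intensity at the time of use. A minimal sketch, with made-up numbers purely for illustration:

```python
def operational_carbon_g(power_w: float, hours: float,
                         intensity_g_per_kwh: float) -> float:
    """Operational emissions in grams CO2e: energy (kWh) x grid intensity."""
    energy_kwh = power_w * hours / 1000.0
    return energy_kwh * intensity_g_per_kwh

# Illustrative only: a 300 W GPU serving for 24 h on three hypothetical grids.
for region, intensity in [("grid_A", 50.0), ("grid_B", 250.0), ("grid_C", 500.0)]:
    kg = operational_carbon_g(300.0, 24.0, intensity) / 1000.0
    print(f"{region}: {kg:.2f} kg CO2e")
```

Embodied emissions from hardware manufacturing would be amortized on top of this figure and are not modeled here.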