EVE: Environmental Adaptive Neural Network Models for Low-power Energy
Harvesting System
- URL: http://arxiv.org/abs/2207.09258v1
- Date: Thu, 14 Jul 2022 20:53:46 GMT
- Title: EVE: Environmental Adaptive Neural Network Models for Low-power Energy
Harvesting System
- Authors: Sahidul Islam, Shanglin Zhou, Ran Ran, Yufang Jin, Wujie Wen, Caiwen
Ding and Mimi Xie
- Abstract summary: Energy harvesting technology that harvests energy from the ambient environment is a promising alternative to batteries for powering these devices.
This paper proposes EVE, an automated machine learning framework to search for desired multi-models with shared weights for energy-harvesting IoT devices.
Experimental results show that the neural network models generated by EVE are on average 2.5X faster than baseline models without pruning and shared weights.
- Score: 8.16411986220709
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: IoT devices are increasingly being implemented with neural network models to
enable smart applications. Energy harvesting (EH) technology, which harvests
energy from the ambient environment, is a promising alternative to batteries for
powering those devices due to its low maintenance cost and the wide availability
of the energy sources. However, the power provided by the energy harvester is
low and intrinsically unstable, since it varies with the ambient environment.
This paper proposes EVE, an automated machine learning (autoML) co-exploration
framework to search for desired multi-models with shared weights for energy
harvesting IoT devices. These shared models incur a significantly reduced memory
footprint, with different levels of model sparsity, latency, and accuracy to
adapt to environmental changes. An efficient on-device implementation
architecture is further developed to execute each model efficiently on device. A
run-time model extraction algorithm is proposed that retrieves an individual
model with negligible overhead when a specific model mode is triggered.
Experimental results show that the neural network models generated by EVE are on
average 2.5X faster than the baseline models without pruning and shared weights.
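The abstract's core idea can be illustrated with a minimal sketch (not the authors' implementation): one shared weight tensor is stored, and each power mode keeps only a fraction of the largest-magnitude weights via a sparsity mask, so "extracting" a model at run time is just applying the mode's mask. The mode names and keep-fractions below are hypothetical, chosen only to show the mechanism.

```python
import numpy as np

# Hypothetical sketch of the shared-weight multi-model idea: a single
# dense weight matrix is stored; each mode prunes it to a different
# sparsity level, trading accuracy for lower latency and energy.

rng = np.random.default_rng(0)
shared_weights = rng.standard_normal((8, 8))

def make_mask(weights, keep_fraction):
    """Keep the largest-magnitude weights; zero out the rest."""
    k = int(weights.size * keep_fraction)
    threshold = np.sort(np.abs(weights), axis=None)[-k]
    return np.abs(weights) >= threshold

# Assumed modes: denser models run when more energy is harvested.
modes = {"high": 1.0, "medium": 0.5, "low": 0.25}
masks = {name: make_mask(shared_weights, frac) for name, frac in modes.items()}

def extract_model(mode):
    """Run-time extraction: apply the mode's mask to the shared weights."""
    return shared_weights * masks[mode]

x = rng.standard_normal(8)
for mode in modes:
    y = extract_model(mode) @ x  # fewer active weights -> fewer MACs
    print(f"{mode}: {int(masks[mode].sum())} weights active")
```

Because every mode shares the same underlying tensor, the memory footprint is that of one model plus the (cheap) masks, which is the property the abstract attributes to the shared models.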
Related papers
- Benchmarking Deep Learning Models for Object Detection on Edge Computing Devices [0.0]
We evaluate state-of-the-art object detection models, including YOLOv8 (Nano, Small, Medium), EfficientDet Lite (Lite0, Lite1, Lite2), and SSD (SSD MobileNet V1, SSDLite MobileDet)
We deployed these models on popular edge devices like the Raspberry Pi 3, 4, and 5 with/without TPU accelerators, and Jetson Orin Nano, collecting key performance metrics such as energy consumption, inference time, and Mean Average Precision (mAP)
Our findings highlight that lower-mAP models such as SSD MobileNet V1 are more energy-efficient and faster.
arXiv Detail & Related papers (2024-09-25T10:56:49Z) - E-QUARTIC: Energy Efficient Edge Ensemble of Convolutional Neural Networks for Resource-Optimized Learning [9.957458251671486]
Ensembling models like Convolutional Neural Networks (CNNs) results in high memory and computing overhead, preventing their deployment in embedded systems.
We propose E-QUARTIC, a novel Energy Efficient Edge Ensembling framework to build ensembles of CNNs targeting Artificial Intelligence (AI)-based embedded systems.
arXiv Detail & Related papers (2024-09-12T19:30:22Z) - Benchmarking Deep Learning Models on NVIDIA Jetson Nano for Real-Time Systems: An Empirical Investigation [2.3636539018632616]
This work empirically investigates the optimization of complex deep learning models to analyze their functionality on an embedded device.
It evaluates the effectiveness of the optimized models in terms of their inference speed for image classification and video action detection.
arXiv Detail & Related papers (2024-06-25T17:34:52Z) - Towards Physical Plausibility in Neuroevolution Systems [0.276240219662896]
The increasing usage of Artificial Intelligence (AI) models, especially Deep Neural Networks (DNNs), is increasing the power consumption during training and inference.
This work addresses the growing energy consumption problem in Machine Learning (ML)
Even a slight reduction in power usage can lead to significant energy savings, benefiting users, companies, and the environment.
arXiv Detail & Related papers (2024-01-31T10:54:34Z) - Deep Convolutional Neural Networks for Short-Term Multi-Energy Demand
Prediction of Integrated Energy Systems [0.0]
This paper develops six novel prediction models based on Convolutional Neural Networks (CNNs) for forecasting multi-energy power consumptions.
The models are applied in a comprehensive manner on a novel integrated electrical, heat and gas network system.
arXiv Detail & Related papers (2023-12-24T14:56:23Z) - Power Hungry Processing: Watts Driving the Cost of AI Deployment? [74.19749699665216]
Generative, multi-purpose AI systems promise a unified approach to building machine learning (ML) models into technology.
This ambition of "generality" comes at a steep cost to the environment, given the amount of energy these systems require and the amount of carbon they emit.
We measure deployment cost as the amount of energy and carbon required to perform 1,000 inferences on a representative benchmark dataset using these models.
We conclude with a discussion of the current trend of deploying multi-purpose generative ML systems, and caution that their utility should be more intentionally weighed against increased costs in terms of energy and emissions.
arXiv Detail & Related papers (2023-11-28T15:09:36Z) - Your Autoregressive Generative Model Can be Better If You Treat It as an
Energy-Based One [83.5162421521224]
We propose a unique method termed E-ARM for training autoregressive generative models.
E-ARM takes advantage of a well-designed energy-based learning objective.
We show that E-ARM can be trained efficiently and is capable of alleviating the exposure bias problem.
arXiv Detail & Related papers (2022-06-26T10:58:41Z) - Learning, Computing, and Trustworthiness in Intelligent IoT
Environments: Performance-Energy Tradeoffs [62.91362897985057]
An Intelligent IoT Environment (iIoTe) is comprised of heterogeneous devices that can collaboratively execute semi-autonomous IoT applications.
This paper provides a state-of-the-art overview of these technologies and illustrates their functionality and performance, with special attention to the tradeoff among resources, latency, privacy and energy consumption.
arXiv Detail & Related papers (2021-10-04T19:41:42Z) - Energy-Efficient Model Compression and Splitting for Collaborative
Inference Over Time-Varying Channels [52.60092598312894]
We propose a technique to reduce the total energy bill at the edge device by utilizing model compression and time-varying model split between the edge and remote nodes.
Our proposed solution results in minimal energy consumption and CO2 emission compared to the considered baselines.
arXiv Detail & Related papers (2021-06-02T07:36:27Z) - Learning Discrete Energy-based Models via Auxiliary-variable Local
Exploration [130.89746032163106]
We propose ALOE, a new algorithm for learning conditional and unconditional EBMs for discrete structured data.
We show that the energy function and sampler can be trained efficiently via a new variational form of power iteration.
We present an energy model guided fuzzer for software testing that achieves comparable performance to well engineered fuzzing engines like libfuzzer.
arXiv Detail & Related papers (2020-11-10T19:31:29Z) - REST: Robust and Efficient Neural Networks for Sleep Monitoring in the
Wild [62.36144064259933]
We propose REST, a new method that simultaneously tackles both issues via adversarial training and controlling the Lipschitz constant of the neural network.
We demonstrate that REST produces highly-robust and efficient models that substantially outperform the original full-sized models in the presence of noise.
By deploying these models to an Android application on a smartphone, we quantitatively observe that REST allows models to achieve up to 17x energy reduction and 9x faster inference.
arXiv Detail & Related papers (2020-01-29T17:23:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.