Towards Physical Plausibility in Neuroevolution Systems
- URL: http://arxiv.org/abs/2401.17733v1
- Date: Wed, 31 Jan 2024 10:54:34 GMT
- Title: Towards Physical Plausibility in Neuroevolution Systems
- Authors: Gabriel Cortês, Nuno Lourenço, Penousal Machado
- Abstract summary: The increasing usage of Artificial Intelligence (AI) models, especially Deep Neural Networks (DNNs), is raising power consumption during training and inference.
This work addresses the growing energy consumption problem in Machine Learning (ML), particularly during the inference phase.
Even a slight reduction in power usage can lead to significant energy savings, benefiting users, companies, and the environment.
- Score: 0.276240219662896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing usage of Artificial Intelligence (AI) models, especially Deep
Neural Networks (DNNs), is raising power consumption during training and
inference, posing environmental concerns and driving the need for more
energy-efficient algorithms and hardware solutions. This work addresses the
growing energy consumption problem in Machine Learning (ML), particularly
during the inference phase. Even a slight reduction in power usage can lead to
significant energy savings, benefiting users, companies, and the environment.
Our approach focuses on maximizing the accuracy of Artificial Neural Network
(ANN) models using a neuroevolutionary framework whilst minimizing their power
consumption. To do so, power consumption is considered in the fitness function.
We introduce a new mutation strategy that stochastically reintroduces modules
of layers, with power-efficient modules having a higher chance of being chosen.
We also introduce a novel technique that trains two separate models in a single
training step, promoting one of them to be more power-efficient than the other
while maintaining similar accuracy. The results demonstrate a
reduction in power consumption of ANN models by up to 29.2% without a
significant decrease in predictive performance.
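The abstract names two mechanisms but gives neither the exact fitness function nor the module-selection probabilities. The sketch below is therefore a minimal illustration under assumed forms only: a linear accuracy-versus-power trade-off for the fitness, and module reintroduction probabilities inversely proportional to each module's estimated power. The names (Module, Candidate, est_power_w, the lam weight) are placeholders rather than identifiers from the paper, and the paired-training technique is not sketched because its mechanism is not described in the abstract.

```python
import random
from dataclasses import dataclass


@dataclass
class Module:
    name: str
    est_power_w: float          # estimated (or measured) inference power of this module, in watts


@dataclass
class Candidate:
    modules: list               # modules composing the candidate network
    accuracy: float = 0.0       # validation accuracy after training
    power_w: float = 0.0        # measured or estimated inference power of the whole model


def fitness(cand: Candidate, lam: float = 0.01) -> float:
    """Assumed scalarisation: reward accuracy, penalise power.
    The abstract only states that power consumption is considered in the
    fitness function; this linear trade-off is an illustrative choice."""
    return cand.accuracy - lam * cand.power_w


def reintroduce_module(cand: Candidate, library: list, rng=random) -> Candidate:
    """Mutation sketch: stochastically reinsert a module from a library,
    giving power-efficient modules a higher chance of being chosen
    (here, probability inversely proportional to estimated power)."""
    weights = [1.0 / max(m.est_power_w, 1e-6) for m in library]
    chosen = rng.choices(library, weights=weights, k=1)[0]
    position = rng.randrange(len(cand.modules) + 1)
    return Candidate(modules=cand.modules[:position] + [chosen] + cand.modules[position:])


# Usage with made-up module power estimates:
library = [Module("conv3x3", 2.5), Module("depthwise_conv", 0.8), Module("dense", 1.6)]
parent = Candidate(modules=[library[0], library[2]], accuracy=0.91, power_w=4.1)
child = reintroduce_module(parent, library)
print(fitness(parent), [m.name for m in child.modules])
```

Any other scalarisation (for example, dividing accuracy by power) or a Pareto-based selection would be equally consistent with the abstract; the power term itself would come from measurements such as those discussed in the related papers below.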
Related papers
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate multiple synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Gradual Optimization Learning for Conformational Energy Minimization [69.36925478047682]
The Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks significantly reduces the amount of additional data required.
Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules.
arXiv Detail & Related papers (2023-11-05T11:48:08Z)
- Mantis: Enabling Energy-Efficient Autonomous Mobile Agents with Spiking Neural Networks [14.916996986290902]
Spiking Neural Networks (SNNs) offer low power/energy consumption due to sparse computations and efficient online learning.
We propose Mantis, a methodology to systematically employ SNNs on autonomous mobile agents.
arXiv Detail & Related papers (2022-12-24T00:00:53Z)
- Precise Energy Consumption Measurements of Heterogeneous Artificial Intelligence Workloads [0.534434568021034]
We present measurements of the energy consumption of two typical applications of deep learning models on different types of compute nodes.
One advantage of our approach is that the information on energy consumption is available to all users of the supercomputer (an illustrative power-sampling sketch follows this entry).
arXiv Detail & Related papers (2022-12-03T21:40:55Z)
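The summary above does not say how the per-application energy figures were obtained. As one purely illustrative way to measure them on NVIDIA GPUs, board power can be sampled through NVML and integrated over the run; the sketch below uses the pynvml bindings and rectangle-rule integration, and should not be read as the instrumentation used by the authors.

```python
import threading
import time

import pynvml


def measure_energy_joules(workload, device_index: int = 0, sample_s: float = 0.1) -> float:
    """Run `workload()` while sampling GPU board power via NVML in a background
    thread, then integrate the samples (rectangle rule) into joules.
    Illustrative sketch only; requires an NVIDIA GPU and the pynvml package."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
    samples = []                                    # (timestamp in s, power in W)
    done = threading.Event()

    def sampler():
        while not done.is_set():
            watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0   # NVML reports milliwatts
            samples.append((time.monotonic(), watts))
            time.sleep(sample_s)

    thread = threading.Thread(target=sampler, daemon=True)
    thread.start()
    try:
        workload()                                  # e.g. a batch of inference requests
    finally:
        done.set()
        thread.join()
        pynvml.nvmlShutdown()

    energy = 0.0
    for (t0, w0), (t1, _) in zip(samples, samples[1:]):
        energy += w0 * (t1 - t0)
    return energy


if __name__ == "__main__":
    # Placeholder workload: replace with real model inference.
    print(measure_energy_joules(lambda: time.sleep(2.0)), "J")
```

CPU-side counters (e.g. RAPL) or node-level meters, as typically available on supercomputers, would need different readouts, but the structure is the same: sample power, then integrate over time.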
- EVE: Environmental Adaptive Neural Network Models for Low-power Energy Harvesting System [8.16411986220709]
Energy-harvesting technology, which harvests energy from the ambient environment, is a promising alternative to batteries for powering IoT devices.
This paper proposes EVE, an automated machine learning framework that searches for multiple models with shared weights for energy-harvesting IoT devices.
Experimental results show that the neural network models generated by EVE are on average 2.5x faster than baseline models without pruning and shared weights.
arXiv Detail & Related papers (2022-07-14T20:53:46Z)
- Great Power, Great Responsibility: Recommendations for Reducing Energy for Training Language Models [8.927248087602942]
We investigate techniques that can be used to reduce the energy consumption of common NLP applications.
These techniques can lead to a significant reduction in energy consumption when training language models or using them for inference.
arXiv Detail & Related papers (2022-05-19T16:03:55Z)
- Powerpropagation: A sparsity inducing weight reparameterisation [65.85142037667065]
We introduce Powerpropagation, a new weight reparameterisation for neural networks that leads to inherently sparse models.
Models trained in this manner exhibit similar performance, but have a weight distribution with markedly higher density at zero, allowing more parameters to be pruned safely.
Here, we combine Powerpropagation with a traditional weight-pruning technique as well as recent state-of-the-art sparse-to-sparse algorithms, showing superior performance on the ImageNet benchmark (a minimal sketch of the reparameterisation follows this entry).
arXiv Detail & Related papers (2021-10-01T10:03:57Z)
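To make the sparsity claim in the Powerpropagation entry above concrete: the paper reparameterises each weight as a sign-preserving power of an underlying parameter, w = theta * |theta|^(alpha - 1) with alpha > 1, so small parameters receive proportionally smaller updates and the trained weight distribution concentrates near zero. The PyTorch-style sketch below assumes that form; the layer sizes, alpha value, and initialisation are illustrative choices, not the authors' configuration.

```python
import torch
import torch.nn as nn


class PowerpropLinear(nn.Module):
    """Linear layer with the Powerpropagation-style reparameterisation
    w = theta * |theta|**(alpha - 1); alpha = 1 recovers a standard layer.
    Gradients flow through theta, so weights near zero drift further toward zero."""

    def __init__(self, in_features: int, out_features: int, alpha: float = 2.0):
        super().__init__()
        self.alpha = alpha
        self.theta = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.theta, a=5 ** 0.5)

    def effective_weight(self) -> torch.Tensor:
        return self.theta * self.theta.abs().pow(self.alpha - 1.0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, self.effective_weight(), self.bias)


# After training, magnitude pruning is applied to the *effective* weights;
# the claim is that far more of them sit near zero and can be removed safely.
layer = PowerpropLinear(64, 32, alpha=2.0)
x = torch.randn(8, 64)
print(layer(x).shape)                                   # torch.Size([8, 32])
print((layer.effective_weight().abs() < 1e-3).float().mean().item())
```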
- Compute and Energy Consumption Trends in Deep Learning Inference [67.32875669386488]
We study relevant models in the areas of computer vision and natural language processing.
For a sustained increase in performance, we see much softer growth in energy consumption than previously anticipated.
arXiv Detail & Related papers (2021-09-12T09:40:18Z)
- SpikeDyn: A Framework for Energy-Efficient Spiking Neural Networks with Continual and Unsupervised Learning Capabilities in Dynamic Environments [14.727296040550392]
Spiking Neural Networks (SNNs) have the potential for efficient unsupervised and continual learning because of their biological plausibility.
We propose SpikeDyn, a framework for energy-efficient SNNs with continual and unsupervised learning capabilities in dynamic environments.
arXiv Detail & Related papers (2021-02-28T08:26:23Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)