Energy-Harvesting Distributed Machine Learning
- URL: http://arxiv.org/abs/2102.05639v1
- Date: Wed, 10 Feb 2021 18:53:51 GMT
- Title: Energy-Harvesting Distributed Machine Learning
- Authors: Basak Guler, Aylin Yener
- Abstract summary: We consider a distributed learning setup in which a machine learning model is trained over a large number of devices that can harvest energy from the ambient environment.
Our framework is scalable, requires only local estimation of the energy statistics, and can be applied to a wide range of distributed training settings.
- Score: 38.5300206965018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper provides a first study of utilizing energy harvesting for
sustainable machine learning in distributed networks. We consider a distributed
learning setup in which a machine learning model is trained over a large number
of devices that can harvest energy from the ambient environment, and develop a
practical learning framework with theoretical convergence guarantees. We
demonstrate through numerical experiments that the proposed framework can
significantly outperform energy-agnostic benchmarks. Our framework is scalable,
requires only local estimation of the energy statistics, and can be applied to
a wide range of distributed training settings, including machine learning in
wireless networks, edge computing, and mobile internet of things.
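To make the setup concrete, below is a minimal simulation sketch of energy-aware federated averaging, in which a device joins a training round only when its harvested battery charge covers the cost of a local update. The energy model, constants, and participation rule are illustrative assumptions, not the paper's exact algorithm.
```python
import numpy as np

rng = np.random.default_rng(0)

N_DEVICES, DIM, ROUNDS = 20, 5, 50
STEP_COST, B_MAX = 1.0, 5.0   # assumed energy cost per local update / battery capacity

# Synthetic local linear-regression data for each device.
X = [rng.normal(size=(32, DIM)) for _ in range(N_DEVICES)]
w_true = rng.normal(size=DIM)
Y = [x @ w_true + 0.1 * rng.normal(size=32) for x in X]

w = np.zeros(DIM)              # global model
battery = np.zeros(N_DEVICES)  # stored energy per device

for t in range(ROUNDS):
    # Energy harvested this round (exponential ambient arrivals -- an assumption).
    battery = np.minimum(battery + rng.exponential(0.7, N_DEVICES), B_MAX)

    updates = []
    for i in range(N_DEVICES):
        if battery[i] < STEP_COST:   # energy-aware participation rule
            continue
        battery[i] -= STEP_COST
        grad = 2 * X[i].T @ (X[i] @ w - Y[i]) / len(Y[i])
        updates.append(w - 0.05 * grad)  # one local gradient step

    if updates:  # aggregate only the devices that could afford to train
        w = np.mean(updates, axis=0)

print("final error:", np.linalg.norm(w - w_true))
```
The paper ties participation and aggregation weights to locally estimated energy statistics; this toy loop only illustrates the intermittent-availability pattern.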
Related papers
- From Computation to Consumption: Exploring the Compute-Energy Link for Training and Testing Neural Networks for SED Systems [9.658615045493734]
We study several neural network architectures that are key components of sound event detection systems.
We measure the energy consumption for training and testing small to large architectures.
We establish complex relationships between the energy consumption, the number of floating-point operations, the number of parameters, and the GPU/memory utilization.
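One plausible way to take such measurements on NVIDIA hardware is to sample power draw through NVML while the workload runs; the sketch below (using the pynvml bindings) integrates sampled power over time. The paper does not prescribe this tooling, so treat it as an assumed setup.
```python
import threading
import time

import pynvml  # NVIDIA's NVML bindings: pip install nvidia-ml-py

def gpu_energy_joules(workload, interval_s=0.1):
    """Estimate energy consumed by workload() by sampling GPU power draw."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    samples, stop = [], threading.Event()

    def sampler():
        while not stop.is_set():
            # nvmlDeviceGetPowerUsage reports milliwatts.
            samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)
            time.sleep(interval_s)

    thread = threading.Thread(target=sampler)
    thread.start()
    start = time.time()
    workload()                   # e.g., one training epoch
    elapsed = time.time() - start
    stop.set()
    thread.join()
    pynvml.nvmlShutdown()
    mean_watts = sum(samples) / max(len(samples), 1)
    return mean_watts * elapsed  # joules = average watts x seconds
```
Calling `gpu_energy_joules(lambda: train_one_epoch())` (with your own training function) yields a rough figure that can then be correlated with FLOP counts, parameter counts, and GPU/memory utilization.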
arXiv Detail & Related papers (2024-09-08T12:51:34Z)
- Machine Learning for QoS Prediction in Vehicular Communication: Challenges and Solution Approaches [46.52224306624461]
We consider maximum throughput prediction to enhance, for example, streaming or high-definition mapping applications.
We highlight how confidence can be built on machine learning technologies by better understanding the underlying characteristics of the collected data.
We use explainable AI to show that machine learning can learn underlying principles of wireless networks without being explicitly programmed.
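The general recipe, a throughput regressor plus a model-agnostic explanation, can be sketched as follows with synthetic stand-in data; the feature names here are hypothetical, and the paper's models are trained on real vehicular measurements.
```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical radio features: signal strength, cell load, speed, distance to cell.
names = ["rsrp", "cell_load", "speed", "distance"]
X = rng.normal(size=(2000, 4))
# Synthetic "throughput" target: depends mostly on signal strength and load.
y = 5.0 * X[:, 0] - 3.0 * X[:, 1] + 0.2 * rng.normal(size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)

# Permutation importance: a model-agnostic view of what the model relies on,
# which is one way to surface the "underlying principles" it has learned.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(names, imp.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```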
arXiv Detail & Related papers (2023-02-23T12:29:20Z)
- Evaluating Distribution System Reliability with Hyperstructures Graph Convolutional Nets [74.51865676466056]
We show how graph convolutional networks and a hyperstructures representation learning framework can be employed for accurate, reliable, and computationally efficient distribution grid planning.
Our numerical experiments show that the proposed Hyper-GCNNs approach yields substantial gains in computational efficiency.
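The Hyper-GCNN architecture itself is specialized, but the graph-convolution step it builds on is standard. A minimal numpy sketch of one propagation layer, H' = ReLU(D^(-1/2) (A+I) D^(-1/2) H W):
```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: normalized adjacency x features x weights."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU

# Toy 4-node feeder graph, 3 features per node, 8 hidden units.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 8))
print(gcn_layer(A, H, W).shape)  # (4, 8)
```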
arXiv Detail & Related papers (2022-11-14T01:29:09Z)
- Automating In-Network Machine Learning [2.857025628729502]
Planter is an open-source framework for mapping trained machine learning models to programmable devices.
We show that Planter-based in-network machine learning algorithms can run at line rate, have a negligible effect on latency, coexist with standard switching functionality, and have no or minor accuracy trade-offs.
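As a toy illustration of the mapping idea (not Planter's actual code path), the sketch below extracts the leaf rules of a small scikit-learn decision tree as range-match/action entries of the kind a programmable data plane can store:
```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, _tree

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
tree = clf.tree_

def leaf_rules(node=0, bounds=None):
    """Walk the tree, yielding (per-feature range, predicted class) per leaf."""
    bounds = dict(bounds or {})
    if tree.feature[node] == _tree.TREE_UNDEFINED:  # leaf node
        yield bounds, int(np.argmax(tree.value[node]))
        return
    f, thr = tree.feature[node], tree.threshold[node]
    lo, hi = bounds.get(f, (-np.inf, np.inf))
    yield from leaf_rules(tree.children_left[node], {**bounds, f: (lo, min(hi, thr))})
    yield from leaf_rules(tree.children_right[node], {**bounds, f: (max(lo, thr), hi)})

# Each rule is one range-match table entry; the "action" is the class label.
for match, action in leaf_rules():
    print(match, "->", action)
```
A framework like Planter would translate such entries into match-action tables for a target such as P4-programmable switches; the translation details are target-specific.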
arXiv Detail & Related papers (2022-05-18T09:42:22Z)
- Flashlight: Enabling Innovation in Tools for Machine Learning [50.63188263773778]
We introduce Flashlight, an open-source library built to spur innovation in machine learning tools and systems.
We see Flashlight as a tool enabling research that can benefit widely used libraries downstream and bring machine learning and systems researchers closer together.
arXiv Detail & Related papers (2022-01-29T01:03:29Z)
- Distributed Learning in Wireless Networks: Recent Progress and Future Challenges [170.35951727508225]
Next-generation wireless networks will enable many machine learning (ML) tools and applications to analyze various types of data collected by edge devices.
Distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges.
This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks.
arXiv Detail & Related papers (2021-04-05T20:57:56Z)
- Sustainable Federated Learning [38.5300206965018]
We introduce sustainable machine learning in federated learning settings, using rechargeable devices that can collect energy from the ambient environment.
We propose a practical federated learning framework that leverages intermittent energy arrivals for training, with provable convergence guarantees.
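A common way to model intermittent energy arrivals in this kind of setting (an assumed formulation, not a formula quoted from the paper) is a battery recursion: the device stores harvested energy E_t up to capacity B_max and can train in round t only if its battery covers the round's cost c_t:
```latex
B_{t+1} = \min\bigl\{\, B_t - c_t \,\mathbf{1}[\text{device trains in round } t] + E_t,\; B_{\max} \bigr\},
\qquad \text{participation requires } c_t \le B_t .
```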
arXiv Detail & Related papers (2021-02-22T18:58:47Z)
- Plasticity-Enhanced Domain-Wall MTJ Neural Networks for Energy-Efficient Online Learning [9.481629586734497]
We demonstrate a multi-stage learning system realized by a promising non-volatile memory device, the domain-wall magnetic tunnel junction (DW-MTJ).
We demonstrate interactions between physical properties of this device and optimal implementation of neuroscience-inspired plasticity learning rules.
Our energy analysis confirms the value of the approach, as the learning budget stays below 20 $\mu$J even for large tasks typically used in machine learning.
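For readers unfamiliar with plasticity rules, the following is a generic sketch of a local, reward-modulated Hebbian update, where each synapse changes using only its own pre- and post-synaptic activity plus a scalar reward signal; this illustrates the family of rules, not the paper's DW-MTJ device model.
```python
import numpy as np

rng = np.random.default_rng(0)

# Weights between a 10-unit input layer and a 4-unit output layer.
W = rng.normal(scale=0.1, size=(4, 10))

def plasticity_step(W, pre, post, reward, lr=0.01):
    """Local update: each synapse uses only its own pre/post activity plus a
    scalar reward -- no globally backpropagated gradients."""
    return W + lr * reward * np.outer(post, pre)

pre = rng.random(10)       # presynaptic activity
post = np.tanh(W @ pre)    # postsynaptic activity
W = plasticity_step(W, pre, post, reward=+1.0)
```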
arXiv Detail & Related papers (2020-03-04T22:45:59Z)
- Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning [68.37641996188133]
We introduce a framework for tracking realtime energy consumption and carbon emissions.
We create a leaderboard for energy efficient reinforcement learning algorithms.
We propose strategies for mitigation of carbon emissions and reduction of energy consumption.
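The core accounting behind such a tracker is straightforward: multiply measured energy by the grid's carbon intensity. The sketch below uses an assumed world-average intensity of 0.4 kg CO2e/kWh; a real tracker would look up regional, time-varying values.
```python
def carbon_kg(energy_kwh, intensity_kg_per_kwh=0.4):
    """Convert energy use to CO2-equivalent emissions.
    0.4 kg/kWh is a rough world-average grid intensity -- an assumption;
    real trackers use the local grid's value."""
    return energy_kwh * intensity_kg_per_kwh

# Example: a 300 W GPU training for 24 hours.
energy_kwh = 0.300 * 24
print(f"{energy_kwh:.1f} kWh -> {carbon_kg(energy_kwh):.2f} kg CO2e")
```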
arXiv Detail & Related papers (2020-01-31T05:12:59Z)
- Resource-Efficient Neural Networks for Embedded Systems [23.532396005466627]
We provide an overview of the current state of the art of machine learning techniques.
We focus on resource-efficient inference based on deep neural networks (DNNs), the predominant machine learning models of the past decade.
We substantiate our discussion with experiments on well-known benchmark data sets using compression techniques.
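As one concrete instance of the compression techniques such surveys cover, here is a sketch of symmetric 8-bit post-training weight quantization, which cuts float32 weight storage roughly fourfold at a small accuracy cost:
```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: map float weights to int8 plus a scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).max()
print(f"4x smaller storage, max abs error {err:.4f}")
```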
arXiv Detail & Related papers (2020-01-07T14:17:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.