Resource-Constrained On-Device Learning by Dynamic Averaging
- URL: http://arxiv.org/abs/2009.12098v1
- Date: Fri, 25 Sep 2020 09:29:10 GMT
- Title: Resource-Constrained On-Device Learning by Dynamic Averaging
- Authors: Lukas Heppe and Michael Kamp and Linara Adilova and Danny Heinrich and
Nico Piatkowski and Katharina Morik
- Abstract summary: Communication between data-generating devices is partially responsible for a growing portion of the world's power consumption.
For machine learning, on-device learning avoids sending raw data, which can reduce communication substantially.
This paper investigates an approach to communication-efficient on-device learning of integer exponential families executed on low-power processors.
- Score: 7.720999661966942
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The communication between data-generating devices is partially responsible
for a growing portion of the world's power consumption. Reducing communication is
therefore vital from both an economic and an ecological perspective. For machine
learning, on-device learning avoids sending raw data, which can reduce communication
substantially. Furthermore, not centralizing the data protects privacy-sensitive
information. However, most learning algorithms require hardware with high computation
power and thus high energy consumption. In contrast, ultra-low-power processors, like
FPGAs or micro-controllers, allow for energy-efficient learning of local models.
Combined with communication-efficient distributed learning strategies, this reduces
the overall energy consumption and enables applications that were previously
impossible due to the limited energy available on local devices. The major challenge
is that such low-power processors typically only have integer processing capabilities.
This paper investigates an approach to communication-efficient on-device learning of
integer exponential families that can be executed on low-power processors, is
privacy-preserving, and effectively minimizes communication. The empirical evaluation
shows that the approach can reach a model quality comparable to a centrally learned
regular model with an order of magnitude less communication. Comparing overall energy
consumption, this reduces the energy required to solve the machine learning task by a
significant amount.
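The dynamic averaging idea behind the title, devices that synchronise through a coordinator only when their local models have drifted too far from a shared reference, can be illustrated with a short sketch. The following is a minimal, hypothetical rendition assuming integer parameter vectors, a squared-distance divergence criterion, and a fixed threshold; all names and values (local_step, DELTA, the averaging rule) are illustrative and not taken from the paper.

```python
import numpy as np

DELTA = 16          # divergence threshold (illustrative value)
DIM = 8             # number of integer model parameters per device


def local_step(theta, rng):
    """Placeholder for one local integer update; the actual paper learns
    integer exponential-family parameters on-device."""
    return theta + rng.integers(-1, 2, size=theta.shape)


def dynamic_averaging_round(thetas, reference):
    """One communication-efficient round: only devices whose model drifted
    more than DELTA from the shared reference upload their parameters."""
    drifted = [i for i, th in enumerate(thetas)
               if int(np.sum((th - reference) ** 2)) > DELTA]
    if not drifted:
        return thetas, reference, 0           # no communication this round
    # The coordinator averages the uploaded models; integer division keeps
    # the parameters integral, matching integer-only hardware.
    avg = np.sum([thetas[i] for i in drifted], axis=0) // len(drifted)
    for i in drifted:                         # broadcast back to the senders
        thetas[i] = avg.copy()
    return thetas, avg, 2 * len(drifted)      # messages sent (up + down)


rng = np.random.default_rng(0)
reference = np.zeros(DIM, dtype=np.int64)
thetas = [reference.copy() for _ in range(4)]
for _ in range(10):
    thetas = [local_step(th, rng) for th in thetas]
    thetas, reference, msgs = dynamic_averaging_round(thetas, reference)
```

Rounds in which no device crosses the threshold cost no communication at all, which is the intuition behind the order-of-magnitude savings reported above; the paper's actual protocol, divergence measure, and integer exponential family models are more involved than this toy loop.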
Related papers
- Hypergame Theory for Decentralized Resource Allocation in Multi-user Semantic Communications [60.63472821600567]
A novel framework for decentralized computing and communication resource allocation in multi-user semantic communication (SC) systems is proposed.
The challenge of efficiently allocating communication and computing resources is addressed through Stackelberg hypergame theory.
Simulation results show that the proposed Stackelberg hypergame leads to efficient use of communication and computing resources.
arXiv Detail & Related papers (2024-09-26T15:55:59Z)
- Coordination-free Decentralised Federated Learning on Complex Networks: Overcoming Heterogeneity [2.6849848612544]
Federated Learning (FL) is a framework for performing a learning task in an edge computing scenario.
We propose a communication-efficient Decentralised Federated Learning (DFL) algorithm able to cope with such heterogeneity.
Our solution allows devices that communicate only with their direct neighbours to train an accurate model (a neighbour-averaging sketch in this spirit follows the related-papers list).
arXiv Detail & Related papers (2023-12-07T18:24:19Z)
- Decentralized federated learning methods for reducing communication cost and energy consumption in UAV networks [8.21384946488751]
Unmanned aerial vehicles (UAVs) play many roles in a modern smart city, such as delivering goods, mapping real-time road traffic, and monitoring pollution.
Traditional machine learning models for drones encounter data privacy problems, communication costs and energy limitations.
We propose two aggregation methods, Commutative FL and Alternate FL, based on the existing Decentralised Federated Learning for UAV Networks (DFL-UN) architecture.
arXiv Detail & Related papers (2023-04-13T14:00:34Z)
- Great Power, Great Responsibility: Recommendations for Reducing Energy for Training Language Models [8.927248087602942]
We investigate techniques that can be used to reduce the energy consumption of common NLP applications.
These techniques can lead to a significant reduction in energy consumption when training language models or using them for inference.
arXiv Detail & Related papers (2022-05-19T16:03:55Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Energy-Efficient Multi-Orchestrator Mobile Edge Learning [54.28419430315478]
Mobile Edge Learning (MEL) is a collaborative learning paradigm that features distributed training of Machine Learning (ML) models over edge devices.
In MEL, multiple learning tasks with different datasets may coexist.
We propose lightweight algorithms that can achieve near-optimal performance and facilitate the trade-offs between energy consumption, accuracy, and solution complexity.
arXiv Detail & Related papers (2021-09-02T07:37:10Z)
- A Framework for Energy and Carbon Footprint Analysis of Distributed and Federated Edge Learning [48.63610479916003]
This article breaks down and analyzes the main factors that influence the environmental footprint of distributed learning policies.
It models both vanilla and decentralized FL policies driven by consensus.
Results show that FL allows remarkable end-to-end energy savings (30%-40%) for wireless systems characterized by low bit/Joule efficiency.
arXiv Detail & Related papers (2021-03-18T16:04:42Z)
- To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices [78.38046945665538]
Federated learning (FL) over massive mobile edge devices opens new horizons for numerous intelligent mobile applications.
FL imposes huge communication and computation burdens on participating devices due to periodical global synchronization and continuous local training.
We develop a convergence-guaranteed FL algorithm enabling flexible communication compression (a generic compression sketch follows the related-papers list).
arXiv Detail & Related papers (2020-12-22T02:54:18Z)
- Learning Centric Power Allocation for Edge Intelligence [84.16832516799289]
Edge intelligence, which collects distributed data and performs machine learning at the edge, has been proposed.
This paper proposes a learning centric power allocation (LCPA) method, which allocates radio resources based on an empirical classification error model.
Experimental results show that the proposed LCPA algorithm significantly outperforms other power allocation algorithms.
arXiv Detail & Related papers (2020-07-21T07:02:07Z)
- Resource-Efficient Neural Networks for Embedded Systems [23.532396005466627]
We provide an overview of the current state of the art of machine learning techniques.
We focus on resource-efficient inference based on deep neural networks (DNNs), the predominant machine learning models of the past decade.
We substantiate our discussion with experiments on well-known benchmark data sets using compression techniques (a small quantization sketch follows this list).
arXiv Detail & Related papers (2020-01-07T14:17:09Z)
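For the coordination-free DFL entry above, where devices exchange models only with their direct neighbours, a generic neighbour-averaging (gossip) round looks roughly as follows; the communication graph, uniform mixing weights, and function names are illustrative assumptions rather than that paper's actual algorithm.

```python
import numpy as np

# Illustrative undirected communication graph: node -> direct neighbours.
NEIGHBOURS = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}


def gossip_round(models):
    """Each device mixes its model with the models of its direct
    neighbours only; no global coordinator is involved."""
    mixed = {}
    for node, theta in models.items():
        group = [theta] + [models[n] for n in NEIGHBOURS[node]]
        mixed[node] = np.mean(group, axis=0)   # uniform mixing weights
    return mixed


models = {node: np.random.randn(5) for node in NEIGHBOURS}
for _ in range(20):
    # A real DFL algorithm would interleave local training steps here.
    models = gossip_round(models)
```

Repeated mixing of this kind drives the local models toward consensus without any central coordinator; the cited paper additionally addresses communication efficiency and heterogeneity, which this toy loop does not model.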
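The "To Talk or to Work" entry revolves around compressing what devices transmit during federated learning. As a stand-in for that paper's actual scheme, here is plain top-k sparsification of a model update, one common way to trade update fidelity for bytes on the wire; the ratio and helper names are assumptions.

```python
import numpy as np


def topk_compress(update, ratio=0.1):
    """Keep only the largest-magnitude entries of a model update;
    everything else is zeroed out and need not be transmitted."""
    k = max(1, int(ratio * update.size))
    idx = np.argsort(np.abs(update))[-k:]   # indices of the top-k entries
    values = update[idx]
    return idx, values                      # what actually gets sent


def topk_decompress(idx, values, size):
    """Rebuild a dense update from the transmitted indices and values."""
    dense = np.zeros(size)
    dense[idx] = values
    return dense


update = np.random.randn(1000)
idx, values = topk_compress(update, ratio=0.05)
restored = topk_decompress(idx, values, update.size)
```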
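Finally, the survey on resource-efficient neural networks for embedded systems discusses compression for on-device inference. A typical building block is post-training quantization of weights to 8-bit integers; the sketch below shows a generic affine scheme with per-tensor scale and zero-point, not a method specific to that survey.

```python
import numpy as np


def quantize_int8(weights):
    """Affine (asymmetric) post-training quantization of a weight tensor
    to int8, returning the quantized values plus the parameters needed
    to map them back to floating point."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0   # avoid division by zero
    zero_point = np.round(-w_min / scale) - 128
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127)
    return q.astype(np.int8), scale, zero_point


def dequantize(q, scale, zero_point):
    """Approximate reconstruction used at inference time."""
    return (q.astype(np.float32) - zero_point) * scale


weights = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_int8(weights)
approx = dequantize(q, scale, zp)
```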