Federated Learning: A Signal Processing Perspective
- URL: http://arxiv.org/abs/2103.17150v1
- Date: Wed, 31 Mar 2021 15:14:39 GMT
- Title: Federated Learning: A Signal Processing Perspective
- Authors: Tomer Gafni, Nir Shlezinger, Kobi Cohen, Yonina C. Eldar, and H. Vincent Poor
- Abstract summary: Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local datasets, without explicitly exchanging the data.
This article provides a unified systematic framework for federated learning in a manner that encapsulates and highlights the main challenges that are natural to treat using signal processing tools.
- Score: 144.63726413692876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The dramatic success of deep learning is largely due to the availability of
data. Data samples are often acquired on edge devices, such as smart phones,
vehicles and sensors, and in some cases cannot be shared due to privacy
considerations. Federated learning is an emerging machine learning paradigm for
training models across multiple edge devices holding local datasets, without
explicitly exchanging the data. Learning in a federated manner differs from
conventional centralized machine learning, and poses several core unique
challenges and requirements, which are closely related to classical problems
studied in the areas of signal processing and communications. Consequently,
dedicated schemes derived from these areas are expected to play an important
role in the success of federated learning and the transition of deep learning
from the domain of centralized servers to mobile edge devices. In this article,
we provide a unified systematic framework for federated learning in a manner
that encapsulates and highlights the main challenges that are natural to treat
using signal processing tools. We present a formulation for the federated
learning paradigm from a signal processing perspective, and survey a set of
candidate approaches for tackling its unique challenges. We further provide
guidelines for the design and adaptation of signal processing and communication
methods to facilitate federated learning at large scale.
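As a minimal illustration of the federated learning paradigm described in the abstract — clients train on local data and a server aggregates their models without ever seeing the raw samples — the following sketch implements weighted model averaging (in the style of federated averaging). The linear model, learning rate, and synthetic client data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=1):
    """One client's local gradient steps on a linear model (squared loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def fedavg_round(global_w, client_datasets):
    """Server aggregates client models, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in client_datasets:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Example: three clients with different amounts of data from y = 2x
rng = np.random.default_rng(0)
clients = []
for n in (20, 50, 30):
    X = rng.normal(size=(n, 1))
    clients.append((X, X @ np.array([2.0])))

w = np.zeros(1)
for _ in range(50):
    w = fedavg_round(w, clients)
```

Only model parameters cross the network in each round; the per-client datasets `X, y` never leave the device, which is the defining privacy property of the paradigm.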
Related papers
- Deep Internal Learning: Deep Learning from a Single Input [88.59966585422914]
In many cases there is value in training a network just from the input at hand.
This is particularly relevant in many signal and image processing problems where training data is scarce and diversity is large.
This survey paper covers deep internal-learning techniques that have been proposed in recent years for such settings.
arXiv Detail & Related papers (2023-12-12T16:48:53Z)
- Coordination-free Decentralised Federated Learning on Complex Networks: Overcoming Heterogeneity [2.6849848612544]
Federated Learning (FL) is a framework for performing a learning task in an edge computing scenario.
We propose a communication-efficient Decentralised Federated Learning (DFL) algorithm able to cope with such heterogeneity.
Our solution allows devices to train an accurate model while communicating only with their direct neighbours.
arXiv Detail & Related papers (2023-12-07T18:24:19Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Distributed Learning in Wireless Networks: Recent Progress and Future Challenges [170.35951727508225]
Next-generation wireless networks will enable many machine learning (ML) tools and applications to analyze various types of data collected by edge devices.
Distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges.
This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks.
arXiv Detail & Related papers (2021-04-05T20:57:56Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Sense and Learn: Self-Supervision for Omnipresent Sensors [9.442811508809994]
We present a framework named Sense and Learn for representation or feature learning from raw sensory data.
It consists of several auxiliary tasks that can learn high-level and broadly useful features entirely from unannotated data without any human involvement in the tedious labeling process.
Our methodology achieves results competitive with supervised approaches and, in most cases, closes the gap by fine-tuning the network on the downstream tasks.
arXiv Detail & Related papers (2020-09-28T11:57:43Z)
- Federated Self-Supervised Learning of Multi-Sensor Representations for Embedded Intelligence [8.110949636804772]
Smartphones, wearables, and Internet of Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models.
We propose a self-supervised approach, termed scalogram-signal correspondence learning, based on the wavelet transform to learn useful representations from unlabeled sensor inputs.
We extensively assess the quality of learned features with our multi-view strategy on diverse public datasets, achieving strong performance in all domains.
arXiv Detail & Related papers (2020-07-25T21:59:17Z)
- IBM Federated Learning: an Enterprise Framework White Paper V0.1 [28.21579297214125]
Federated Learning (FL) is an approach to conduct machine learning without centralizing training data in a single place.
The framework applies to both deep neural networks and "traditional" approaches for the most common machine learning libraries.
arXiv Detail & Related papers (2020-07-22T05:32:00Z)
- Federated and continual learning for classification tasks in a society of devices [59.45414406974091]
Light Federated and Continual Consensus (LFedCon2) is a new federated and continual architecture that uses light, traditional learners.
Our method allows powerless devices (such as smartphones or robots) to learn in real time, locally, continuously, autonomously and from users.
In order to test our proposal, we have applied it in a heterogeneous community of smartphone users to solve the problem of walking recognition.
arXiv Detail & Related papers (2020-06-12T12:37:03Z)
- Evaluating the Communication Efficiency in Federated Learning Algorithms [3.713348568329249]
Recently, in light of new privacy legislation in many countries, the concept of Federated Learning (FL) has been introduced.
In FL, mobile users are empowered to learn a global model by aggregating their local models, without sharing the privacy-sensitive data.
This raises the challenge of communication cost when implementing FL at large scale.
arXiv Detail & Related papers (2020-04-06T15:31:54Z)
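The communication cost highlighted in this entry — every round ships a full model update from each device — is often reduced by compressing updates before transmission. The following sketch shows one common technique, top-k sparsification; the function name and the example values are illustrative assumptions, not from the paper.

```python
import numpy as np

def sparsify_update(update, k):
    """Keep only the k largest-magnitude entries of a model update,
    zeroing the rest, to cut uplink communication in federated learning."""
    idx = np.argsort(np.abs(update))[-k:]  # indices of the k largest |entries|
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]
    return sparse

# A toy 5-parameter update: only the two dominant entries survive
update = np.array([0.01, -0.5, 0.03, 0.9, -0.02])
compressed = sparsify_update(update, k=2)
```

In practice only the surviving (index, value) pairs are transmitted, so the uplink cost scales with k rather than the model size; the discarded residual is often accumulated locally and added to the next round's update.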
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.