Federated Gradient Matching Pursuit
- URL: http://arxiv.org/abs/2302.10755v1
- Date: Mon, 20 Feb 2023 16:26:29 GMT
- Title: Federated Gradient Matching Pursuit
- Authors: Halyun Jeong, Deanna Needell, Jing Qin
- Abstract summary: Traditional machine learning techniques require centralizing all training data on one server or data hub.
In particular, federated learning (FL) provides such a solution to learn a shared model while keeping training data at local clients.
We propose a novel algorithmic framework, federated gradient matching pursuit (FedGradMP), to solve the sparsity constrained minimization problem in the FL setting.
- Score: 17.695717854068715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional machine learning techniques require centralizing all training
data on one server or data hub. With the development of communication
technologies and the huge amounts of decentralized data held by many clients,
collaborative machine learning under privacy-preserving frameworks has become a
central focus. In particular, federated learning (FL) provides
such a solution to learn a shared model while keeping training data at local
clients. On the other hand, in a wide range of machine learning and signal
processing applications, the desired solution naturally has a certain structure
that can be framed as sparsity with respect to a certain dictionary. This
problem can be formulated as an optimization problem with sparsity constraints
and solving it efficiently has been one of the primary research topics in the
traditional centralized setting. In this paper, we propose a novel algorithmic
framework, federated gradient matching pursuit (FedGradMP), to solve the
sparsity constrained minimization problem in the FL setting. We also generalize
our algorithms to accommodate various practical FL scenarios when only a subset
of clients participate per round, when the local model estimation at clients
could be inexact, or when the model parameters are sparse with respect to
general dictionaries. Our theoretical analysis shows the linear convergence of
the proposed algorithms. A variety of numerical experiments are conducted to
demonstrate the great potential of the proposed framework -- fast convergence
both in communication rounds and computation time for many important scenarios
without sophisticated parameter tuning.
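To make the high-level description concrete, below is a minimal sketch of one FedGradMP-style communication round. It assumes least-squares local losses, sparsity with respect to the canonical basis, and hard thresholding as the sparse projection; the helper names (`local_gradmp_step`, `fedgradmp_round`) and the restricted least-squares solve are illustrative, not the paper's exact procedure or guarantees.

```python
# Illustrative sketch only: a FedGradMP-style round under least-squares local losses.
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

def local_gradmp_step(A, y, w, s):
    """One gradient-matching-pursuit step on a client's loss f(w) = 0.5*||A w - y||^2.

    Template: take the 2s largest-magnitude gradient coordinates, merge them with
    the current support, solve the restricted subproblem, and prune back to s entries.
    """
    grad = A.T @ (A @ w - y)                      # local gradient
    cand = np.argsort(np.abs(grad))[-2 * s:]      # candidate support from the gradient
    merged = np.union1d(cand, np.flatnonzero(w))  # merge with the current support
    b = np.zeros_like(w)
    # Restricted least-squares subproblem; in practice this local solve may be inexact.
    b[merged], *_ = np.linalg.lstsq(A[:, merged], y, rcond=None)
    return hard_threshold(b, s)

def fedgradmp_round(clients, w_global, s, local_iters=3):
    """Broadcast w_global, let each client run a few local steps, then average the
    returned sparse iterates at the server and prune the average to s entries."""
    local_models = []
    for A, y in clients:                          # each client holds its own (A, y)
        w = w_global.copy()
        for _ in range(local_iters):
            w = local_gradmp_step(A, y, w, s)
        local_models.append(w)
    return hard_threshold(np.mean(local_models, axis=0), s)
```

A round could then be invoked as, for example, `w = fedgradmp_round([(A1, y1), (A2, y2)], np.zeros(d), s=10)`, with the server repeating rounds until the sparse iterate stabilizes.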
Related papers
- A Framework for testing Federated Learning algorithms using an edge-like environment [0.0]
Federated Learning (FL) is a machine learning paradigm in which many clients cooperatively train a single centralized model while keeping their data private and decentralized.
Accurately evaluating the contributions of local models during global model aggregation is non-trivial; this is one example of a major FL challenge, commonly known as data or class imbalance.
In this work, a framework is proposed and implemented to assess FL algorithms in an easier and more scalable way.
arXiv Detail & Related papers (2024-07-17T19:52:53Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning [4.492444446637857]
Federated learning is an increasingly popular machine learning paradigm in which multiple nodes collaborate to learn a model.
Standard average-risk minimization from supervised learning is inadequate for handling several major constraints specific to federated learning.
We introduce a new framework, FLIX, that takes into account the unique challenges brought by federated learning.
arXiv Detail & Related papers (2021-11-22T22:06:58Z)
- Decentralized Personalized Federated Learning for Min-Max Problems [79.61785798152529]
This paper is the first to study personalized federated learning (PFL) for saddle point problems, which encompass a broader range of optimization problems.
We propose new algorithms to address this problem and provide a theoretical analysis for smooth (strongly) convex-(strongly) concave saddle point problems.
Numerical experiments for bilinear problems and neural networks with adversarial noise demonstrate the effectiveness of the proposed methods.
arXiv Detail & Related papers (2021-06-14T10:36:25Z)
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
- Sample-based and Feature-based Federated Learning via Mini-batch SSCA [18.11773963976481]
This paper investigates sample-based and feature-based federated optimization.
We show that the proposed algorithms can preserve data privacy through the model aggregation mechanism.
We also show that the proposed algorithms converge to Karush-Kuhn-Tucker points of the respective federated optimization problems.
arXiv Detail & Related papers (2021-04-13T08:23:46Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model, sketched right after this list.
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
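For reference, the "computation then aggregation" (CTA) pattern mentioned in the FedPD entry above can be sketched as follows. This is a generic FedAvg-style round under an illustrative gradient oracle `grad_fn`; it is not FedPD's own algorithm, which is described in its paper.

```python
# Illustrative sketch only: a FedAvg-style "computation then aggregation" (CTA) round.
import numpy as np

def fedavg_round(clients, w_global, grad_fn, lr=0.1, local_steps=5):
    """One CTA round: each client runs local SGD steps from the broadcast model
    (computation), then the server takes a data-size-weighted average of the
    returned models (aggregation)."""
    local_models, sizes = [], []
    for X, y in clients:                      # each client holds its own (X, y)
        w = w_global.copy()
        for _ in range(local_steps):
            w = w - lr * grad_fn(w, X, y)     # local computation on the client
        local_models.append(w)
        sizes.append(len(y))
    weights = np.array(sizes, dtype=float)
    weights /= weights.sum()
    # Server-side aggregation: weighted average of the client models.
    return sum(a * m for a, m in zip(weights, local_models))
```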