Federated Deep Equilibrium Learning: A Compact Shared Representation for
Edge Communication Efficiency
- URL: http://arxiv.org/abs/2309.15659v1
- Date: Wed, 27 Sep 2023 13:48:12 GMT
- Title: Federated Deep Equilibrium Learning: A Compact Shared Representation for
Edge Communication Efficiency
- Authors: Long Tan Le, Tuan Dung Nguyen, Tung-Anh Nguyen, Choong Seon Hong,
Nguyen H. Tran
- Abstract summary: Federated Learning (FL) is a distributed learning paradigm facilitating collaboration among nodes within an edge network.
We introduce FeDEQ, a pioneering FL framework that effectively employs deep equilibrium learning and consensus optimization.
We present a novel distributed algorithm rooted in the alternating direction method of multipliers (ADMM) consensus optimization.
- Score: 12.440580969360218
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a prominent distributed learning paradigm
facilitating collaboration among nodes within an edge network to co-train a
global model without centralizing data. By shifting computation to the network
edge, FL offers robust and responsive edge-AI solutions and enhances
privacy preservation. However, deploying deep FL models within edge
environments is often hindered by communication bottlenecks, data
heterogeneity, and memory limitations. To address these challenges jointly, we
introduce FeDEQ, a pioneering FL framework that effectively employs deep
equilibrium learning and consensus optimization to exploit a compact shared
data representation across edge nodes, allowing the derivation of personalized
models specific to each node. We delve into a unique model structure composed
of an equilibrium layer followed by traditional neural network layers. Here,
the equilibrium layer functions as a global feature representation that edge
nodes can adapt to personalize their local layers. Capitalizing on FeDEQ's
compactness and representation power, we present a novel distributed algorithm
rooted in the alternating direction method of multipliers (ADMM) consensus
optimization and theoretically establish its convergence for smooth objectives.
Experiments across various benchmarks demonstrate that FeDEQ achieves
performance comparable to state-of-the-art personalized methods while employing
models that are up to 4 times smaller in communication size and have a 1.5 times
lower memory footprint during training.
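Illustrative sketch (not from the paper): the abstract describes a model whose first block is an equilibrium layer acting as a globally shared representation, followed by conventional layers that each edge node keeps and personalizes locally. The minimal PyTorch sketch below assumes that structure; the class names, dimensions, and the naive unrolled fixed-point solver are illustrative choices, not the authors' implementation (deep equilibrium models typically use a root-finding solver with implicit differentiation).

```python
import torch
import torch.nn as nn


class EquilibriumLayer(nn.Module):
    """Computes a fixed point z* = tanh(W z* + U x + b) by naive iteration."""

    def __init__(self, dim, max_iter=30, tol=1e-4):
        super().__init__()
        self.linear_z = nn.Linear(dim, dim, bias=False)
        self.linear_x = nn.Linear(dim, dim)
        self.max_iter = max_iter
        self.tol = tol

    def forward(self, x):
        z = torch.zeros_like(x)
        for _ in range(self.max_iter):
            z_next = torch.tanh(self.linear_z(z) + self.linear_x(x))
            if (z_next - z).norm() < self.tol:
                return z_next
            z = z_next
        return z


class ClientModel(nn.Module):
    """Shared equilibrium representation followed by a locally personalized head."""

    def __init__(self, in_dim, rep_dim, num_classes):
        super().__init__()
        self.encoder = nn.Linear(in_dim, rep_dim)          # projects raw input
        self.shared_rep = EquilibriumLayer(rep_dim)         # synchronized across nodes
        self.local_head = nn.Linear(rep_dim, num_classes)   # stays private per node

    def forward(self, x):
        return self.local_head(self.shared_rep(self.encoder(x)))
```

The abstract further states that the shared parameters are trained with an ADMM consensus algorithm. The snippet below is only a generic scaled-form ADMM consensus step (averaging the clients' copies plus dual variables, then updating the duals), included to make the idea concrete; the client-side proximal minimization that uses the penalty parameter is omitted, and this is not the paper's algorithm.

```python
def admm_consensus_step(local_params, duals):
    """Generic consensus z-/u-updates over clients' copies of the shared parameters."""
    # z-update: the consensus variable is the average of (w_k + u_k)
    z = torch.stack([w + u for w, u in zip(local_params, duals)]).mean(dim=0)
    # u-update: u_k <- u_k + w_k - z
    new_duals = [u + w - z for w, u in zip(local_params, duals)]
    return z, new_duals
```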
Related papers
- Adversarial Federated Consensus Learning for Surface Defect Classification Under Data Heterogeneity in IIoT [8.48069043458347]
It is difficult to collect and centralize sufficient training data from various entities in the Industrial Internet of Things (IIoT).
Federated learning (FL) provides a solution by enabling collaborative global model training across clients.
We propose a novel personalized FL approach, named Adversarial Federated Consensus Learning (AFedCL).
arXiv Detail & Related papers (2024-09-24T03:59:32Z) - FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization [11.040916982022978]
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework using bi-level optimization to tackle the data heterogeneity challenges.
arXiv Detail & Related papers (2024-05-29T11:28:06Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - Federated Learning for Semantic Parsing: Task Formulation, Evaluation
Setup, New Algorithms [29.636944156801327]
Multiple clients collaboratively train one global model without sharing their semantic parsing data.
Lorar adjusts each client's contribution to the global model update based on its training loss reduction during each round.
Clients with smaller datasets enjoy larger performance gains.
arXiv Detail & Related papers (2023-05-26T19:25:49Z) - Towards More Suitable Personalization in Federated Learning via
Decentralized Partial Model Training [67.67045085186797]
Almost all existing systems have to face large communication burdens if the central FL server fails.
It personalizes the "right" components in the deep models by alternately updating the shared and personal parameters.
To further promote the shared parameter aggregation process, we propose DFed, integrating local Sharpness Minimization.
arXiv Detail & Related papers (2023-05-24T13:52:18Z) - Tensor Decomposition based Personalized Federated Learning [12.420951968273574]
Federated learning (FL) is a new distributed machine learning framework that can achieve reliably collaborative training without collecting users' private data.
Due to FL's frequent communication and averaging-based aggregation strategy, existing frameworks face challenges in scaling to statistically diverse data and large-scale models.
We propose a personalized FL framework, named Tensor Decomposition based Personalized Federated learning (TDPFed), in which we design a novel tensorized local model with tensorized linear layers and convolutional layers to reduce the communication cost.
arXiv Detail & Related papers (2022-08-27T08:09:14Z) - FedDM: Iterative Distribution Matching for Communication-Efficient
Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z) - Efficient Split-Mix Federated Learning for On-Demand and In-Situ
Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z) - Toward Understanding the Influence of Individual Clients in Federated
Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)