Learn What You Need in Personalized Federated Learning
- URL: http://arxiv.org/abs/2401.08327v1
- Date: Tue, 16 Jan 2024 12:45:15 GMT
- Title: Learn What You Need in Personalized Federated Learning
- Authors: Kexin Lv, Rui Ye, Xiaolin Huang, Jie Yang and Siheng Chen
- Abstract summary: $\textit{Learn2pFed}$ is a novel algorithm-unrolling-based personalized federated learning framework.
We show that $\textit{Learn2pFed}$ significantly outperforms previous personalized federated learning methods.
- Score: 53.83081622573734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized federated learning aims to address data heterogeneity across
local clients in federated learning. However, current methods blindly
incorporate either full model parameters or predefined partial parameters in
personalized federated learning. They fail to customize the collaboration
manner to each local client's data characteristics, which leads to
undesirable aggregation results. To address this essential issue, we propose
$\textit{Learn2pFed}$, a novel algorithm-unrolling-based personalized federated
learning framework, enabling each client to adaptively select which part of its
local model parameters should participate in collaborative training. The key
novelty of the proposed $\textit{Learn2pFed}$ is to optimize each local model
parameter's degree of participation in collaboration as a learnable parameter via
algorithm unrolling methods. This approach brings two benefits: 1)
mathematically determining the participation degree of local model parameters in
the federated collaboration, and 2) obtaining more stable and improved
solutions. Extensive experiments on various tasks, including regression,
forecasting, and image classification, demonstrate that $\textit{Learn2pFed}$
significantly outperforms previous personalized federated learning methods.
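As a rough illustration of the core mechanism, the sketch below blends each local parameter block with its federated aggregate through a per-block participation degree. This is a simplification under assumed names, not the authors' code; in the paper the degrees are learned via algorithm unrolling rather than fixed as here.
```python
# Minimal sketch of per-parameter participation degrees, assuming a simple
# convex blend between local and aggregated parameters. Learn2pFed optimizes
# these degrees via algorithm unrolling; here they are just fixed logits.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def personalize(local_blocks, global_blocks, alpha_logits):
    """Blend each local parameter block with the global one according to
    its participation degree alpha = sigmoid(logit) in (0, 1)."""
    return [
        sigmoid(a) * g + (1.0 - sigmoid(a)) * w
        for w, g, a in zip(local_blocks, global_blocks, alpha_logits)
    ]

# Toy example: two parameter blocks for one client.
local_blocks = [np.array([1.0, 2.0]), np.array([3.0])]
global_blocks = [np.array([0.5, 1.5]), np.array([4.0])]
alpha_logits = [0.0, 2.0]  # block 1 collaborates weakly, block 2 strongly

for b in personalize(local_blocks, global_blocks, alpha_logits):
    print(b)
```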
Related papers
- Decoupling General and Personalized Knowledge in Federated Learning via Additive and Low-Rank Decomposition [26.218506124446826]
A key strategy of personalized federated learning is to decouple general knowledge (shared among clients) from client-specific knowledge.
We introduce FedDecomp, a simple but effective PFL paradigm that employs additive parameter decomposition to address this issue.
Experimental results across multiple datasets and varying degrees of data heterogeneity demonstrate that FedDecomp outperforms state-of-the-art methods by up to 4.9%.
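The toy sketch below illustrates the additive decomposition under assumed shapes and names (not the paper's implementation): each weight is the sum of a server-aggregated shared part and a personal low-rank product A @ B that stays on the client.
```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank, n_clients = 8, 8, 2, 2

# Shared full-rank parts: aggregated by the server across clients.
shared = [rng.normal(size=(d_out, d_in)) for _ in range(n_clients)]
# Personal low-rank parts (A @ B): never leave the client.
A = [rng.normal(size=(d_out, rank)) * 0.01 for _ in range(n_clients)]
B = [rng.normal(size=(rank, d_in)) * 0.01 for _ in range(n_clients)]

def effective_weight(k):
    """Weight actually used by client k in its forward pass."""
    return shared[k] + A[k] @ B[k]

# Server step: average only the shared components.
shared_avg = np.mean(shared, axis=0)
shared = [shared_avg.copy() for _ in range(n_clients)]

print(effective_weight(0).shape)  # (8, 8); the low-rank residual stays personal
```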
arXiv Detail & Related papers (2024-06-28T14:01:22Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on centralized training over raw data, which raises privacy concerns.
This paper proposes a novel federated face forgery detection framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Personalized Federated Learning via Sequential Layer Expansion in Representation Learning [0.0]
Federated learning ensures the privacy of clients by conducting distributed training on individual client devices and sharing only the model weights with a central server.
We propose a new representation-learning-based approach that decouples the entire deep learning model into more finely divided parts and applies suitable scheduling methods.
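A rough sketch of the general idea, under my own assumed schedule (the paper's actual scheduling method may differ): the model's layers are split into finer blocks, and the shared portion is expanded sequentially over rounds while the deepest block stays personalized.
```python
# Illustrative layer-block scheduling; block names and the linear expansion
# schedule are assumptions for demonstration, not the paper's algorithm.
layer_blocks = ["stem", "block1", "block2", "block3", "head"]

def shared_blocks(round_idx, total_rounds):
    """Gradually expand the shared portion from shallow to deep layers."""
    frac = (round_idx + 1) / total_rounds
    n_shared = max(1, int(frac * (len(layer_blocks) - 1)))  # head stays local
    return layer_blocks[:n_shared]

for r in range(0, 10, 3):
    print(r, shared_blocks(r, 10))
```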
arXiv Detail & Related papers (2024-04-27T06:37:19Z)
- Cross-Silo Federated Learning Across Divergent Domains with Iterative Parameter Alignment [4.95475852994362]
Federated learning is a method for training a machine learning model across remote clients.
We reformulate the typical federated learning setup to learn N models optimized for a common objective.
We find that the technique achieves competitive results on a variety of data partitions compared to state-of-the-art approaches.
arXiv Detail & Related papers (2023-11-08T16:42:14Z)
- Decentralized Personalized Online Federated Learning [13.76896613426515]
Vanilla federated learning does not support learning in an online environment, learning a personalized model on each client, or learning in a decentralized setting.
We propose a new learning setting, $\textit{Decentralized Personalized Online Federated Learning}$, that considers all three aspects at the same time.
We verify the effectiveness and robustness of our proposed method on three real-world item recommendation datasets and one air quality prediction dataset.
arXiv Detail & Related papers (2023-11-08T16:42:10Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose FedSampling, a novel data-uniform sampling strategy for federated learning.
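A minimal sketch of data-uniform sampling under simplifying assumptions (the privacy-preserving estimation of the total data size in FedSampling is omitted): every sample, rather than every client, is included with equal probability, so clients with more data contribute proportionally more.
```python
# Toy data-uniform sampling; client sizes and the budget are made-up numbers.
import numpy as np

rng = np.random.default_rng(0)
client_sizes = np.array([100, 1000, 50, 400])  # local sample counts
budget = 300                                   # samples wanted per round

p = min(1.0, budget / client_sizes.sum())      # per-sample inclusion rate
sampled = rng.binomial(client_sizes, p)        # samples drawn per client
print(sampled, sampled.sum())                  # roughly proportional to size
```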
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- Personalized Federated Learning with Feature Alignment and Classifier Collaboration [13.320381377599245]
Data heterogeneity is one of the most challenging issues in federated learning.
One such approach for deep neural network-based tasks employs a shared feature representation and learns a customized classifier head for each client.
In this work, we conduct explicit local-global feature alignment by leveraging global semantic knowledge for learning a better representation.
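The snippet below sketches one plausible form of such an alignment term, assuming server-broadcast class prototypes stand in for global semantic knowledge; the names and the squared-distance choice are illustrative, not the paper's exact loss.
```python
# Toy local-global feature alignment against assumed class prototypes.
import numpy as np

def alignment_loss(features, labels, global_prototypes):
    """Mean squared distance between each feature and its class prototype."""
    diffs = features - global_prototypes[labels]
    return float(np.mean(np.sum(diffs ** 2, axis=1)))

rng = np.random.default_rng(0)
features = rng.normal(size=(4, 3))           # local batch features
labels = np.array([0, 1, 0, 1])
global_prototypes = rng.normal(size=(2, 3))  # one prototype per class

print(alignment_loss(features, labels, global_prototypes))
```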
arXiv Detail & Related papers (2023-06-20T19:58:58Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
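As a hedged illustration of aggregation in a subspace, the sketch below restricts the global model to linear combinations of client models and fits the few mixing weights on a small server-side proxy set; the linear setup and names are assumptions, not SmartFL's actual procedure.
```python
# Toy subspace aggregation: learn mixing weights over client models
# instead of fixing them to 1/N, using a small server-side proxy dataset.
import numpy as np

rng = np.random.default_rng(0)
client_models = rng.normal(size=(3, 5))  # 3 client weight vectors, dim 5
X_proxy = rng.normal(size=(20, 5))       # small server-side dataset
y_proxy = X_proxy @ np.array([1.0, -1.0, 0.0, 2.0, 0.5])

# Predictions of each candidate model on the proxy data: columns of P.
P = X_proxy @ client_models.T            # shape (20, 3)

# Fit aggregation weights minimizing proxy loss (least squares here; in
# general this would be a few steps of gradient descent on the weights).
w, *_ = np.linalg.lstsq(P, y_proxy, rcond=None)
global_model = w @ client_models
print(w)  # learned aggregation weights, not necessarily uniform
```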
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
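The toy example below sketches the alternating scheme on a linear model, under assumed dimensions and step sizes (not the authors' implementation): many cheap head updates per round, followed by a single representation update.
```python
# Toy alternating updates: shared low-dimensional representation, local head.
import numpy as np

rng = np.random.default_rng(0)
rep = rng.normal(size=(5, 2))   # shared representation (input dim 5, rank 2)
head = rng.normal(size=2)       # low-dimensional personal head
X = rng.normal(size=(50, 5))
y = X @ rng.normal(size=5)      # synthetic local regression targets

def grad_head(rep, head):
    return 2 * (X @ rep).T @ (X @ rep @ head - y) / len(y)

def grad_rep(rep, head):
    return 2 * X.T @ (X @ rep @ head - y)[:, None] @ head[None, :] / len(y)

for _ in range(10):                        # many local head updates...
    head -= 0.05 * grad_head(rep, head)
rep -= 0.05 * grad_rep(rep, head)          # ...one representation update
print(np.mean((X @ rep @ head - y) ** 2))  # local loss after one round
```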
arXiv Detail & Related papers (2021-02-14T05:36:25Z)