Personalized Federated Learning via Maximizing Correlation with Sparse
and Hierarchical Extensions
- URL: http://arxiv.org/abs/2107.05330v1
- Date: Mon, 12 Jul 2021 11:43:40 GMT
- Title: Personalized Federated Learning via Maximizing Correlation with Sparse
and Hierarchical Extensions
- Authors: Yinchuan Li, Xiaofeng Liu, Xu Zhang, Yunfeng Shao, Qing Wang and Yanhui Geng
- Abstract summary: Federated Learning (FL) is a collaborative machine learning technique to train a global model without obtaining clients' private data.
We propose pFedMac, a novel personalized federated learning method via maximizing correlation.
We show that pFedMac performs better than L2-norm-distance-based personalization methods.
- Score: 14.862798952297105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a collaborative machine learning technique to
train a global model without obtaining clients' private data. The main
challenges in FL are statistical diversity among clients, the limited computing
capability of client equipment, and the excessive communication overhead and
long latency between server and clients. To address these problems, we propose
a novel personalized federated learning method via maximizing correlation
(pFedMac), and further extend it to sparse and hierarchical models. By
minimizing loss functions that incorporate an approximated L1-norm and
hierarchical correlation, performance on statistically diverse data is
improved and the communication and computation loads required in the
network are reduced. Theoretical proofs show that pFedMac performs better than
L2-norm-distance-based personalization methods. Experimentally, we
demonstrate the benefits of this sparse hierarchical personalization
architecture compared with state-of-the-art personalization methods and
their extensions (e.g., pFedMac achieves 99.75% accuracy on MNIST and 87.27%
accuracy on Synthetic under heterogeneous and non-i.i.d. data distributions).
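To make the objective concrete, here is a minimal numpy sketch of a correlation-maximizing personalization step: each client descends its local loss while maximizing the inner product with the global model and applying a smoothed L1 penalty for sparsity. The quadratic toy loss, step sizes, and the sqrt-based L1 approximation are illustrative assumptions (and the hierarchical extension is omitted); this is not the paper's exact formulation.

```python
import numpy as np

def smoothed_l1_grad(w, eps=1e-3):
    """Gradient of the smooth L1 surrogate sum(sqrt(w^2 + eps))."""
    return w / np.sqrt(w ** 2 + eps)

def personalize(local_grad_fn, w_global, lam=1.0, mu=0.1, lr=0.01, steps=200):
    """Descend  f_i(theta) - lam * <theta, w_global> + mu * ||theta||_1(smooth).

    The -lam * <theta, w_global> term maximizes correlation with the global
    model (in place of an L2 proximal term); the smooth L1 term promotes a
    sparse personalized model.
    """
    theta = w_global.copy()
    for _ in range(steps):
        grad = local_grad_fn(theta) - lam * w_global + mu * smoothed_l1_grad(theta)
        theta -= lr * grad
    return theta

# Toy client: least-squares loss on synthetic local data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 10)), rng.normal(size=50)
local_grad = lambda w: X.T @ (X @ w - y) / len(y)
theta_i = personalize(local_grad, w_global=rng.normal(size=10))
```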
Related papers
- FedLog: Personalized Federated Classification with Less Communication and More Flexibility [24.030147353437382]
Federated representation learning (FRL) aims to learn personalized federated models with effective feature extraction from local data.
To reduce the overhead, we propose to share sufficient data summaries instead of raw model parameters.
arXiv Detail & Related papers (2024-07-11T09:40:29Z)
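One plausible reading of FedLog's "sufficient data summaries" is sharing per-class sufficient statistics instead of model weights. The sketch below is a hypothetical illustration: the statistic choice and the nearest-mean head are assumptions, not FedLog's actual formulation.

```python
import numpy as np

def client_summary(features, labels, num_classes):
    """Per-class sufficient statistics (counts and feature sums) -- far
    smaller than the raw parameters of a large backbone."""
    d = features.shape[1]
    counts = np.zeros(num_classes)
    sums = np.zeros((num_classes, d))
    for c in range(num_classes):
        mask = labels == c
        counts[c] = mask.sum()
        sums[c] = features[mask].sum(axis=0)
    return counts, sums

def server_head(summaries):
    """Pool statistics from all clients into per-class prototypes for a
    nearest-mean classification head."""
    counts = sum(c for c, _ in summaries)
    sums = sum(s for _, s in summaries)
    return sums / np.maximum(counts, 1)[:, None]  # class prototypes
```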
- Personalized federated learning based on feature fusion [2.943623084019036]
Federated learning enables distributed clients to collaborate on training while storing their data locally to protect client privacy.
We propose a personalized federated learning approach called pFedPM.
In our process, we replace traditional gradient uploading with feature uploading, which helps reduce communication costs and allows for heterogeneous client models.
arXiv Detail & Related papers (2024-06-24T12:16:51Z)
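A minimal sketch of feature uploading as described for pFedPM: each client sends compact per-class mean features rather than gradients, and the server fuses them. The prototype representation and the simple-average fusion are assumptions for illustration, not necessarily pFedPM's exact feature-fusion rule.

```python
import numpy as np

def upload_features(backbone, data, labels, num_classes):
    """Client side: send per-class mean features instead of gradients.
    `backbone` may differ across clients as long as feature dims agree.
    Assumes every class appears in the local data."""
    feats = backbone(data)  # (n, d) local embeddings
    protos = np.stack([feats[labels == c].mean(axis=0)
                       for c in range(num_classes)])
    return protos           # (num_classes, d) -- a tiny payload

def server_fuse(all_protos):
    """Server side: fuse client feature prototypes (simple average here)."""
    return np.mean(np.stack(all_protos), axis=0)
```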
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
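For contrast, here is a minimal FedAvg-style sketch of the aggregate-then-adapt framework the abstract describes: clients adapt the broadcast global model locally, and the server averages the results. FedAF itself dispenses with this aggregation step; the sketch only illustrates the baseline it departs from.

```python
import numpy as np

def fedavg_round(global_w, clients, local_steps=5, lr=0.1):
    """One aggregate-then-adapt round. `clients` is a list of
    (local gradient oracle, sample count) pairs."""
    updates, weights = [], []
    for grad_fn, n in clients:
        w = global_w.copy()
        for _ in range(local_steps):
            w -= lr * grad_fn(w)  # local adaptation of the global model
        updates.append(w)
        weights.append(n)
    weights = np.asarray(weights, float) / sum(weights)
    # Server aggregation: sample-weighted average of client models.
    return sum(wi * u for wi, u in zip(weights, updates))
```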
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
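A rough PyTorch sketch of the described split: a shared, prunable representation part plus a device-specific head that stays local. The layer sizes and the 50% L1 unstructured pruning ratio are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class SplitModel(nn.Module):
    """Global representation part (shared, prunable) + personal head."""
    def __init__(self, d_in=32, d_hid=64, d_out=10):
        super().__init__()
        self.global_part = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.personal_part = nn.Linear(d_hid, d_out)  # fine-tuned per device

    def forward(self, x):
        return self.personal_part(self.global_part(x))

model = SplitModel()
# Prune half of the shared representation weights to cut the computation
# and communication load; only the global part is exchanged with the server.
prune.l1_unstructured(model.global_part[0], name="weight", amount=0.5)
shared_state = {k: v for k, v in model.state_dict().items()
                if k.startswith("global_part")}
```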
- Efficient Personalized Federated Learning via Sparse Model-Adaptation [47.088124462925684]
Federated Learning (FL) aims to train machine learning models for multiple clients without sharing their own private data.
We propose pFedGate for efficient personalized FL by adaptively and efficiently learning sparse local models.
We show that pFedGate achieves superior global accuracy, individual accuracy and efficiency simultaneously over state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T12:21:34Z)
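One way to picture sparse model-adaptation is a learned client-specific gate that hard-masks channels while remaining trainable via a straight-through estimator. This is a loose sketch of the idea only; pFedGate's actual gating network and sparsity control are more elaborate.

```python
import torch
import torch.nn as nn

class GatedLinear(nn.Module):
    """Linear layer whose output channels are masked by a learned,
    client-specific gate, yielding a sparse adapted sub-model."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        # Initialized slightly above the threshold so channels start active.
        self.gate_logits = nn.Parameter(torch.full((d_out,), 0.1))

    def forward(self, x, threshold=0.5):
        gate = torch.sigmoid(self.gate_logits)
        mask = (gate > threshold).float()
        # Straight-through estimator: hard mask forward, soft gradient back.
        mask = mask + gate - gate.detach()
        return self.base(x) * mask

layer = GatedLinear(d_in=16, d_out=32)
sparse_out = layer(torch.randn(4, 16))  # masked channels are exactly zero
```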
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
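A minimal scikit-learn sketch of the GMM ingredient: fitting a mixture to a client's inputs yields soft assignments for mixture-wise personalization and likelihoods for flagging novel samples. The component count is arbitrary, and FedGMM fits the mixture within federated training rather than per client in isolation, so treat this only as intuition.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
client_data = rng.normal(size=(200, 8))  # stand-in for local inputs

# Fit a GMM to the client's input distribution; responsibilities can route
# samples to mixture-specific personalized components, while low likelihood
# under every component flags novel samples (uncertainty quantification).
gmm = GaussianMixture(n_components=3, random_state=0).fit(client_data)
responsibilities = gmm.predict_proba(client_data)  # soft cluster assignment
novelty_score = -gmm.score_samples(client_data)    # high = unfamiliar input
```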
- User-Centric Federated Learning: Trading off Wireless Resources for Personalization [18.38078866145659]
In Federated Learning (FL) systems, statistical heterogeneity increases the algorithm convergence time and reduces the generalization performance.
To tackle the above problems without violating the privacy constraints that FL imposes, personalized FL methods have to couple statistically similar clients without directly accessing their data.
In this work, we design user-centric aggregation rules that are based on readily available gradient information and are capable of producing personalized models for each FL client.
Our algorithm outperforms popular personalized FL baselines in terms of average accuracy, worst node performance, and training communication overhead.
arXiv Detail & Related papers (2023-04-25T15:45:37Z)
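A sketch of one gradient-based, user-centric aggregation rule: weight each peer's update by its cosine similarity to the target client's gradient, giving every client its own personalized mixture. The cosine weighting is an assumption for illustration; the paper's actual rule may differ.

```python
import numpy as np

def user_centric_aggregate(grads):
    """Personalized update per client from 'readily available gradient
    information': row i of the result is a mixture of all client gradients
    weighted by cosine similarity to client i's gradient."""
    G = np.stack(grads)                       # (num_clients, num_params)
    unit = G / np.linalg.norm(G, axis=1, keepdims=True)
    sim = unit @ unit.T                       # pairwise cosine similarity
    weights = np.clip(sim, 0.0, None)         # ignore dissimilar clients
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ G                        # one tailored update per client
```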
- FedCLIP: Fast Generalization and Personalization for CLIP in Federated Learning [18.763298147996238]
Federated learning (FL) has emerged as a new paradigm for privacy-preserving computation in recent years.
FL faces two critical challenges that hinder its actual performance: heterogeneous data distributions and high resource costs.
We propose FedCLIP to achieve fast generalization and personalization for CLIP in FL.
arXiv Detail & Related papers (2023-02-27T02:49:06Z)
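A generic adapter sketch in PyTorch: keep the CLIP backbone frozen and train/communicate only a small residual bottleneck on top of its features, which keeps both resource costs and communication low. The bottleneck shape is hypothetical, and FedCLIP's concrete adapter design may differ.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small trainable bottleneck appended to a frozen CLIP image encoder;
    only these few parameters travel between server and clients."""
    def __init__(self, d=512, r=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, r), nn.ReLU(), nn.Linear(r, d))

    def forward(self, clip_features):
        return clip_features + self.net(clip_features)  # residual adaptation

adapter = Adapter()
adapted = adapter(torch.randn(4, 512))  # stand-in for CLIP image features
# In FL, the CLIP backbone would be frozen on every client, e.g.:
# for p in clip_model.parameters(): p.requires_grad_(False)
```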
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
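A sketch of local loss-landscape matching via synthetic data: optimize a small synthetic set so its parameter gradients match those of the real data, then upload the synthetic set instead of model updates. Gradient matching is a common surrogate for this kind of distribution matching; FedDM's precise objective may differ. Assumes 2-D inputs and at least `n_syn` labeled samples.

```python
import torch

def distill_client_data(real_x, real_y, model, loss_fn,
                        n_syn=10, steps=100, lr=0.1):
    """Learn a synthetic set whose loss landscape locally matches the
    real data by matching gradients w.r.t. the model parameters."""
    syn_x = torch.randn(n_syn, real_x.shape[1], requires_grad=True)
    syn_y = real_y[:n_syn].clone()
    opt = torch.optim.SGD([syn_x], lr=lr)
    # Reference gradients on the real local data (treated as constants).
    g_real = torch.autograd.grad(loss_fn(model(real_x), real_y),
                                 model.parameters())
    for _ in range(steps):
        opt.zero_grad()
        g_syn = torch.autograd.grad(loss_fn(model(syn_x), syn_y),
                                    model.parameters(), create_graph=True)
        mismatch = sum(((a - b) ** 2).sum() for a, b in zip(g_syn, g_real))
        mismatch.backward()
        opt.step()
    return syn_x.detach(), syn_y  # uploaded instead of raw data
```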
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
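A generic sketch of alleviating forgetting during local training: regularize each local step toward the global model's predictions so client updates do not overwrite previously learned knowledge. This distillation-style penalty is a stand-in for illustration; FedReg's actual mechanism differs in detail.

```python
import torch
import torch.nn.functional as F

def local_step(model, global_model, x, y, opt, beta=0.5):
    """One local update with an anti-forgetting regularizer: fit the
    local batch while staying close to the global model's predictions."""
    opt.zero_grad()
    logits = model(x)
    with torch.no_grad():
        global_logits = global_model(x)  # knowledge to be preserved
    loss = F.cross_entropy(logits, y) + beta * F.mse_loss(logits, global_logits)
    loss.backward()
    opt.step()
    return loss.item()
```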
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
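A toy leave-one-out proxy for influence over parameters: measure how far the aggregated model moves when one client's update is removed. This is only an intuition-level stand-in; the paper proposes a more efficient estimator than such brute-force recomputation.

```python
import numpy as np

def client_influence(client_updates):
    """Leave-one-out influence proxy: the parameter shift caused by
    removing each client from the (unweighted) average of updates."""
    k = len(client_updates)
    full = np.mean(client_updates, axis=0)
    scores = []
    for i in range(k):
        loo = (full * k - client_updates[i]) / (k - 1)  # average w/o client i
        scores.append(np.linalg.norm(full - loo))
    return scores
```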
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.