Uncertainty Minimization for Personalized Federated Semi-Supervised Learning
- URL: http://arxiv.org/abs/2205.02438v1
- Date: Thu, 5 May 2022 04:41:27 GMT
- Title: Uncertainty Minimization for Personalized Federated Semi-Supervised Learning
- Authors: Yanhang Shi, Siguang Chen, and Haijun Zhang
- Abstract summary: We propose a novel semi-supervised learning paradigm that allows partially labeled or unlabeled clients to seek labeling assistance from data-related clients (helper agents).
Experiments show that our proposed method can obtain superior performance and more stable convergence than other related works with partially labeled data.
- Score: 15.123493340717303
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since federated learning (FL) was introduced as a decentralized learning technique with privacy preservation, the statistical heterogeneity of distributed data has remained the main obstacle to achieving robust performance and stable convergence in FL applications. Model personalization methods have been studied to overcome this problem. However, existing approaches mostly presuppose fully labeled data, which is unrealistic in practice because labeling requires expertise. The primary issue caused by the partially labeled condition is that clients with deficient labeled data suffer from unfair performance gains, because they lack adequate insight into the local distribution needed to customize the global model. To tackle this problem, 1) we propose a novel personalized semi-supervised learning paradigm that allows partially labeled or unlabeled clients to seek labeling assistance from data-related clients (helper agents), thereby enhancing their perception of local data; 2) based on this paradigm, we design an uncertainty-based data-relation metric to ensure that the selected helpers provide trustworthy pseudo labels rather than misleading the local training; 3) to mitigate the network overload introduced by helper searching, we further develop a helper selection protocol that achieves efficient communication with negligible performance sacrifice. Experiments show that the proposed method obtains superior performance and more stable convergence than related works with partially labeled data, especially in highly heterogeneous settings.
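To make the helper-assisted pseudo-labeling idea concrete, below is a minimal sketch, not the authors' implementation; `helper_models`, `unlabeled_loader`, and the 0.9 confidence threshold are illustrative assumptions. Each candidate helper is scored by the mean predictive entropy of its model on the requester's unlabeled data, and only high-confidence helper predictions are accepted as pseudo labels, in the spirit of the uncertainty-based data-relation metric:

```python
# Minimal sketch of uncertainty-based helper selection and confidence-
# filtered pseudo-labeling; not the authors' code. Helper models, the
# data loader, and thresholds are illustrative assumptions.
import torch
import torch.nn.functional as F

def predictive_entropy(logits):
    """Shannon entropy of the softmax distribution, per sample."""
    probs = F.softmax(logits, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

@torch.no_grad()
def rank_helpers(helper_models, unlabeled_loader, device="cpu"):
    """Score each candidate helper by the mean predictive entropy of its
    model on the requester's unlabeled data; lower entropy suggests a
    more data-related (trustworthy) helper."""
    scores = []
    for model in helper_models:
        model.eval().to(device)
        total, count = 0.0, 0
        for x in unlabeled_loader:          # loader yields input batches
            ent = predictive_entropy(model(x.to(device)))
            total += ent.sum().item()
            count += ent.numel()
        scores.append(total / max(count, 1))
    # helper indices, most trustworthy (lowest uncertainty) first
    return sorted(range(len(helper_models)), key=lambda i: scores[i])

@torch.no_grad()
def pseudo_label(helper, x, conf_threshold=0.9):
    """Keep only pseudo labels whose max softmax probability clears the
    threshold, so low-confidence predictions cannot mislead training."""
    probs = F.softmax(helper(x), dim=-1)
    conf, labels = probs.max(dim=-1)
    mask = conf >= conf_threshold
    return x[mask], labels[mask]
```

Low mean entropy serves here as a proxy for a helper whose data distribution is related to the requester's, which is the role the paper's data-relation metric formalizes.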
Related papers
- FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization [11.040916982022978]
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework that uses bi-level optimization to tackle these data heterogeneity challenges.
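As a rough illustration only (a generic MAP-style personalization step, not FedMAP's actual bi-level algorithm; `lam` and the Gaussian prior are assumptions), a local update can penalize deviation from the global model, which is what a MAP objective with a global-model prior reduces to:

```python
# Generic MAP-flavored personalization sketch, not the FedMAP algorithm:
# a Gaussian prior centered at the global model turns the local MAP
# objective into task loss + proximal penalty. `lam` is an assumption.
import torch

def personalized_step(model, global_params, batch, loss_fn, opt, lam=0.1):
    x, y = batch
    task_loss = loss_fn(model(x), y)
    # negative log-prior term: (lam / 2) * ||w - w_global||^2
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(model.parameters(), global_params))
    total = task_loss + 0.5 * lam * prox
    opt.zero_grad()
    total.backward()
    opt.step()
    return total.item()
```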
arXiv Detail & Related papers (2024-05-29T11:28:06Z)
- Stable Diffusion-based Data Augmentation for Federated Learning with Non-IID Data [9.045647166114916]
Federated Learning (FL) is a promising paradigm for decentralized and collaborative model training.
FL struggles with a significant performance reduction and poor convergence when confronted with Non-Independent and Identically Distributed (Non-IID) data distributions.
We introduce Gen-FedSD, a novel approach that harnesses the powerful capability of state-of-the-art text-to-image foundation models.
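The entry does not detail the pipeline; a hedged sketch of the general recipe, using the `diffusers` library with an illustrative model ID and prompt template (Gen-FedSD's exact prompts, model, and filtering may differ), could look like this:

```python
# Illustrative sketch of client-side synthetic augmentation with an
# off-the-shelf text-to-image model; not Gen-FedSD's exact procedure.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def augment_missing_classes(missing_labels, images_per_class=16):
    """Generate synthetic samples for classes a client lacks locally,
    smoothing out its Non-IID label distribution."""
    synthetic = []
    for label in missing_labels:
        out = pipe([f"a photo of a {label}"] * images_per_class)
        synthetic.extend((img, label) for img in out.images)
    return synthetic
```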
arXiv Detail & Related papers (2024-05-13T16:57:48Z)
- One-Shot Federated Learning with Classifier-Guided Diffusion Models [44.604485649167216]
One-shot federated learning (OSFL) has gained attention in recent years due to its low communication cost.
In this paper, we explore the novel opportunities that diffusion models bring to OSFL and propose FedCADO.
FedCADO generates data that complies with the clients' distributions and subsequently trains the aggregated model on the server.
arXiv Detail & Related papers (2023-11-15T11:11:25Z)
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its unique properties and theoretical analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
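A generic sketch of the GMM ingredient (scikit-learn's `GaussianMixture`; FedGMM's federated EM procedure and classifier coupling are not reproduced here): fit a mixture to a client's local features, then use per-sample log-likelihood for novel-sample detection:

```python
# Generic GMM sketch; not FedGMM's federated training procedure.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_local_gmm(features, n_components=5, seed=0):
    """Fit a Gaussian mixture to local features of shape (n_samples, dim)."""
    return GaussianMixture(n_components=n_components,
                           random_state=seed).fit(features)

def novelty_threshold(gmm, train_features, quantile=0.05):
    """Likelihood cutoff: the lowest 5% of training log-likelihoods."""
    return np.quantile(gmm.score_samples(train_features), quantile)

def flag_novel(gmm, features, threshold):
    """Samples scoring below the cutoff are treated as out-of-distribution."""
    return gmm.score_samples(features) < threshold
```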
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Rethinking Data Heterogeneity in Federated Learning: Introducing a New Notion and Standard Benchmarks [65.34113135080105]
We show that data heterogeneity in current setups is not necessarily a problem, and can in fact be beneficial for the FL participants.
Our observations are intuitive.
Our code is available at https://github.com/MMorafah/FL-SC-NIID.
arXiv Detail & Related papers (2022-09-30T17:15:19Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Dubhe: Towards Data Unbiasedness with Homomorphic Encryption in Federated Learning Client Selection [16.975086164684882]
Federated learning (FL) is a distributed machine learning paradigm that allows clients to collaboratively train a model over their own local data.
We mathematically demonstrate the cause of performance degradation in FL and examine the performance of FL over various datasets.
We propose a pluggable system-level client selection method named Dubhe, which allows clients to proactively participate in training, preserving their privacy with the assistance of HE.
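A toy sketch of the homomorphic-encryption ingredient, using the `phe` Paillier library (the actual Dubhe protocol and its key management are more involved): clients encrypt their local label counts so the server can aggregate class statistics without seeing any individual distribution:

```python
# Toy sketch with python-paillier (`phe`); the real Dubhe protocol and
# key management are more involved than shown here.
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=2048)

def encrypt_counts(label_counts):
    """A client encrypts its per-class sample counts, e.g. [120, 3, 77]."""
    return [pub.encrypt(c) for c in label_counts]

def aggregate(encrypted_lists):
    """Element-wise homomorphic sum: the server adds ciphertexts without
    learning any single client's label distribution."""
    total = encrypted_lists[0]
    for enc in encrypted_lists[1:]:
        total = [a + b for a, b in zip(total, enc)]
    return total

# Only the secret-key holder decrypts the aggregate class distribution,
# which can then guide a balanced choice of participating clients.
clients = [encrypt_counts([10, 0, 5]), encrypt_counts([0, 8, 2])]
global_counts = [priv.decrypt(c) for c in aggregate(clients)]  # [10, 8, 7]
```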
arXiv Detail & Related papers (2021-09-08T13:00:46Z)
- Auto-weighted Robust Federated Learning with Corrupted Data Sources [7.475348174281237]
Federated learning provides a communication-efficient and privacy-preserving training process.
Standard federated learning techniques that naively minimize an average loss function are vulnerable to data corruptions.
We propose Auto-weighted Robust Federated Learning (arfl) to provide robustness against corrupted data sources.
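A simplified sketch of the auto-weighting idea (not arfl's exact joint optimization; the softmax temperature is an assumption): clients with anomalously high empirical loss, a possible sign of corrupted data, receive smaller aggregation weights:

```python
# Simplified loss-based auto-weighting sketch; not arfl's exact method.
import numpy as np

def auto_weights(client_losses, temperature=1.0):
    """Softmax over negative losses: clients with anomalously high loss
    (possibly corrupted data) get smaller aggregation weights."""
    logits = -np.asarray(client_losses, dtype=float) / temperature
    w = np.exp(logits - logits.max())
    return w / w.sum()

def weighted_average(client_params, weights):
    """Aggregate per-client parameter vectors with the robust weights."""
    return np.average(np.stack(client_params), axis=0, weights=weights)
```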
arXiv Detail & Related papers (2021-01-14T21:54:55Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient method to estimate this metric.
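As a naive illustration of the notion only (the paper proposes a much cheaper estimator than brute-force leave-one-out recomputation):

```python
# Naive leave-one-out illustration of client influence; the paper's
# estimator avoids recomputing the aggregate once per client.
import numpy as np

def loo_influence(client_updates):
    """client_updates: list of flattened parameter arrays, equally
    weighted. A client's influence is how much the aggregated parameters
    move when its update is excluded."""
    full = np.mean(client_updates, axis=0)
    return [np.linalg.norm(full - np.mean(
                [u for i, u in enumerate(client_updates) if i != k],
                axis=0))
            for k in range(len(client_updates))]
```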
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
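The framework referenced here optimizes a worst case over a neighborhood of the empirical distribution; a standard generic formulation (the paper's exact ambiguity set and privacy mechanism may differ) is:

```latex
% Generic distributionally robust objective; the paper's exact ambiguity
% set and privacy mechanism may differ.
\min_{\theta} \;\; \sup_{P \,:\, d(P,\,\widehat{P}_N) \le \rho}
    \mathbb{E}_{x \sim P}\big[\ell(\theta; x)\big]
```

Here \widehat{P}_N is the empirical data distribution, d is a discrepancy such as a Wasserstein distance, and \rho bounds how far adversarially manipulated data may deviate; robustness comes from training against the worst distribution inside this ball.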
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.