Federated Learning with Noisy User Feedback
- URL: http://arxiv.org/abs/2205.03092v1
- Date: Fri, 6 May 2022 09:14:24 GMT
- Title: Federated Learning with Noisy User Feedback
- Authors: Rahul Sharma, Anil Ramakrishna, Ansel MacLaughlin, Anna Rumshisky,
Jimit Majmudar, Clement Chung, Salman Avestimehr, Rahul Gupta
- Abstract summary: Federated learning (FL) has emerged as a method for training ML models on edge devices using sensitive user data.
We propose a strategy for training FL models using positive and negative user feedback.
We show that our method improves substantially over a self-training baseline, achieving performance closer to models trained with full supervision.
- Score: 26.798303045807508
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Machine Learning (ML) systems are getting increasingly popular, and drive
more and more applications and services in our daily life. This has led to
growing concerns over user privacy, since human interaction data typically
needs to be transmitted to the cloud in order to train and improve such
systems. Federated learning (FL) has recently emerged as a method for training
ML models on edge devices using sensitive user data and is seen as a way to
mitigate concerns over data privacy. However, since ML models are most commonly
trained with label supervision, we need a way to extract labels on the edge to make
FL viable. In this work, we propose a strategy for training FL models using
positive and negative user feedback. We also design a novel framework to study
different noise patterns in user feedback, and explore how well standard
noise-robust objectives can help mitigate this noise when training models in a
federated setting. We evaluate our proposed training setup through detailed
experiments on two text classification datasets and analyze the effects of
varying levels of user reliability and feedback noise on model performance. We
show that our method improves substantially over a self-training baseline,
achieving performance closer to models trained with full supervision.
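To make the setup concrete, below is a minimal sketch of a client-side update that turns positive/negative feedback on the model's own predictions into (noisy) labels and fits them with a standard noise-robust objective. The generalized cross-entropy loss, the decision to drop rejected examples, and all function names here are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gce_loss(logits, labels, q=0.7):
    """Generalized cross-entropy (Zhang & Sabuncu, 2018): (1 - p_y^q) / q.
    Approaches standard CE as q -> 0 and MAE at q = 1, trading accuracy
    on clean labels for robustness to noisy ones."""
    probs = F.softmax(logits, dim=-1)
    p_y = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.pow(q)) / q).mean()

def client_update(model, feedback_batches, lr=0.01, q=0.7):
    """One round of local training on labels derived from user feedback.

    `feedback_batches` yields (inputs, predicted_label, feedback) tensors,
    where feedback == 1 means the user accepted the model's prediction and
    feedback == 0 means they rejected it. Rejected examples are simply
    dropped in this sketch; accepted (noisy) labels are fit with GCE.
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y_pred, fb in feedback_batches:
        keep = fb == 1
        if keep.sum() == 0:
            continue
        loss = gce_loss(model(x[keep]), y_pred[keep], q=q)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Return local weights for server-side aggregation (e.g., FedAvg).
    return {k: v.detach().clone() for k, v in model.state_dict().items()}
```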
Related papers
- Private Federated Learning In Real World Application -- A Case Study [15.877427073033184]
This paper presents an implementation of machine learning model training using private federated learning (PFL) on edge devices.
We introduce a novel framework that uses PFL to address the challenge of training a model using users' private data.
The framework ensures that user data remain on individual devices, with only essential model updates transmitted to a central server for aggregation with privacy guarantees.
arXiv Detail & Related papers (2025-02-06T23:38:50Z) - Federated Testing (FedTest): A New Scheme to Enhance Convergence and Mitigate Adversarial Attacks in Federating Learning [35.14491996649841]
We introduce a novel federated learning framework, which we call federated testing for federated learning (FedTest).
In FedTest, the local data of a specific user is used to train the model of that user and test the models of the other users.
Our numerical results reveal that the proposed method not only accelerates convergence rates but also diminishes the potential influence of malicious users.
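A rough sketch of the cross-testing idea follows, assuming the server simply weights each user's model by its average accuracy on the other users' local test data; the exact score-to-weight mapping and malicious-user handling in FedTest may differ.

```python
import numpy as np

def fedtest_aggregate(client_models, cross_test_acc):
    """Aggregate client models, weighting each by how well it scores when
    tested on the *other* users' local data.

    client_models: list of parameter vectors (np.ndarray), one per user.
    cross_test_acc: matrix A where A[i, j] is the accuracy of user j's model
    evaluated on user i's local test data (diagonal entries are ignored).
    """
    A = np.asarray(cross_test_acc, dtype=float)
    n = len(client_models)
    mask = ~np.eye(n, dtype=bool)                       # exclude self-testing
    scores = np.array([A[:, j][mask[:, j]].mean() for j in range(n)])
    weights = scores / max(scores.sum(), 1e-12)         # low-scoring (possibly
    return sum(w * m for w, m in zip(weights, client_models))  # malicious) models get less weight
```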
arXiv Detail & Related papers (2025-01-19T21:01:13Z) - Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effects of noise and improve generalization.
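As a rough illustration of tuning an affine map over frozen, noisily pre-trained features (the specific NMTune objectives are not reproduced here; this skeleton and its names are assumptions):

```python
import torch
import torch.nn as nn

class AffineFeatureTuner(nn.Module):
    """The noisy pre-trained backbone stays frozen; only a lightweight
    per-dimension affine map over its features and a linear classifier
    are learned on the downstream task."""

    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(feat_dim))   # per-dimension rescaling
        self.shift = nn.Parameter(torch.zeros(feat_dim))  # per-dimension offset
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, frozen_features):
        z = frozen_features * self.scale + self.shift     # affine transform of the feature space
        return self.head(z)
```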
arXiv Detail & Related papers (2024-03-11T16:22:41Z) - Window-based Model Averaging Improves Generalization in Heterogeneous Federated Learning [29.140054600391917]
Federated Learning (FL) aims to learn a global model from distributed users while protecting their privacy.
We propose WIMA (Window-based Model Averaging), which aggregates global models from different rounds using a window-based approach.
Our experiments demonstrate the robustness of WIMA against distribution shifts and bad client sampling, resulting in smoother and more stable learning trends.
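A minimal sketch of the window-based averaging idea, assuming uniform weights over the most recent global models (WIMA's exact weighting scheme may differ):

```python
from collections import deque
import numpy as np

class WindowModelAverager:
    """Keep the global models from the last `window` aggregation rounds and
    serve their average, smoothing out rounds hurt by bad client sampling
    or distribution shift."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def update(self, global_model):
        """Add this round's aggregated model and return the windowed average."""
        self.history.append(np.asarray(global_model, dtype=float))
        return np.mean(list(self.history), axis=0)
```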
arXiv Detail & Related papers (2023-10-02T17:30:14Z) - Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise.
arXiv Detail & Related papers (2023-09-29T06:18:15Z) - Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - Mobility-Aware Cluster Federated Learning in Hierarchical Wireless Networks [81.83990083088345]
We develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks.
Our analysis proves that the learning performance of HFL deteriorates drastically with highly-mobile users.
To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm.
arXiv Detail & Related papers (2021-08-20T10:46:58Z) - UVeQFed: Universal Vector Quantization for Federated Learning [179.06583469293386]
Federated learning (FL) is an emerging approach to training learning models without requiring users to share their possibly private labeled data.
In FL, each user trains its copy of the learning model locally. The server then collects the individual updates and aggregates them into a global model.
We show that combining universal vector quantization methods with FL yields a decentralized training system in which the compression of the trained models induces only a minimum distortion.
arXiv Detail & Related papers (2020-06-05T07:10:22Z)
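A simplified sketch of that pipeline follows, with plain scalar quantization standing in for UVeQFed's universal (dithered) vector quantizer; all names and the quantizer choice are illustrative assumptions.

```python
import numpy as np

def quantize_update(delta, num_levels=16):
    """Uniformly quantize a client's model update (delta) before upload.
    UVeQFed uses universal vector quantization; scalar quantization is
    substituted here purely to keep the sketch short."""
    max_abs = float(np.abs(delta).max())
    scale = max_abs / (num_levels // 2) if max_abs > 0 else 1.0
    codes = np.round(delta / scale).astype(np.int8)   # compressed update
    return codes, scale

def server_round(global_model, client_deltas, num_levels=16):
    """Each user trains locally and uploads a compressed update; the server
    dequantizes the updates and averages them into the new global model."""
    decoded = [codes * scale
               for codes, scale in (quantize_update(d, num_levels) for d in client_deltas)]
    return global_model + np.mean(decoded, axis=0)
```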
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.