FedRN: Exploiting k-Reliable Neighbors Towards Robust Federated Learning
- URL: http://arxiv.org/abs/2205.01310v1
- Date: Tue, 3 May 2022 05:09:08 GMT
- Title: FedRN: Exploiting k-Reliable Neighbors Towards Robust Federated Learning
- Authors: SangMook Kim, Wonyoung Shin, Soohyuk Jang, Hwanjun Song, Se-Young Yun
- Abstract summary: FedRN exploits k-reliable neighbors with high data expertise or similarity.
Compared with existing robust training methods, FedRN significantly improves the test accuracy in the presence of noisy labels.
- Score: 15.101940747707701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robustness is becoming another important challenge of federated learning in
that the data collection process in each client is naturally accompanied by
noisy labels. However, the problem is far more complex and challenging owing to varying
levels of data heterogeneity and label noise across clients, which exacerbate the
client-to-client performance discrepancy. In this work, we propose a robust
federated learning method called FedRN, which exploits k-reliable neighbors
with high data expertise or similarity. Our method helps mitigate the gap
between low- and high-performance clients by training only with a selected set
of clean examples, identified by their ensembled mixture models. We demonstrate
the superiority of FedRN via extensive evaluations on three real-world or
synthetic benchmark datasets. Compared with existing robust training methods,
FedRN significantly improves the test accuracy in the presence of noisy labels.
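The clean-example selection described in the abstract can be illustrated concretely: per-sample training losses are modeled with a two-component Gaussian mixture under the local model and under each of the k reliable neighbor models, and the per-model clean posteriors are averaged before thresholding. This is a minimal sketch under stated assumptions; the function names, the threshold `tau`, and the use of scikit-learn's GaussianMixture are illustrative, not the paper's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def clean_posterior(losses):
    """Fit a 2-component GMM to per-sample losses and return each sample's
    posterior probability under the low-mean (presumed clean) component."""
    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(losses.reshape(-1, 1))
    clean = int(np.argmin(gmm.means_.ravel()))  # low-loss component index
    return gmm.predict_proba(losses.reshape(-1, 1))[:, clean]

def select_clean(loss_matrix, tau=0.5):
    """loss_matrix: (k+1, n) array of losses for n local samples under the
    local model and k reliable neighbor models; returns indices of samples
    whose averaged (ensembled) clean posterior exceeds the assumed tau."""
    posteriors = np.stack([clean_posterior(l) for l in loss_matrix])
    return np.where(posteriors.mean(axis=0) > tau)[0]
```

Each client would then train only on the returned indices for that round, which is how a mixture-model ensemble can narrow the gap between low- and high-performance clients.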
Related papers
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets, which are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Federated Learning with Extremely Noisy Clients via Negative Distillation [70.13920804879312]
Federated learning (FL) has shown remarkable success in cooperatively training deep models, while struggling with noisy labels.
We propose a novel approach, called negative distillation (FedNed), to leverage models trained on noisy clients.
FedNed first identifies noisy clients and then, rather than discarding them, exploits their models in a knowledge-distillation manner.
arXiv Detail & Related papers (2023-12-20T01:59:48Z)
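A hedged sketch of one reading of "negative distillation": models from identified noisy clients serve as negative teachers, and the student is pushed away from their predictive distribution instead of toward it. The loss form below, cross-entropy minus a small weighted KD term, is an illustrative assumption, not FedNed's published objective.

```python
import torch
import torch.nn.functional as F

def negative_distillation_loss(student_logits, noisy_teacher_logits,
                               targets, beta=0.1, temperature=2.0):
    ce = F.cross_entropy(student_logits, targets)
    # Standard KD term, but subtracted so that agreement with the noisy
    # teacher is penalized; beta is kept small (and would be bounded or
    # annealed in practice) so the repulsive term cannot dominate.
    kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=1),
                  F.softmax(noisy_teacher_logits / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2
    return ce - beta * kd
```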
- Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z)
- Learning Cautiously in Federated Learning with Noisy and Heterogeneous Clients [4.782145666637457]
Federated learning (FL) is a distributed framework for collaborative training with privacy guarantees.
In real-world scenarios, clients may have non-IID data (local class imbalance) with poor annotation quality (label noise).
We propose FedCNI, which requires no additional clean proxy dataset.
It includes a noise-resilient local solver and a robust global aggregator.
arXiv Detail & Related papers (2023-04-06T06:47:14Z)
- Label Noise-Robust Learning using a Confidence-Based Sieving Strategy [15.997774467236352]
In learning tasks with label noise, improving model robustness against overfitting is a pivotal challenge.
Identifying the samples with noisy labels and preventing the model from learning them is a promising approach to address this challenge.
We propose a novel discriminator metric, called confidence error, and a sieving strategy, called CONFES, to differentiate effectively between clean and noisy samples.
arXiv Detail & Related papers (2022-10-11T10:47:28Z)
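As a rough illustration of confidence-based sieving, one plausible instantiation of the confidence error is the gap between the model's top predicted probability and its probability for the given (possibly noisy) label; samples with a small gap are kept as clean. The exact metric and threshold in the paper may differ; this sketch only names the assumption.

```python
import torch

def confidence_error(probs, labels):
    """probs: (n, num_classes) softmax outputs; labels: (n,) given labels.
    The gap is zero when the model's top prediction matches the label."""
    top = probs.max(dim=1).values
    given = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    return top - given

def sieve_clean(probs, labels, threshold=0.1):
    """Keep samples whose confidence error falls below an assumed threshold."""
    return torch.where(confidence_error(probs, labels) < threshold)[0]
```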
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
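As a rough sketch of heterogeneity-aware sampling, one can score a candidate group of clients by a class-imbalance measure of their pooled label distribution, here the sum of squared class proportions (minimized by a uniform distribution), and greedily pick the most balanced group. Both the measure and the greedy selection are illustrative assumptions; the paper additionally derives its measure under homomorphic encryption to preserve privacy.

```python
import numpy as np

def class_imbalance(counts):
    """counts: per-class sample counts of a group of clients; equals
    1/num_classes when the pooled distribution is perfectly balanced."""
    p = counts / counts.sum()
    return float((p ** 2).sum())

def greedy_sample(client_counts, m):
    """Greedily pick m clients whose pooled data is most class-balanced."""
    chosen, pooled = [], np.zeros_like(client_counts[0], dtype=float)
    for _ in range(m):
        best = min((i for i in range(len(client_counts)) if i not in chosen),
                   key=lambda i: class_imbalance(pooled + client_counts[i]))
        chosen.append(best)
        pooled += client_counts[best]
    return chosen
```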
- FedNoiL: A Simple Two-Level Sampling Method for Federated Learning with Noisy Labels [49.47228898303909]
Federated learning (FL) aims at training a global model on the server side while the training data are collected and located at the local devices.
Local training on noisy labels easily overfits to them, which is devastating to the global model through aggregation.
We develop a simple two-level sampling method "FedNoiL" that selects clients for more robust global aggregation on the server.
arXiv Detail & Related papers (2022-05-20T12:06:39Z)
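A minimal sketch of what two-level sampling could look like: the server samples clients with probability weighted toward those estimated to be cleaner, and each selected client then trains on its lowest-loss samples. The clean-ratio estimate, the weighting, and the keep fraction below are illustrative assumptions, not FedNoiL's exact procedure.

```python
import numpy as np

def sample_clients(est_clean_ratio, m, rng=None):
    """Level 1: pick m clients, weighted by their estimated clean ratio."""
    rng = rng or np.random.default_rng(0)
    p = est_clean_ratio / est_clean_ratio.sum()
    return rng.choice(len(est_clean_ratio), size=m, replace=False, p=p)

def sample_low_loss(losses, keep=0.7):
    """Level 2: within a client, keep the fraction of samples with the
    smallest loss, treating them as likely-clean."""
    k = max(1, int(keep * len(losses)))
    return np.argsort(losses)[:k]
```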
- Semi-supervised Long-tailed Recognition using Alternate Sampling [95.93760490301395]
Main challenges in long-tailed recognition come from the imbalanced data distribution and sample scarcity in its tail classes.
We propose a new recognition setting, namely semi-supervised long-tailed recognition.
We demonstrate significant accuracy improvements over other competitive methods on two datasets.
arXiv Detail & Related papers (2021-05-01T00:43:38Z)
- Auto-weighted Robust Federated Learning with Corrupted Data Sources [7.475348174281237]
Federated learning provides a communication-efficient and privacy-preserving training process.
Standard federated learning techniques that naively minimize an average loss function are vulnerable to data corruptions.
We propose Auto-weighted Robust Federated Learning (arfl) to provide robustness against corrupted data sources.
arXiv Detail & Related papers (2021-01-14T21:54:55Z)
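A hedged sketch of auto-weighted aggregation in the spirit of arfl: each client's weight shrinks as its empirical loss rises above its peers', so corrupted sources contribute little or nothing to the global average. The hinge-style weighting below is an illustrative assumption, not the paper's derived solution.

```python
import numpy as np

def auto_weights(client_losses, slack=1.0):
    """Weight each client by max(0, threshold - loss), normalized; clients
    whose loss exceeds the best loss plus an assumed slack are zeroed out."""
    threshold = client_losses.min() + slack
    w = np.clip(threshold - client_losses, 0.0, None)
    return w / w.sum()

def aggregate(client_params, client_losses):
    """Weighted average of (flattened) client model parameters."""
    w = auto_weights(client_losses)
    return sum(wi * p for wi, p in zip(w, client_params))
```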