Certifiably Byzantine-Robust Federated Conformal Prediction
- URL: http://arxiv.org/abs/2406.01960v1
- Date: Tue, 4 Jun 2024 04:43:30 GMT
- Title: Certifiably Byzantine-Robust Federated Conformal Prediction
- Authors: Mintong Kang, Zhen Lin, Jimeng Sun, Cao Xiao, Bo Li
- Abstract summary: We introduce a novel framework, Rob-FCP, which executes robust federated conformal prediction, effectively countering malicious clients.
We empirically demonstrate the robustness of Rob-FCP against diverse proportions of malicious clients under a variety of Byzantine attacks.
- Score: 49.23374238798428
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conformal prediction has shown impressive capacity in constructing statistically rigorous prediction sets for machine learning models with exchangeable data samples. Siloed datasets, coupled with escalating privacy concerns related to local data sharing, have inspired recent innovations extending conformal prediction into federated environments with distributed data samples. However, this framework for distributed uncertainty quantification is susceptible to Byzantine failures: a minor subset of malicious clients can significantly compromise the practicality of coverage guarantees. To address this vulnerability, we introduce a novel framework, Rob-FCP, which executes robust federated conformal prediction, effectively countering malicious clients capable of reporting arbitrary statistics in the conformal calibration process. We theoretically provide the conformal coverage bound of Rob-FCP in the Byzantine setting and show that the coverage of Rob-FCP is asymptotically close to the desired coverage level. We also propose a malicious client number estimator to tackle the more challenging setting where the number of malicious clients is unknown to the defender, and theoretically show its effectiveness. We empirically demonstrate the robustness of Rob-FCP against diverse proportions of malicious clients under a variety of Byzantine attacks on five standard benchmark and real-world healthcare datasets.
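To make the federated calibration step concrete, below is a minimal sketch of split conformal calibration with robust aggregation of client-reported quantiles. It is an illustration under simplifying assumptions (scalar nonconformity scores, a known count of malicious clients, and a simple trimmed mean as the defense); the helper names `client_quantile` and `robust_aggregate` are hypothetical, and Rob-FCP's actual aggregation rule and malicious-client estimator differ from this sketch.

```python
import numpy as np

def client_quantile(scores: np.ndarray, alpha: float) -> float:
    """Each client computes the conformal quantile of its local
    nonconformity scores (e.g., 1 - softmax probability of the true label)."""
    n = len(scores)
    # Finite-sample-corrected quantile level used in split conformal prediction.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return float(np.quantile(scores, level, method="higher"))

def robust_aggregate(reported: np.ndarray, n_malicious: int) -> float:
    """Hypothetical defense: drop the n_malicious smallest and largest
    reports before averaging. This is a stand-in (trimmed mean), not
    Rob-FCP's aggregation; it illustrates why trimming bounds the
    influence of arbitrary Byzantine reports."""
    trimmed = np.sort(reported)[n_malicious : len(reported) - n_malicious]
    return float(trimmed.mean())

rng = np.random.default_rng(0)
alpha = 0.1  # target miscoverage level

# Honest clients report quantiles of i.i.d. calibration scores;
# Byzantine clients report arbitrary statistics.
honest = [client_quantile(rng.uniform(size=500), alpha) for _ in range(8)]
byzantine = [0.0, 0.0]  # adversarial reports that would shrink coverage
reports = np.array(honest + byzantine)

tau = robust_aggregate(reports, n_malicious=2)
# The prediction set for a test input x is {y : score(x, y) <= tau}.
print(f"robust threshold: {tau:.3f}  naive mean: {reports.mean():.3f}")
```

Under this toy trimming rule, up to `n_malicious` arbitrary reports cannot pull the threshold outside the range spanned by honest quantiles, which is the intuition behind bounding the coverage loss; the naive mean carries no such guarantee.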
Related papers
- Noise-Adaptive Conformal Classification with Marginal Coverage [53.74125453366155]
We introduce an adaptive conformal inference method capable of efficiently handling deviations from exchangeability caused by random label noise.
We validate our method through extensive numerical experiments demonstrating its effectiveness on synthetic and real data sets.
arXiv Detail & Related papers (2025-01-29T23:55:23Z) - Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective [65.65471972217814]
Federated recommendation (FR) based on federated learning (FL) has emerged, keeping personal data on the local client and updating a model collaboratively.
FR has a unique sparse aggregation mechanism, where the embedding of each item is updated by only a subset of clients, rather than by all clients as in the dense aggregation of general FL.
In this paper, we reformulate the Byzantine robustness under sparse aggregation by defining the aggregation for a single item as the smallest execution unit.
We propose a family of effective attack strategies, named Spattack, which exploit the vulnerability in sparse aggregation and are categorized by the adversary's knowledge and capability.
arXiv Detail & Related papers (2025-01-06T15:19:26Z) - Robust Federated Learning in the Face of Covariate Shift: A Magnitude Pruning with Hybrid Regularization Framework for Enhanced Model Aggregation [1.519321208145928]
Federated Learning (FL) offers a promising framework for individuals aiming to collaboratively develop a shared model.
However, variations in data distribution among clients can profoundly affect FL methodologies, primarily due to instabilities in the aggregation process.
We propose a novel FL framework combining individual parameter pruning and regularization techniques to improve the robustness of aggregating individual clients' models.
arXiv Detail & Related papers (2024-12-19T16:22:37Z) - FedCAP: Robust Federated Learning via Customized Aggregation and Personalization [13.17735010891312]
Federated learning (FL) has been applied to various privacy-preserving scenarios.
We propose FedCAP, a robust FL framework against both data heterogeneity and Byzantine attacks.
We show that FedCAP performs well in several non-IID settings and shows strong robustness under a series of poisoning attacks.
arXiv Detail & Related papers (2024-10-16T23:01:22Z) - COLEP: Certifiably Robust Learning-Reasoning Conformal Prediction via Probabilistic Circuits [21.140271657387903]
Conformal prediction has shown impressive performance in constructing statistically rigorous prediction sets for arbitrary black-box machine learning models.
We propose a certifiably robust learning-reasoning conformal prediction framework (COLEP) via probabilistic circuits.
We show that COLEP achieves up to 12% improvement in certified coverage on GTSRB, 9% on CIFAR-10, and 14% on AwA2.
arXiv Detail & Related papers (2024-03-17T21:23:45Z) - Federated Conformal Predictors for Distributed Uncertainty
Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework.
arXiv Detail & Related papers (2023-05-27T19:57:27Z) - Performance Weighting for Robust Federated Learning Against Corrupted
Sources [1.76179873429447]
Federated learning has emerged as a dominant computational paradigm for distributed machine learning.
In real-world applications, a federated environment may consist of a mixture of benevolent and malicious clients.
We show that the standard global aggregation scheme of local weights is inefficient in the presence of corrupted clients.
arXiv Detail & Related papers (2022-05-02T20:01:44Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual
Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.