Towards Trustworthy Federated Learning with Untrusted Participants
- URL: http://arxiv.org/abs/2505.01874v2
- Date: Wed, 04 Jun 2025 15:22:18 GMT
- Title: Towards Trustworthy Federated Learning with Untrusted Participants
- Authors: Youssef Allouah, Rachid Guerraoui, John Stephan
- Abstract summary: This paper shows that a significantly weaker assumption suffices: each pair of participants shares a randomness seed unknown to others. We propose CafCor, an algorithm that integrates robust gradient aggregation with correlated noise injection. We prove that CafCor achieves strong privacy-utility trade-offs, significantly outperforming local differential privacy methods.
- Score: 7.278033100480175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Resilience against malicious participants and data privacy are essential for trustworthy federated learning, yet achieving both with good utility typically requires the strong assumption of a trusted central server. This paper shows that a significantly weaker assumption suffices: each pair of participants shares a randomness seed unknown to others. In a setting where malicious participants may collude with an untrusted server, we propose CafCor, an algorithm that integrates robust gradient aggregation with correlated noise injection, using shared randomness between participants. We prove that CafCor achieves strong privacy-utility trade-offs, significantly outperforming local differential privacy (DP) methods, which do not make any trust assumption, while approaching central DP utility, where the server is fully trusted. Empirical results on standard benchmarks validate CafCor's practicality, showing that privacy and robustness can coexist in distributed systems without sacrificing utility or trusting the server.
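The core mechanism is easiest to see in code. Below is a minimal sketch of the two ingredients the abstract names: pairwise correlated noise derived from shared seeds, which cancels in an honest sum, combined with a robust aggregation rule. All names (`pairwise_noise`, `coordinate_trimmed_mean`) and the choice of a coordinate-wise trimmed mean are illustrative assumptions, not CafCor's actual specification.

```python
import numpy as np

def pairwise_noise(i, j, seed_ij, dim, sigma):
    """Gaussian noise that participants i and j derive from their shared
    seed; i adds it and j subtracts it, so the pair cancels in a sum."""
    rng = np.random.default_rng(seed_ij)
    z = rng.normal(0.0, sigma, dim)
    return z if i < j else -z

def mask_gradient(i, grad, shared_seeds, sigma):
    """Mask participant i's gradient with all of its pairwise noise terms.
    shared_seeds[i][j] is the seed that i shares (only) with j."""
    noise = sum(pairwise_noise(i, j, s, grad.shape[0], sigma)
                for j, s in shared_seeds[i].items())
    return grad + noise

def coordinate_trimmed_mean(updates, trim):
    """A stand-in robust aggregator: per coordinate, drop the `trim`
    smallest and largest values, then average the rest."""
    stacked = np.sort(np.stack(updates), axis=0)
    return stacked[trim:len(updates) - trim].mean(axis=0)
```

Note that the pairwise noise cancels exactly only under plain summation; a robust rule that discards values breaks exact cancellation, which is why the privacy-utility trade-off the paper analyzes is non-trivial.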
Related papers
- Careful Whisper: Attestation for peer-to-peer Confidential Computing networks [4.502223155420236]
TEEs enable secure data processing and sharing in peer-to-peer networks, such as vehicular ad hoc networks of autonomous vehicles. A naive peer-to-peer attestation approach, where every TEE directly attests every other TEE, results in quadratic communication overhead. We present Careful Whisper, a gossip-based protocol that disseminates trust efficiently, reducing complexity under ideal conditions.
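As a rough illustration of why gossip helps, the toy sketch below spreads trust transitively: each peer attests a few random neighbours per round and merges trusted sets, instead of attesting all n-1 peers directly. The function and its parameters are hypothetical; the real protocol verifies TEE attestation quotes before extending trust.

```python
import random

def gossip_trust(peers, rounds, fanout=2, seed=0):
    """Toy gossip dissemination: each peer starts by trusting only
    itself; per round it picks `fanout` random peers, attests them
    (abstracted away here), and the two merge their trusted sets.
    Trust reaches everyone in roughly O(log n) rounds instead of the
    O(n^2) messages of all-pairs direct attestation."""
    rng = random.Random(seed)
    trusted = {p: {p} for p in peers}
    for _ in range(rounds):
        for p in peers:
            for q in rng.sample([x for x in peers if x != p], fanout):
                merged = trusted[p] | trusted[q]
                trusted[p] = merged
                trusted[q] = merged
    return trusted
```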
arXiv Detail & Related papers (2025-07-20T02:57:34Z) - Towards Trustworthy Federated Learning [26.25193909843069]
This paper develops a comprehensive framework to address three critical challenges to trustworthiness in federated learning (FL). To improve the system's defense against Byzantine attacks, we develop a Two-sided Norm Based Screening mechanism. We also adopt a differential privacy-based scheme to prevent raw data at local clients from being inferred by curious parties.
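The two-sided norm screening idea can be sketched as follows: discard gradients whose norms are suspiciously large or suspiciously small before averaging. The quantile thresholds below are illustrative guesses, not the paper's actual rule.

```python
import numpy as np

def two_sided_norm_screening(updates, low_q=0.1, high_q=0.9):
    """Keep only client updates whose L2 norm falls between the low and
    high quantiles of all submitted norms, then average the survivors.
    Screening from both sides counters inflated Byzantine gradients as
    well as near-zero 'free-rider' ones."""
    norms = np.array([np.linalg.norm(u) for u in updates])
    lo, hi = np.quantile(norms, [low_q, high_q])
    kept = [u for u, n in zip(updates, norms) if lo <= n <= hi]
    return np.mean(np.stack(kept), axis=0)
```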
arXiv Detail & Related papers (2025-03-05T17:25:20Z) - Secure Stateful Aggregation: A Practical Protocol with Applications in Differentially-Private Federated Learning [36.42916779389165]
DP-FTRL-based approaches have already seen widespread deployment in industry.
We introduce secure stateful aggregation: a simple append-only data structure that allows for the private storage of aggregate values.
We observe that secure stateful aggregation suffices for realizing DP-FTRL-based private federated learning.
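The "append-only data structure" is essentially an interface contract: clients can add to a running aggregate, and nobody, including the server, can read individual contributions back. A plaintext mock of that interface (the cryptography that actually enforces it is the paper's contribution) might look like this:

```python
import numpy as np

class StatefulAggregator:
    """Mock append-only aggregate store. Clients append vectors; the
    server can only read running sums, never individual entries. The
    real protocol enforces this cryptographically; the shape of the
    API is what makes DP-FTRL's stateful noise addition expressible."""

    def __init__(self, dim):
        self._sum = np.zeros(dim)
        self._count = 0

    def append(self, contribution):
        self._sum += contribution      # the individual value is not retained
        self._count += 1

    def aggregate(self):
        return self._sum.copy(), self._count
```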
arXiv Detail & Related papers (2024-10-15T07:45:18Z) - Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to quantifying and ensuring the reliability of privacy protections.
arXiv Detail & Related papers (2024-08-28T08:22:21Z) - Privacy Preserving Semi-Decentralized Mean Estimation over Intermittently-Connected Networks [59.43433767253956]
We consider the problem of privately estimating the mean of vectors distributed across different nodes of an unreliable wireless network.
In a semi-decentralized setup, nodes can collaborate with their neighbors to compute a local consensus, which they relay to a central server.
We study the tradeoff between collaborative relaying and privacy leakage due to the data sharing among nodes.
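A toy version of the semi-decentralized pipeline, assuming Gaussian perturbation and simple neighbour averaging (both stand-ins for the paper's actual mechanism):

```python
import numpy as np

def private_relayed_mean(values, links, sigma, seed=0):
    """Each node perturbs its vector with Gaussian noise, averages it
    with the neighbours that happened to be reachable this round
    (links[i] is that, possibly empty, set), and relays the local
    consensus to a server that averages the relays. More connectivity
    averages out more noise, which is the collaboration/privacy
    tension studied in the paper."""
    rng = np.random.default_rng(seed)
    noisy = [v + rng.normal(0.0, sigma, v.shape) for v in values]
    relays = [np.mean([noisy[i]] + [noisy[j] for j in links[i]], axis=0)
              for i in range(len(values))]
    return np.mean(relays, axis=0)
```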
arXiv Detail & Related papers (2024-06-06T06:12:15Z) - The Privacy Power of Correlated Noise in Decentralized Learning [39.48990597191246]
We propose Decor, a variant of decentralized SGD with differential privacy guarantees.
We do so under SecLDP, our new relaxation of local DP, which protects all user communications against an external eavesdropper and curious users.
arXiv Detail & Related papers (2024-05-02T06:14:56Z) - Enhancing Mutual Trustworthiness in Federated Learning for Data-Rich Smart Cities [29.951569327998133]
Federated learning is a promising collaborative and privacy-preserving machine learning approach in data-rich smart cities.
Traditional approaches, such as the random client selection technique, pose several threats to the system's integrity.
We propose a novel framework that addresses the mutual trustworthiness in federated learning by considering the trust needs of both the client and the server.
arXiv Detail & Related papers (2024-05-01T08:49:22Z) - Efficient Conformal Prediction under Data Heterogeneity [79.35418041861327]
Conformal Prediction (CP) stands out as a robust framework for uncertainty quantification.
Existing approaches for tackling non-exchangeability lead to methods that are not computable beyond the simplest examples.
This work introduces a new efficient approach to CP that produces provably valid confidence sets for fairly general non-exchangeable data distributions.
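For context, the exchangeable baseline that this work generalizes is ordinary split conformal prediction, which is a few lines for regression; the sketch below is the standard construction, not the paper's method.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    """Standard split conformal prediction for regression: calibrate
    absolute residuals on a held-out set, take a finite-sample-corrected
    quantile, and return a (1 - alpha) prediction interval. Validity
    rests on exchangeability, the assumption the paper relaxes."""
    scores = np.abs(np.asarray(cal_labels) - np.asarray(cal_preds))
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level)
    return test_pred - q, test_pred + q
```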
arXiv Detail & Related papers (2023-12-25T20:02:51Z) - Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting, and prove its consistency and convergence rate.
arXiv Detail & Related papers (2023-10-09T11:44:50Z) - Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks [68.20436971825941]
Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users.
Several studies have shown that FL is vulnerable to poisoning attacks.
To protect the privacy of local users, FL is usually trained in a differentially private way.
arXiv Detail & Related papers (2022-09-08T21:01:42Z) - MUDGUARD: Taming Malicious Majorities in Federated Learning using Privacy-Preserving Byzantine-Robust Clustering [34.429892915267686]
Byzantine-robust Federated Learning (FL) aims to counter malicious clients and train an accurate global model while maintaining an extremely low attack success rate.
Most existing systems, however, are only robust when most of the clients are honest.
We propose a novel Byzantine-robust and privacy-preserving FL system, called MUDGUARD, that can operate under a malicious minority or majority on both the server and client sides.
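The key twist, clustering clients by update similarity rather than trusting a global majority vote, can be sketched as follows. DBSCAN over cosine distances is an illustrative choice here, and the real system runs its clustering under privacy protections so the server never inspects raw updates.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_client_updates(updates, eps=0.5):
    """Group client updates by pairwise cosine distance so that clients
    behaving alike (honest or malicious) land in the same cluster; the
    aggregator can then treat each cluster separately instead of
    letting a malicious majority dominate one global aggregate."""
    X = np.stack(updates)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    dist = np.clip(1.0 - X @ X.T, 0.0, 2.0)   # cosine distance in [0, 2]
    return DBSCAN(eps=eps, min_samples=2, metric="precomputed").fit_predict(dist)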
arXiv Detail & Related papers (2022-08-22T09:17:58Z) - FedCL: Federated Contrastive Learning for Privacy-Preserving Recommendation [98.5705258907774]
FedCL can exploit high-quality negative samples for effective model training while keeping privacy well protected.
We first infer user embeddings from local user data through the local model on each client, and then perturb them with local differential privacy (LDP).
Since each individual user embedding contains heavy noise due to LDP, we propose to cluster user embeddings on the server to mitigate the influence of noise.
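The perturb-then-cluster step reads roughly as below; Gaussian noise after norm clipping and KMeans stand in for the paper's exact LDP mechanism and clustering choice.

```python
import numpy as np
from sklearn.cluster import KMeans

def perturb_and_cluster(user_embeddings, clip, sigma, n_clusters, seed=0):
    """Client side: clip each locally-inferred user embedding and add
    Gaussian noise (an LDP-style perturbation) before upload. Server
    side: cluster the noisy embeddings, so that centroids, averaging
    many users, wash out much of the per-user noise."""
    rng = np.random.default_rng(seed)
    E = np.stack(user_embeddings)
    E = E * np.minimum(1.0, clip / np.linalg.norm(E, axis=1, keepdims=True))
    noisy = E + rng.normal(0.0, sigma, E.shape)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(noisy)
    return km.cluster_centers_, km.labels_
```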
arXiv Detail & Related papers (2022-04-21T02:37:10Z) - OLIVE: Oblivious Federated Learning on Trusted Execution Environment against the risk of sparsification [22.579050671255846]
This study focuses on analyzing the vulnerabilities of server-side TEEs in Federated Learning and on defending against them.
First, we theoretically analyze the leakage of memory access patterns, revealing the risk of sparsified gradients.
Second, we devise an inference attack to link memory access patterns to sensitive information in the training dataset.
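Why sparsification leaks through memory access patterns: even if gradient values are encrypted inside the enclave, an observer of which memory locations are touched learns the index set of a top-k-sparsified gradient. The linkage step below is a deliberately simplified illustration, not the paper's attack.

```python
import numpy as np

def observed_index_set(sparse_grad):
    """What an access-pattern observer learns from a sparsified
    gradient processed inside the TEE: which coordinates were touched,
    even though the values themselves stay protected."""
    return set(np.flatnonzero(sparse_grad))

def link_indices_to_classes(touched, n_features):
    """Simplified linkage: if the flattened final-layer gradient holds
    one row of n_features weights per class, the touched rows indicate
    which classes appeared in the training batch."""
    return {i // n_features for i in touched}
```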
arXiv Detail & Related papers (2022-02-15T03:23:57Z) - Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, the density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution.
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
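The density ratio in question can be estimated with the standard probabilistic-classifier trick, sketched below under the assumption of a simple logistic model; the paper's estimator, and how the ratio then feeds into calibration, may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def source_density_ratio(source_X, target_X, eval_X):
    """Estimate p_source(x) / p_target(x) by training a classifier to
    tell source samples (label 1) from target samples (label 0); by
    Bayes' rule the classifier's odds, corrected for class priors,
    equal the density ratio. A high ratio means the sample looks like
    training data, i.e. it measures closeness to the source
    distribution."""
    X = np.vstack([source_X, target_X])
    y = np.concatenate([np.ones(len(source_X)), np.zeros(len(target_X))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p = clf.predict_proba(eval_X)[:, 1]
    prior_odds = len(source_X) / len(target_X)
    return (p / (1.0 - p)) / prior_odds
```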
arXiv Detail & Related papers (2020-10-08T02:10:54Z) - Certifiably Adversarially Robust Detection of Out-of-Distribution Data [111.67388500330273]
We aim for certifiable worst-case guarantees for OOD detection by enforcing low confidence at the OOD point.
We show that non-trivial bounds on the confidence for OOD data generalizing beyond the OOD dataset seen at training time are possible.
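A minimal, uncertified version of "enforcing low confidence at the OOD point" is an outlier-exposure-style loss, sketched below in PyTorch; the paper's contribution is certifying this behaviour in a worst-case neighbourhood of each OOD sample, which this sketch does not do.

```python
import torch.nn.functional as F

def low_ood_confidence_loss(logits_in, labels_in, logits_ood, lam=0.5):
    """Standard cross-entropy on in-distribution data plus a term that
    pushes the predictive distribution on OOD samples towards uniform,
    i.e. towards minimal confidence. The mean of -log_softmax over the
    class dimension is exactly the cross-entropy to the uniform
    distribution, minimized when the prediction is uniform."""
    ce_in = F.cross_entropy(logits_in, labels_in)
    ce_uniform = (-F.log_softmax(logits_ood, dim=1)).mean()
    return ce_in + lam * ce_uniform
```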
arXiv Detail & Related papers (2020-07-16T17:16:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.