Private and Robust Contribution Evaluation in Federated Learning
- URL: http://arxiv.org/abs/2602.21721v1
- Date: Wed, 25 Feb 2026 09:27:40 GMT
- Title: Private and Robust Contribution Evaluation in Federated Learning
- Authors: Delio Jaramillo Velez, Gergely Biczok, Alexandre Graell i Amat, Johan Ostman, Balazs Pejo
- Abstract summary: Cross-silo federated learning allows multiple organizations to collaboratively train machine learning models without sharing raw data. Secure aggregation protects privacy by hiding individual updates, yet it complicates contribution evaluation. We introduce two marginal-difference contribution scores compatible with secure aggregation.
- Score: 44.1965210509968
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cross-silo federated learning allows multiple organizations to collaboratively train machine learning models without sharing raw data, but client updates can still leak sensitive information through inference attacks. Secure aggregation protects privacy by hiding individual updates, yet it complicates contribution evaluation, which is critical for fair rewards and detecting low-quality or malicious participants. Existing marginal-contribution methods, such as the Shapley value, are incompatible with secure aggregation, and practical alternatives, such as Leave-One-Out, are crude and rely on self-evaluation. We introduce two marginal-difference contribution scores compatible with secure aggregation. Fair-Private satisfies standard fairness axioms, while Everybody-Else eliminates self-evaluation and provides resistance to manipulation, addressing a largely overlooked vulnerability. We provide theoretical guarantees for fairness, privacy, robustness, and computational efficiency, and evaluate our methods on multiple medical image datasets and CIFAR10 in cross-silo settings. Our scores consistently outperform existing baselines, better approximate Shapley-induced client rankings, and improve downstream model performance as well as misbehavior detection. These results demonstrate that fairness, privacy, robustness, and practical utility can be achieved jointly in federated contribution evaluation, offering a principled solution for real-world cross-silo deployments.
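The abstract contrasts Leave-One-Out with Shapley-value contribution evaluation. As background, a minimal sketch of these two standard scores (not the paper's Fair-Private or Everybody-Else methods) is shown below; `utility` is a hypothetical stand-in for evaluating a model trained on a coalition of clients, whereas a real FL utility would require actual training rounds:

```python
# Hedged sketch: Leave-One-Out vs. exact Shapley contribution scores.
# Names (`leave_one_out`, `shapley`, `utility`) are illustrative, not from the paper.
from itertools import combinations
from math import comb

def leave_one_out(clients, utility):
    """LOO score: utility drop when client i leaves the grand coalition."""
    full = utility(frozenset(clients))
    return {i: full - utility(frozenset(clients) - {i}) for i in clients}

def shapley(clients, utility):
    """Exact Shapley value: weighted average of marginal contributions over
    all coalitions. Exponential in the number of clients, hence only
    feasible for small cross-silo settings."""
    n = len(clients)
    scores = {}
    for i in clients:
        others = [c for c in clients if c != i]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                weight = 1.0 / (n * comb(n - 1, k))  # |S|!(n-1-|S|)!/n!
                total += weight * (utility(s | {i}) - utility(s))
        scores[i] = total
    return scores

# Toy additive utility: each client's data adds a fixed accuracy gain.
value = {"A": 0.3, "B": 0.2, "C": 0.1}
u = lambda s: sum(value[c] for c in s)
print(leave_one_out(["A", "B", "C"], u))
print(shapley(["A", "B", "C"], u))
```

For this additive toy utility the two scores coincide; the paper's point is that in general they differ, and that neither can be computed directly once secure aggregation hides individual updates.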
Related papers
- Beyond performance-wise Contribution Evaluation in Federated Learning [0.0]
Federated learning offers a privacy-friendly collaborative learning framework. Its success hinges on the contributions of its participants. This work investigates the issue of client contributions towards a model's trustworthiness.
arXiv Detail & Related papers (2026-02-25T23:10:13Z) - GuardFed: A Trustworthy Federated Learning Framework Against Dual-Facet Attacks [56.983319121358555]
Federated learning (FL) enables privacy-preserving collaborative model training but remains vulnerable to adversarial behaviors. We introduce the Dual-Facet Attack (DFA), a novel threat model that concurrently undermines predictive accuracy and group fairness. We propose GuardFed, a self-adaptive defense framework that maintains a fairness-aware reference model using a small amount of clean server data.
arXiv Detail & Related papers (2025-11-12T13:02:45Z) - Enhancing Federated Survival Analysis through Peer-Driven Client Reputation in Healthcare [1.2289361708127877]
Federated Learning holds great promise for digital health by enabling collaborative model training without compromising patient data privacy. We propose a peer-driven reputation mechanism for federated healthcare that integrates decentralized peer feedback with clustering-based noise handling. Applying differential privacy to client-side model updates ensures sensitive information remains protected during reputation computation.
arXiv Detail & Related papers (2025-05-22T03:49:51Z) - Empirical Analysis of Privacy-Fairness-Accuracy Trade-offs in Federated Learning: A Step Towards Responsible AI [6.671649946926508]
We present the first unified large-scale empirical study of privacy-fairness-utility trade-offs in Federated Learning (FL). We compare fairness-aware methods with Differential Privacy (DP), Homomorphic Encryption (HE), and Secure Multi-Party Computation (SMC). We uncover unexpected interactions: DP mechanisms can negatively impact fairness, and fairness-aware methods can inadvertently reduce privacy effectiveness.
arXiv Detail & Related papers (2025-03-20T15:31:01Z) - Do Fairness Interventions Come at the Cost of Privacy: Evaluations for Binary Classifiers [17.243744418309593]
We assess the privacy risks of fairness-enhanced binary classifiers via membership inference attacks (MIAs) and attribute inference attacks (AIAs). We uncover a potential threat mechanism that exploits prediction discrepancies between fair and biased models, leading to advanced attack results for both MIAs and AIAs. Our study exposes the under-explored privacy threats in fairness studies, advocating for thorough evaluations of potential security vulnerabilities before model deployments.
arXiv Detail & Related papers (2025-03-08T10:21:21Z) - Towards Effective Evaluations and Comparisons for LLM Unlearning Methods [97.2995389188179]
This paper seeks to refine the evaluation of machine unlearning for large language models. It addresses two key challenges: the robustness of evaluation metrics and the trade-offs between competing goals.
arXiv Detail & Related papers (2024-06-13T14:41:00Z) - Enabling Privacy-preserving Model Evaluation in Federated Learning via Fully Homomorphic Encryption [1.9662978733004604]
Federated learning has become increasingly widespread due to its ability to train models collaboratively without centralizing sensitive data. The evaluation phase presents significant privacy risks that have not been adequately addressed in the literature. We propose a novel evaluation method that leverages fully homomorphic encryption.
arXiv Detail & Related papers (2024-03-21T14:36:55Z) - Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z) - Incentivized Communication for Federated Bandits [67.4682056391551]
We introduce an incentivized communication problem for federated bandits, where the server shall motivate clients to share data by providing incentives.
We propose the first incentivized communication protocol, namely, Inc-FedUCB, that achieves near-optimal regret with provable communication and incentive cost guarantees.
arXiv Detail & Related papers (2023-09-21T00:59:20Z) - Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
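Several of the papers above attack or extend secure aggregation, under which the server learns only the sum of client updates. A minimal sketch of the standard pairwise-masking idea (in the spirit of Bonawitz et al.; function and variable names are illustrative, and real protocols use shared secrets and dropout recovery rather than locally generated masks):

```python
# Hedged sketch of pairwise-mask secure aggregation on toy scalar updates.
# Each pair (i, j) with i < j shares a random mask; client i adds it and
# client j subtracts it, so all masks cancel in the aggregate.
import random

MODULUS = 2**31  # arithmetic is done modulo a fixed ring size

def mask_updates(updates):
    """Return per-client masked updates whose sum (mod MODULUS) equals the
    sum of the plaintext updates, while each masked value looks random."""
    n = len(updates)
    masked = [u % MODULUS for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = random.randrange(MODULUS)  # pairwise shared secret in a real protocol
            masked[i] = (masked[i] + m) % MODULUS
            masked[j] = (masked[j] - m) % MODULUS
    return masked

updates = [5, 11, 7]  # toy scalar "model updates"
masked = mask_updates(updates)
print(sum(masked) % MODULUS == sum(updates) % MODULUS)  # aggregate preserved
```

The server sees only `masked`, not `updates`, which is precisely why per-client contribution scores (and, as the property-inference paper above shows, some leakage) must be derived from the aggregate alone.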
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.