Detect & Score: Privacy-Preserving Misbehaviour Detection and Contribution Evaluation in Federated Learning
- URL: http://arxiv.org/abs/2506.23583v1
- Date: Mon, 30 Jun 2025 07:40:18 GMT
- Title: Detect & Score: Privacy-Preserving Misbehaviour Detection and Contribution Evaluation in Federated Learning
- Authors: Marvin Xhemrishi, Alexandre Graell i Amat, Balázs Pejó
- Abstract summary: Federated learning with secure aggregation enables private and collaborative learning from decentralised data without leaking sensitive client information. QI and FedGT were proposed for contribution evaluation (CE) and misbehaviour detection (MD), respectively. We combine the strengths of QI and FedGT to achieve both robust MD and accurate CE.
- Score: 57.35282510032077
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated learning with secure aggregation enables private and collaborative learning from decentralised data without leaking sensitive client information. However, secure aggregation also complicates the detection of malicious client behaviour and the evaluation of individual client contributions to the learning. To address these challenges, QI (Pejó et al.) and FedGT (Xhemrishi et al.) were proposed for contribution evaluation (CE) and misbehaviour detection (MD), respectively. QI, however, lacks adequate MD accuracy due to its reliance on the random selection of clients in each training round, while FedGT lacks the CE ability. In this work, we combine the strengths of QI and FedGT to achieve both robust MD and accurate CE. Our experiments demonstrate superior performance compared to using either method independently.
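A rough feel for how the two pieces could interlock (this is not the paper's actual protocol, only a toy sketch): clients are assigned to overlapping test groups as in group testing, the server scores each securely aggregated group model on held-out data, and those group scores are decoded both into suspects (FedGT-style) and into per-client contribution estimates (QI-style). All matrices, thresholds, and names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, n_groups = 10, 6

# Binary group-testing assignment: A[g, c] = 1 if client c is in group g.
# FedGT-style designs come from structured codes; a random design stands in.
A = (rng.random((n_groups, n_clients)) < 0.5).astype(int)

# Hypothetical per-group scores the server could measure on a held-out
# validation set after securely aggregating each group's updates.
group_quality = rng.random(n_groups)               # e.g., validation accuracy
group_failed = (group_quality < 0.3).astype(int)   # MD test outcome

# Misbehaviour decoding (COMP-style group testing): a client is cleared if it
# belongs to at least one passing group; otherwise it remains a suspect.
cleared = (A * (1 - group_failed)[:, None]).sum(axis=0) > 0
suspects = np.where(~cleared)[0]

# Contribution evaluation (QI-style): credit each client with the average
# quality of the groups it participated in, centred on the overall mean.
participation = A.sum(axis=0)
contrib = (A.T @ group_quality) / np.maximum(participation, 1)
contrib -= group_quality.mean()

print("suspect clients:", suspects)
print("contribution scores:", np.round(contrib, 3))
```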
Related papers
- FedVCK: Non-IID Robust and Communication-Efficient Federated Learning via Valuable Condensed Knowledge for Medical Image Analysis [27.843757290938925]
We propose a novel federated learning method: Federated learning via Valuable Condensed Knowledge (FedVCK). We enhance the quality of condensed knowledge and select the most necessary knowledge guided by models, to tackle the non-IID problem effectively within limited communication budgets.
arXiv Detail & Related papers (2024-12-24T17:20:43Z)
- Provably Unlearnable Data Examples [27.24152626809928]
Efforts have been undertaken to render shared data unlearnable for unauthorized models in the wild.
We propose a mechanism for certifying the so-called $(q, \eta)$-Learnability of an unlearnable dataset.
A lower certified $(q, \eta)$-Learnability indicates a more robust and effective protection over the dataset.
arXiv Detail & Related papers (2024-05-06T09:48:47Z)
- TrustFed: A Reliable Federated Learning Framework with Malicious-Attack Resistance [8.924352407824566]
Federated learning (FL) enables collaborative learning among multiple clients while ensuring individual data privacy.
In this paper, we propose a hierarchical audit-based FL (HiAudit-FL) framework to enhance the reliability and security of the learning process.
Our simulation results demonstrate that HiAudit-FL can accurately identify and handle potential malicious users with small system overhead.
arXiv Detail & Related papers (2023-12-06T13:56:45Z)
- Conservative Prediction via Data-Driven Confidence Minimization [70.93946578046003]
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes confidence on an uncertainty dataset.
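The stated mechanism, minimising confidence on an auxiliary uncertainty dataset, admits a compact sketch. The function `dcm_loss`, the mixing weight `lam`, and the toy model below are our own illustrative choices, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def dcm_loss(model, x_train, y_train, x_unc, lam=0.5):
    """Cross-entropy on labelled data plus a confidence penalty on an
    auxiliary 'uncertainty' dataset."""
    ce = F.cross_entropy(model(x_train), y_train)
    log_probs = F.log_softmax(model(x_unc), dim=1)
    # Mean of -log p over classes and examples = cross-entropy against the
    # uniform distribution: minimising it pushes uncertain inputs toward
    # maximum-entropy (low-confidence) predictions.
    confidence_penalty = -log_probs.mean()
    return ce + lam * confidence_penalty

# Toy usage with random data and a small classifier.
model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 3))
x_tr, y_tr = torch.randn(32, 8), torch.randint(0, 3, (32,))
x_unc = torch.randn(16, 8)
loss = dcm_loss(model, x_tr, y_tr, x_unc)
loss.backward()
```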
arXiv Detail & Related papers (2023-06-08T07:05:36Z)
- Faithful Knowledge Distillation [75.59907631395849]
We focus on two crucial questions with regard to a teacher-student pair: (i) do the teacher and student disagree at points close to correctly classified dataset examples, and (ii) is the distilled student as confident as the teacher around dataset examples?
These are critical questions when considering the deployment of a smaller student network trained from a robust teacher within a safety-critical setting.
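Both questions can be probed empirically. The sketch below is our own construction (the function `local_faithfulness` and the `eps`, `n_samples` parameters are hypothetical): it samples points near an example and measures argmax agreement and the confidence gap between teacher and student.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def local_faithfulness(teacher, student, x, eps=0.1, n_samples=64):
    """Probe an L-inf ball of radius eps around a single example x (shape (d,)):
    return (i) argmax agreement rate, (ii) mean teacher-student confidence gap."""
    xs = x + torch.empty(n_samples, x.numel()).uniform_(-eps, eps)
    pt = F.softmax(teacher(xs), dim=1)
    ps = F.softmax(student(xs), dim=1)
    agree = (pt.argmax(1) == ps.argmax(1)).float().mean().item()
    conf_gap = (pt.max(1).values - ps.max(1).values).abs().mean().item()
    return agree, conf_gap

# Toy usage with linear stand-ins for a teacher-student pair.
teacher, student = torch.nn.Linear(8, 3), torch.nn.Linear(8, 3)
print(local_faithfulness(teacher, student, torch.randn(8)))
```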
arXiv Detail & Related papers (2023-06-07T13:41:55Z)
- FedGT: Identification of Malicious Clients in Federated Learning with Secure Aggregation [69.75513501757628]
FedGT is a novel framework for identifying malicious clients in federated learning with secure aggregation.
We show that FedGT significantly outperforms the private robust aggregation approach based on the geometric median recently proposed by Pillutla et al.
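For context, the geometric-median baseline of Pillutla et al. is commonly computed with the (smoothed) Weiszfeld iteration; a minimal version, with illustrative tolerance and iteration settings:

```python
import numpy as np

def geometric_median(updates, n_iter=100, tol=1e-7, eps=1e-12):
    """Smoothed Weiszfeld iteration for the geometric median of client
    updates (one update per row): a robust alternative to the mean."""
    z = updates.mean(axis=0)
    for _ in range(n_iter):
        dists = np.linalg.norm(updates - z, axis=1)
        w = 1.0 / np.maximum(dists, eps)       # smoothing avoids divide-by-zero
        z_new = (w[:, None] * updates).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z

# Nine honest updates near the all-ones vector; one malicious outlier.
updates = np.vstack([np.ones((9, 5)) + 0.01 * np.random.randn(9, 5),
                     10 * np.ones((1, 5))])
print(geometric_median(updates))  # stays close to the honest cluster
```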
arXiv Detail & Related papers (2023-05-09T14:54:59Z)
- Federated Uncertainty-Aware Aggregation for Fundus Diabetic Retinopathy Staging [42.883182872565044]
We propose a novel federated uncertainty-aware aggregation paradigm (FedUAA) for training diabetic retinopathy (DR) staging models.
FedUAA considers the reliability of each client and produces a confidence estimation for the DR staging.
Our experimental results demonstrate that our FedUAA achieves better DR staging performance with higher reliability compared to other federated learning methods.
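The summary does not spell out the aggregation rule, so the following is only a hypothetical reliability weighting: client updates are combined with softmax weights over negative per-client uncertainty, so that more reliable (lower-uncertainty) clients count more.

```python
import numpy as np

def uncertainty_aware_aggregate(updates, uncertainties, temp=1.0):
    """Hypothetical reliability weighting: softmax over negative client
    uncertainty, so low-uncertainty clients receive larger weights."""
    w = np.exp(-np.asarray(uncertainties) / temp)
    w /= w.sum()
    return (w[:, None] * updates).sum(axis=0), w

updates = np.random.randn(4, 6)            # one model update per client
uncertainties = [0.2, 0.9, 0.3, 1.5]       # e.g., per-client predictive entropy
agg, weights = uncertainty_aware_aggregate(updates, uncertainties)
print(np.round(weights, 3))                # clients 0 and 2 dominate
```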
arXiv Detail & Related papers (2023-03-23T04:41:44Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
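A minimal sketch of the privacy-preserving measurement step, assuming Paillier additive homomorphic encryption via the `phe` library and a simple distance-from-uniform imbalance measure; Fed-CBS's actual measure and protocol may differ.

```python
# pip install phe  (python-paillier: additively homomorphic encryption)
import numpy as np
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Each candidate client encrypts its per-class label counts locally.
client_counts = [[50, 3, 0], [2, 40, 5], [0, 4, 45]]  # toy: 3 clients, 3 classes
enc_counts = [[public_key.encrypt(c) for c in row] for row in client_counts]

# The server adds ciphertexts class-wise; it never sees individual counts.
n_classes = 3
enc_totals = [sum((row[k] for row in enc_counts), public_key.encrypt(0))
              for k in range(n_classes)]

# A designated key holder decrypts only the grouped histogram.
totals = np.array([private_key.decrypt(e) for e in enc_totals], dtype=float)
p = totals / totals.sum()
imbalance = ((p - 1.0 / n_classes) ** 2).sum()  # squared distance from uniform
print(np.round(p, 3), round(imbalance, 4))
```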
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- Bayesian autoencoders with uncertainty quantification: Towards trustworthy anomaly detection [78.24964622317634]
In this work, the formulation of Bayesian autoencoders (BAEs) is adopted to quantify the total anomaly uncertainty.
To evaluate the quality of uncertainty, we consider the task of classifying anomalies with the additional option of rejecting predictions of high uncertainty.
Our experiments demonstrate the effectiveness of the BAE and total anomaly uncertainty on a set of benchmark datasets and two real datasets for manufacturing.
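One concrete reading, using a deep ensemble of autoencoders as a stand-in for the Bayesian posterior over AE weights: the ensemble-mean reconstruction error serves as the anomaly score, its across-member variance as the uncertainty, and the most uncertain predictions are rejected. The architecture and rejection quantile below are illustrative.

```python
import torch

def make_ae(d, h=4):
    # Tiny untrained autoencoder; a stand-in for one posterior sample.
    return torch.nn.Sequential(torch.nn.Linear(d, h), torch.nn.ReLU(),
                               torch.nn.Linear(h, d))

@torch.no_grad()
def score_with_rejection(ensemble, x, reject_q=0.9):
    """Anomaly score = mean reconstruction error over the ensemble;
    uncertainty = its variance across members; reject the most uncertain."""
    errs = torch.stack([((ae(x) - x) ** 2).mean(dim=1) for ae in ensemble])
    score, unc = errs.mean(dim=0), errs.var(dim=0)
    reject = unc > torch.quantile(unc, reject_q)
    return score, unc, reject

ensemble = [make_ae(8) for _ in range(5)]
x = torch.randn(100, 8)
score, unc, reject = score_with_rejection(ensemble, x)
print(int(reject.sum()), "of", len(x), "predictions rejected")
```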
arXiv Detail & Related papers (2022-02-25T12:20:04Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale publicly available skin lesion dataset.
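The paper's discrimination and critical modules are not detailed in this summary; the sketch below shows only the generic adversarial ingredient, gradient reversal, that lets a discriminator for a protected attribute push the encoder toward bias-invariant features. All module names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, g):
        return -g  # reversed gradients make features bias-uninformative

d, n = 16, 64
encoder = torch.nn.Linear(d, 8)
classifier = torch.nn.Linear(8, 2)       # base task (e.g., lesion class)
discriminator = torch.nn.Linear(8, 2)    # predicts the protected attribute

x = torch.randn(n, d)
y = torch.randint(0, 2, (n,))            # task labels
a = torch.randint(0, 2, (n,))            # protected attribute (e.g., skin tone)

z = encoder(x)
task_loss = F.cross_entropy(classifier(z), y)
bias_loss = F.cross_entropy(discriminator(GradReverse.apply(z)), a)
(task_loss + bias_loss).backward()       # encoder is pushed to hide `a`
```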
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Quality Inference in Federated Learning with Secure Aggregation [0.7614628596146599]
We show that quality information could be inferred and attributed to specific participants even when secure aggregation is applied.
We apply the inferred quality information to detect misbehaviours, to stabilize training performance, and to measure the individual contributions of participants.
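A toy version of the inclusion/exclusion idea (QI's actual statistics are richer): over many rounds with random participation, credit each round's participants with the sign of the measured change in global-model quality and debit the absentees; persistently harmful clients accumulate negative scores.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clients, n_rounds = 8, 200
true_bad = {2, 5}  # toy ground truth: clients whose updates hurt the model

scores = np.zeros(n_clients)
for _ in range(n_rounds):
    part = rng.random(n_clients) < 0.5           # random per-round selection
    # Stand-in for the measured round-over-round validation improvement of
    # the securely aggregated model (positive = the global model improved).
    delta = part.sum() - 3 * sum(part[c] for c in true_bad) + rng.normal()
    # QI-style attribution: credit participants with the sign of the change,
    # debit the absentees; repeated over rounds this separates the clients.
    scores += np.where(part, np.sign(delta), -np.sign(delta))

print(np.round(scores / n_rounds, 2))  # misbehaving clients trend negative
```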
arXiv Detail & Related papers (2020-07-13T08:36:04Z)