Quality Inference in Federated Learning with Secure Aggregation
- URL: http://arxiv.org/abs/2007.06236v4
- Date: Thu, 25 May 2023 12:57:35 GMT
- Title: Quality Inference in Federated Learning with Secure Aggregation
- Authors: Balázs Pejó and Gergely Biczók
- Abstract summary: We show that quality information could be inferred and attributed to specific participants even when secure aggregation is applied.
We apply the inferred quality information to detect misbehaviours, to stabilize training performance, and to measure the individual contributions of participants.
- Score: 0.7614628596146599
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning algorithms are developed both for efficiency and to
ensure the privacy of personal data and the confidentiality of business data.
Although no data is shared explicitly, recent studies have shown that the
mechanism can still leak sensitive information. Hence, secure
aggregation is utilized in many real-world scenarios to prevent attribution to
specific participants. In this paper, we focus on the quality of individual
training datasets and show that such quality information could be inferred and
attributed to specific participants even when secure aggregation is applied.
Specifically, through a series of image recognition experiments, we infer the
relative quality ordering of participants. Moreover, we apply the inferred
quality information to detect misbehaviours, to stabilize training performance,
and to measure the individual contributions of participants.
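The inference exploits information the server still sees under secure aggregation: which participants were selected in each round and how the aggregate model's validation loss changed. Below is a minimal sketch of one plausible scoring scheme along these lines; the participation history, loss deltas, and scoring rule are illustrative assumptions, not the paper's exact method.

```python
from collections import defaultdict

def score_participants(rounds):
    """Rank clients by how much the validation loss improves when they
    participate versus when they do not. Secure aggregation hides the
    individual updates, but the selection sets and per-round loss deltas
    remain visible to the server (heuristic sketch)."""
    present, absent = defaultdict(list), defaultdict(list)
    clients = set().union(*(selected for selected, _ in rounds))
    for selected, loss_delta in rounds:
        for c in clients:
            (present if c in selected else absent)[c].append(loss_delta)
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    # More negative score => loss drops more when this client is present.
    return sorted(clients, key=lambda c: avg(present[c]) - avg(absent[c]))

# Toy history: client 2 holds low-quality data, so rounds that include it
# improve the loss less (all numbers hypothetical).
history = [({0, 1}, -0.30), ({1, 2}, -0.05), ({0, 2}, -0.08), ({0, 1}, -0.28)]
print(score_participants(history))  # best-to-worst ordering, e.g. [0, 1, 2]
```

Such a relative ordering is exactly what the downstream applications need: misbehaving clients sink to the bottom, and the scores double as rough contribution measures.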
Related papers
- Detect & Score: Privacy-Preserving Misbehaviour Detection and Contribution Evaluation in Federated Learning [57.35282510032077]
Federated learning with secure aggregation enables private and collaborative learning from decentralised data without leaking sensitive client information. QI and FedGT were proposed for contribution evaluation (CE) and misbehaviour detection (MD), respectively. We combine the strengths of QI and FedGT to achieve both robust MD and accurate CE.
arXiv Detail & Related papers (2025-06-30T07:40:18Z)
- Concurrent vertical and horizontal federated learning with fuzzy cognitive maps [1.104960878651584]
This research introduces a novel federated learning framework employing fuzzy cognitive maps.
It is designed to comprehensively address the challenges posed by diverse data distributions and non-identically distributed features.
The results demonstrate the effectiveness of the approach in achieving the desired learning outcomes while maintaining privacy and confidentiality standards.
arXiv Detail & Related papers (2024-12-17T12:11:14Z)
- Footprints of Data in a Classifier: Understanding the Privacy Risks and Solution Strategies [0.9208007322096533]
Article 17 of the General Data Protection Regulation (Right to Erasure) requires data to be permanently removed from a system to prevent potential compromise.
One such risk arises from the residual footprints of training data embedded within predictive models.
This study examines how two fundamental aspects of classifier systems - training quality and classifier training methodology - contribute to privacy vulnerabilities.
arXiv Detail & Related papers (2024-07-02T13:56:37Z)
- Shuffled Differentially Private Federated Learning for Time Series Data Analytics [10.198481976376717]
We develop a privacy-preserving federated learning algorithm for time series data.
Specifically, we employ local differential privacy to extend the privacy protection trust boundary to the clients.
We also incorporate shuffle techniques to achieve a privacy amplification, mitigating the accuracy decline caused by leveraging local differential privacy.
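A minimal sketch of the clip-noise-shuffle pipeline this summary describes, for a single scalar per client; the clipping bound, privacy budget, and trusted shuffler are assumptions here, while the paper's protocol covers full time-series models.

```python
import random

def local_dp_report(value, clip=1.0, epsilon=1.0):
    """Client side: clip the update, then add Laplace noise so each
    report satisfies epsilon-local-DP on its own."""
    clipped = max(-clip, min(clip, value))
    scale = 2 * clip / epsilon  # L1 sensitivity of a value clipped to [-clip, clip]
    # Laplace(0, scale) sampled as the difference of two exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return clipped + noise

def shuffle_then_aggregate(reports):
    """Shuffler: a random permutation severs the client-to-report link,
    which amplifies the local guarantee; the server merely averages."""
    random.shuffle(reports)
    return sum(reports) / len(reports)

client_updates = [0.3, -0.2, 0.8, 0.1]  # hypothetical scalar updates
noisy_reports = [local_dp_report(v, epsilon=2.0) for v in client_updates]
print(shuffle_then_aggregate(noisy_reports))
```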
arXiv Detail & Related papers (2023-07-30T10:30:38Z)
- Incentivising the federation: gradient-based metrics for data selection and valuation in private decentralised training [15.233103072063951]
We investigate how to leverage gradient information to permit the participants of private training settings to select the data most beneficial for the jointly trained model.
We show that these techniques can provide the federated clients with tools for principled data selection even in stricter privacy settings.
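One way to realise such gradient-based selection is to score each local example by how well its gradient aligns with the global update direction; the sketch below assumes exactly that, while the paper's actual metrics and privacy handling differ.

```python
import numpy as np

def select_by_gradient_alignment(example_grads, global_update, k):
    """Rank local examples by cosine similarity between their gradient
    and the released global update direction, keeping the top-k
    (a heuristic sketch of gradient-based data valuation)."""
    g = global_update / (np.linalg.norm(global_update) + 1e-12)
    scores = example_grads @ g / (np.linalg.norm(example_grads, axis=1) + 1e-12)
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(0)
grads = rng.normal(size=(100, 10))   # hypothetical per-example gradients
direction = rng.normal(size=10)      # hypothetical global update direction
print(select_by_gradient_alignment(grads, direction, k=5))
```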
arXiv Detail & Related papers (2023-05-04T15:44:56Z)
- "You Can't Fix What You Can't Measure": Privately Measuring Demographic Performance Disparities in Federated Learning [78.70083858195906]
We propose differentially private mechanisms to measure differences in performance across groups while protecting the privacy of group membership.
Our results show that, contrary to what prior work suggested, protecting privacy is not necessarily in conflict with identifying performance disparities of federated models.
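A minimal sketch of one such mechanism: perturb per-group counts with Laplace noise before computing accuracies, so the released disparity does not expose any individual's group membership. The budget split and clipping are illustrative choices, not the paper's exact mechanisms.

```python
import numpy as np

def private_group_accuracy(correct, total, epsilon):
    """Add Laplace noise to per-group counts before computing accuracy.
    Each count has sensitivity 1, and the budget is split across the
    two count queries (a sketch; the paper's analysis is more involved)."""
    rng = np.random.default_rng()
    noisy_correct = correct + rng.laplace(0.0, 2.0 / epsilon, size=len(correct))
    noisy_total = total + rng.laplace(0.0, 2.0 / epsilon, size=len(total))
    return np.clip(noisy_correct / np.maximum(noisy_total, 1.0), 0.0, 1.0)

# Hypothetical counts for two demographic groups.
acc = private_group_accuracy(np.array([480.0, 350.0]),
                             np.array([500.0, 500.0]), epsilon=1.0)
print("disparity estimate:", abs(acc[0] - acc[1]))
```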
arXiv Detail & Related papers (2022-06-24T09:46:43Z)
- Partial sensitivity analysis in differential privacy [58.730520380312676]
We investigate the impact of each input feature on the individual's privacy loss.
We experimentally evaluate our approach on queries over private databases.
We also explore our findings in the context of neural network training on synthetic data.
arXiv Detail & Related papers (2021-09-22T08:29:16Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
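Quantification estimates group prevalences rather than individual labels; the adjusted classify-and-count estimator below is the standard instance of this idea (a sketch only; the paper evaluates several quantification methods).

```python
def adjusted_classify_and_count(observed_rate, tpr, fpr):
    """Correct a group classifier's raw positive-prediction rate by its
    known TPR/FPR to estimate the true prevalence of an unobserved
    sensitive group: p = (observed - FPR) / (TPR - FPR)."""
    return (observed_rate - fpr) / max(tpr - fpr, 1e-12)

# A group classifier with TPR 0.85 and FPR 0.10 labels 34% of the data
# as group members (hypothetical figures); the corrected prevalence:
print(adjusted_classify_and_count(0.34, tpr=0.85, fpr=0.10))  # -> 0.32
```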
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Privacy-Preserving Federated Learning on Partitioned Attributes [6.661716208346423]
Federated learning empowers collaborative training without exposing local data or models.
We introduce an adversarial learning based procedure which tunes a local model to release privacy-preserving intermediate representations.
To alleviate the accuracy decline, we propose a defense method based on the forward-backward splitting algorithm.
arXiv Detail & Related papers (2021-04-29T14:49:14Z)
- On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
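Randomized response is the canonical building block for label DP; here is a sketch of the K-ary mechanism on its own. The paper's training algorithm builds on such noisy labels but is considerably more sophisticated.

```python
import math
import random

def randomized_response_label(label, num_classes, epsilon):
    """Report the true label with probability e^eps / (e^eps + K - 1),
    otherwise a uniformly random other class; this satisfies
    epsilon label-DP for K-class labels."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + num_classes - 1)
    if random.random() < p_true:
        return label
    other = random.randrange(num_classes - 1)
    return other if other < label else other + 1  # skip the true label

# Hypothetical labels for a 10-class task.
noisy_labels = [randomized_response_label(y, num_classes=10, epsilon=4.0)
                for y in [3, 3, 7, 1]]
print(noisy_labels)
```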
arXiv Detail & Related papers (2021-02-11T15:09:06Z)
- Reliability Check via Weight Similarity in Privacy-Preserving Multi-Party Machine Learning [7.552100672006174]
We focus on addressing the concerns of data privacy, model privacy, and data quality associated with multi-party machine learning.
We present a scheme for privacy-preserving collaborative learning that checks the participants' data quality while guaranteeing data and model privacy.
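A plaintext sketch of the weight-similarity idea: compare each participant's update direction with everyone else's and flag outliers. The paper performs this check under cryptographic protection, which is omitted here.

```python
import numpy as np

def flag_unreliable(weight_updates, threshold=0.0):
    """Flag participants whose update direction disagrees with the rest:
    compute each client's mean cosine similarity to the other updates
    and mark those below a threshold (heuristic sketch)."""
    W = np.asarray(weight_updates, dtype=float)
    W = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    sims = W @ W.T
    np.fill_diagonal(sims, 0.0)
    mean_sim = sims.sum(axis=1) / (len(W) - 1)
    return [i for i, s in enumerate(mean_sim) if s < threshold]

rng = np.random.default_rng(1)
honest = rng.normal(0.5, 0.1, size=(4, 8))  # similar honest updates
poisoned = -honest[0:1]                     # one flipped, low-quality update
print(flag_unreliable(np.vstack([honest, poisoned])))  # expect [4]
```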
arXiv Detail & Related papers (2021-01-14T08:55:42Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in the training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
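The mechanics of the Lagrangian dual approach show up already on a toy constrained problem: alternate gradient descent on the Lagrangian with dual ascent on the multiplier. This is a schematic sketch only; the paper applies the idea to neural networks with fairness constraints and differentially private training.

```python
def primal_dual(f_grad, g, g_grad, w, lam=0.0, lr=0.05, dual_lr=0.05, steps=500):
    """Minimize f(w) s.t. g(w) <= 0: descend on the Lagrangian
    f + lam * g in w, ascend on the multiplier lam (kept nonnegative)."""
    for _ in range(steps):
        w = w - lr * (f_grad(w) + lam * g_grad(w))
        lam = max(0.0, lam + dual_lr * g(w))
    return w, lam

# Toy stand-in for a fairness constraint: minimize (w - 2)^2 s.t. w <= 1.
w, lam = primal_dual(lambda w: 2 * (w - 2),   # f'(w)
                     lambda w: w - 1,         # constraint g(w)
                     lambda w: 1.0,           # g'(w)
                     w=0.0)
print(round(w, 2), round(lam, 2))  # converges near w = 1.0, lam = 2.0
```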
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
- Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy [5.416049433853457]
We study how different levels of imbalance in the data affect the accuracy and the fairness of the decisions made by the model.
We demonstrate that even small imbalances and loose privacy guarantees can cause disparate impacts.
arXiv Detail & Related papers (2020-09-10T18:35:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.