Differential Privacy in Federated Learning: Mitigating Inference Attacks with Randomized Response
- URL: http://arxiv.org/abs/2509.13987v1
- Date: Wed, 17 Sep 2025 13:59:38 GMT
- Title: Differential Privacy in Federated Learning: Mitigating Inference Attacks with Randomized Response
- Authors: Ozer Ozturk, Busra Buyuktanir, Gozde Karatas Baydogmus, Kazim Yildiz
- Abstract summary: In federated learning, each client trains a local model using its own data. The trained models are periodically transmitted to the central server. The server then combines the received models using federated aggregation algorithms to obtain a global model. This global model is distributed back to the clients, and the process continues cyclically. Attackers can perform inference attacks on the obtained models to approximate the training dataset, potentially causing data leakage.
- Score: 1.9949261242626626
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning models used for distributed architectures consisting of servers and clients require large amounts of data to achieve high accuracy. Data obtained from clients are collected on a central server for model training. However, storing data on a central server raises concerns about security and privacy. To address this issue, a federated learning architecture has been proposed. In federated learning, each client trains a local model using its own data. The trained models are periodically transmitted to the central server. The server then combines the received models using federated aggregation algorithms to obtain a global model. This global model is distributed back to the clients, and the process continues in a cyclical manner. Although preventing data from leaving the clients enhances security, certain concerns still remain. Attackers can perform inference attacks on the obtained models to approximate the training dataset, potentially causing data leakage. In this study, differential privacy was applied to address the aforementioned security vulnerability, and a performance analysis was conducted. The Data-Unaware Classification Based on Association (duCBA) algorithm was used as the federated aggregation method. Differential privacy was implemented on the data using the Randomized Response technique, and the trade-off between security and performance was examined under different epsilon values. As the epsilon value decreased, the model accuracy declined, and class prediction imbalances were observed. This indicates that higher levels of privacy do not always lead to practical outcomes and that the balance between security and performance must be carefully considered.
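The abstract walks through the standard federated cycle: clients train locally, the server aggregates the received models, and the global model is redistributed. The paper's actual aggregator is duCBA (an association-rule classifier combination), but as a generic illustration of the cycle only, here is a minimal FedAvg-style sketch; the `local_train` function and the synthetic client datasets are hypothetical placeholders, not the paper's setup.

```python
# Minimal sketch of the federated training cycle described in the abstract.
# This shows generic FedAvg-style weight averaging, NOT the paper's duCBA
# aggregation; all names and data below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's local update: logistic-regression SGD on its own data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient step
    return w

# Hypothetical client datasets (raw data never leaves the client).
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50).astype(float))
           for _ in range(3)]

global_w = np.zeros(4)
for rnd in range(10):                          # communication rounds
    local_ws = [local_train(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)       # server-side aggregation
    # The updated global model is then redistributed to all clients.
```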
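The privacy mechanism the paper studies is randomized response applied to the data before training. A minimal sketch of generalized randomized response (GRR) over a k-value attribute, with the standard unbiased frequency estimator, follows; the paper's exact variant and parameterization may differ, and the domain and counts here are illustrative only.

```python
# Hedged sketch of generalized randomized response (GRR): report the true
# value with probability p = e^eps / (e^eps + k - 1), otherwise report one
# of the other k-1 values uniformly. Smaller epsilon => more noise.
import numpy as np

def grr_perturb(value, domain, epsilon, rng):
    """Locally perturb a single categorical value."""
    k = len(domain)
    p = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p:
        return value
    others = [v for v in domain if v != value]
    return rng.choice(others)

def grr_estimate_freq(reports, domain, epsilon):
    """Unbiased frequency estimates recovered from the noisy reports."""
    k = len(domain)
    p = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    q = 1.0 / (np.exp(epsilon) + k - 1)
    return {v: (np.mean([r == v for r in reports]) - q) / (p - q)
            for v in domain}

rng = np.random.default_rng(1)
domain = [0, 1, 2]                            # a hypothetical 3-class attribute
true_vals = rng.choice(domain, size=5000, p=[0.6, 0.3, 0.1])
for eps in [4.0, 1.0, 0.1]:                   # privacy/utility trade-off
    noisy = [grr_perturb(v, domain, eps, rng) for v in true_vals]
    print(eps, grr_estimate_freq(noisy, domain, eps))
```

As epsilon shrinks, the report probability p approaches 1/k, so individual reports carry almost no signal and the estimates become much noisier; this mirrors the accuracy decline and class-prediction imbalance the paper observes at small epsilon.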
Related papers
- Decentralized Privacy-Preserving Federal Learning of Computer Vision Models on Edge Devices [0.0]
Collaborative training of a machine learning model comes with a risk of sharing sensitive or private data. Federated learning offers a way of collectively training a single global model without the need to share client data. We have analyzed the negative effects of gradient compression and gradient noising on the accuracy of convolutional neural networks used for classification.
arXiv Detail & Related papers (2026-01-08T13:10:33Z) - Enhancing Federated Learning Privacy with QUBO [0.0]
Federated learning (FL) is a widely used method for training machine learning (ML) models in a scalable way while preserving privacy. In this paper, we focus on two threat vectors: information leakage by clients during training and adversaries who can query or obtain the global model. Experiments on the MNIST dataset with 300 clients in 20 rounds showed a 95.2% per-round and 49% cumulative privacy exposure reduction.
arXiv Detail & Related papers (2025-11-04T18:06:30Z) - Perfectly-Private Analog Secure Aggregation in Federated Learning [51.61616734974475]
In federated learning, multiple parties train models locally and share their parameters with a central server, which aggregates them to update a global model. In this paper, a novel secure parameter aggregation method is proposed that employs the torus rather than a finite field (a generic masking sketch of this idea appears after the list below).
arXiv Detail & Related papers (2025-09-10T15:22:40Z) - Tackling Federated Unlearning as a Parameter Estimation Problem [2.9085589574462816]
This work introduces an efficient Federated Unlearning framework based on information theory. Our method uses second-order Hessian information to identify and selectively reset only the parameters most sensitive to the data being forgotten.
arXiv Detail & Related papers (2025-08-26T14:24:45Z) - Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generative technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on centralized training over raw data.
The paper proposes a novel federated face forgery detection learning scheme with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective against such attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z) - Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or properties, or even to reconstruct participant data outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z) - CrowdGuard: Federated Backdoor Detection in Federated Learning [39.58317527488534]
This paper presents a novel defense mechanism, CrowdGuard, that effectively mitigates backdoor attacks in Federated Learning.
CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback.
The evaluation results demonstrate that CrowdGuard achieves a 100% True-Positive-Rate and True-Negative-Rate across various scenarios.
arXiv Detail & Related papers (2022-10-14T11:27:49Z) - A New Implementation of Federated Learning for Privacy and Security Enhancement [27.612480082254486]
Federated learning (FL) has emerged as a new machine learning setting in which no local data needs to be shared and privacy can be well protected.
We propose a model-update-based federated averaging algorithm to defend against Byzantine attacks.
arXiv Detail & Related papers (2022-08-03T03:13:19Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
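The torus-based secure aggregation entry above describes aggregating real-valued parameters on the torus rather than over a finite field. As a generic illustration of the underlying idea, here is a pairwise-masking sketch with all arithmetic taken mod 1 (the torus [0, 1)); this is not the cited paper's exact protocol, and the client count, dimensions, and update values are hypothetical.

```python
# Generic pairwise-masking sketch of secure aggregation on the torus [0, 1)
# (real values mod 1). Masks cancel in the sum, so the server learns only
# the aggregate; this is NOT the cited paper's exact construction.
import numpy as np

rng = np.random.default_rng(2)
n_clients, dim = 4, 3
params = rng.random((n_clients, dim)) * 0.1    # hypothetical local updates

# Each pair (i, j), i < j, shares a uniform mask r on the torus; client i
# adds it and client j subtracts it, so every mask cancels in the sum.
masked = params.copy()
for i in range(n_clients):
    for j in range(i + 1, n_clients):
        r = rng.random(dim)                    # uniform on [0, 1)
        masked[i] = (masked[i] + r) % 1.0
        masked[j] = (masked[j] - r) % 1.0

# Each individual masked vector is uniform on the torus (one-time-pad style
# secrecy), yet the server recovers the true sum exactly (mod 1).
print(np.allclose(masked.sum(axis=0) % 1.0, params.sum(axis=0) % 1.0))
```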