On the Robustness of LDP Protocols for Numerical Attributes under Data Poisoning Attacks
- URL: http://arxiv.org/abs/2403.19510v3
- Date: Mon, 15 Jul 2024 01:56:48 GMT
- Title: On the Robustness of LDP Protocols for Numerical Attributes under Data Poisoning Attacks
- Authors: Xiaoguang Li, Zitao Li, Ninghui Li, Wenhai Sun
- Abstract summary: Local differential privacy (LDP) protocols are vulnerable to data poisoning attacks.
This vulnerability raises concerns regarding the robustness and reliability of LDP in hostile environments.
- Score: 17.351593328097977
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies reveal that local differential privacy (LDP) protocols are vulnerable to data poisoning attacks, where an attacker can manipulate the final estimate on the server by leveraging the characteristics of LDP and sending carefully crafted data from a small fraction of controlled local clients. This vulnerability raises concerns regarding the robustness and reliability of LDP in hostile environments. In this paper, we conduct a systematic investigation of the robustness of state-of-the-art LDP protocols for numerical attributes, i.e., categorical frequency oracles (CFOs) with binning and consistency, and distribution reconstruction. We evaluate protocol robustness through an attack-driven approach and propose new metrics for cross-protocol attack-gain measurement. The results indicate that the Square Wave and CFO-based protocols in the Server setting are more robust against the attack than the CFO-based protocols in the User setting. Our evaluation also uncovers new relationships between LDP security and its inherent design choices. We find that the hash domain size in local-hashing-based LDP has a profound impact on protocol robustness, beyond its well-known effect on utility. Further, we propose a zero-shot attack detection method that leverages the rich reconstructed distribution information. Experiments show that our detection significantly improves on existing methods and effectively identifies data manipulation in challenging scenarios.
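For concreteness, below is a minimal Python sketch of the Square Wave (SW) mechanism that the paper evaluates, together with the trivial behavior of a poisoning client that always reports the attacker's target. The bandwidth formula follows the original SW derivation (Li et al., 2020); the function names are ours, not the paper's.

```python
import numpy as np

def sw_perturb(v, eps, rng):
    """Square Wave report for a value v in [0, 1].

    A minimal sketch: with high probability the report falls in a window
    of half-width b around v; otherwise it is uniform over the rest of
    the output range [-b, 1 + b].
    """
    e = np.exp(eps)
    b = (eps * e - e + 1) / (2 * e * (e - 1 - eps))  # closed-form bandwidth
    p = e / (2 * b * e + 1)            # density inside [v - b, v + b]
    if rng.random() < 2 * b * p:       # report lands inside the window
        return rng.uniform(v - b, v + b)
    u = rng.uniform(0.0, 1.0)          # total length outside the window is 1
    return -b + u if u < v else v + b + (u - v)

def poisoned_report(target):
    # A controlled client ignores its true value and always submits the
    # report that maximally supports the attacker's target point.
    return target

rng = np.random.default_rng(0)
honest = [sw_perturb(0.3, 1.0, rng) for _ in range(1000)]
```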
Related papers
- Benchmarking Secure Sampling Protocols for Differential Privacy [3.0325535716232404]
Two well-known models of Differential Privacy (DP) are the central model and the local model.
Recently, many studies have proposed to achieve DP with Secure Multi-party Computation (MPC) in distributed settings.
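As a quick point of reference (not the MPC-based sampling the paper benchmarks), the two models differ in where noise is introduced; a minimal Python sketch of a counting query under each model:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=10_000)   # each user holds one private bit
eps = 1.0

# Central model: a trusted curator sees the raw bits and adds one Laplace draw.
central_count = data.sum() + rng.laplace(scale=1.0 / eps)

# Local model: binary randomized response, each user flips independently.
p = np.exp(eps) / (np.exp(eps) + 1)      # probability of reporting truthfully
reports = np.where(rng.random(data.size) < p, data, 1 - data)
local_count = (reports.sum() - data.size * (1 - p)) / (2 * p - 1)  # debiased
```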
arXiv Detail & Related papers (2024-09-16T19:04:47Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
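The abstract does not name the aggregation rules, so the following is a hedged plaintext sketch of two standard robust rules (coordinate-wise median and trimmed mean); PriRoAgg's contribution is executing such rules under Lagrange coded computing and zero-knowledge proofs, which this sketch does not attempt.

```python
import numpy as np

def coordinate_median(updates):
    # Robust to a minority of arbitrary (Byzantine) updates per coordinate.
    return np.median(np.stack(updates), axis=0)

def trimmed_mean(updates, k):
    # Drop the k largest and k smallest values per coordinate, then average.
    x = np.sort(np.stack(updates), axis=0)
    return x[k:len(updates) - k].mean(axis=0)
```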
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Data Poisoning Attacks to Locally Differentially Private Frequent Itemset Mining Protocols [13.31395140464466]
Local differential privacy (LDP) provides a way for an untrusted data collector to aggregate users' data without violating their privacy.
Various privacy-preserving data analysis tasks have been studied under the protection of LDP, such as frequency estimation, frequent itemset mining, and machine learning.
Recent research has demonstrated the vulnerability of certain LDP protocols to data poisoning attacks.
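To make the attack surface concrete, here is a hedged sketch of frequency estimation with generalized randomized response (a common LDP building block) and the "always report the target" poisoning strategy studied in prior LDP poisoning work; all names are ours.

```python
import numpy as np

def grr_report(v, d, eps, rng):
    # Generalized randomized response over {0, ..., d-1}: keep the true
    # item with probability p, else pick a uniform different item.
    p = np.exp(eps) / (np.exp(eps) + d - 1)
    if rng.random() < p:
        return v
    return (v + 1 + rng.integers(d - 1)) % d   # uniform over the other d-1 items

def grr_frequencies(reports, d, eps):
    n = len(reports)
    p = np.exp(eps) / (np.exp(eps) + d - 1)
    q = (1 - p) / (d - 1)
    counts = np.bincount(reports, minlength=d)
    return (counts - n * q) / (n * (p - q))    # unbiased frequency estimates

# Poisoning: every fake client simply reports the target item; each fake
# report adds roughly 1 / (n * (p - q)) to the target's estimated frequency.
```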
arXiv Detail & Related papers (2024-06-27T18:11:19Z)
- Stratified Prediction-Powered Inference for Hybrid Language Model Evaluation [62.2436697657307]
Prediction-powered inference (PPI) is a method that improves statistical estimates based on limited human-labeled data.
We propose a method called Stratified Prediction-Powered Inference (StratPPI).
We show that the basic PPI estimates can be considerably improved by employing simple data stratification strategies.
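The core (unstratified) PPI estimator fits in a few lines; the stratified wrapper below is a hypothetical illustration of the abstract's idea, not the exact StratPPI weighting from the paper.

```python
import numpy as np

def ppi_mean(y_labeled, f_labeled, f_unlabeled):
    # Prediction-powered estimate of E[Y]: the model's mean prediction on
    # unlabeled data, corrected by the model's bias on the labeled data.
    rectifier = np.mean(np.asarray(y_labeled) - np.asarray(f_labeled))
    return np.mean(f_unlabeled) + rectifier

def strat_ppi_mean(strata):
    # Hypothetical stratified variant: apply PPI within each stratum and
    # combine with known stratum weights (w) summing to 1.
    return sum(w * ppi_mean(y, fl, fu) for w, y, fl, fu in strata)
```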
arXiv Detail & Related papers (2024-06-06T17:37:39Z)
- Certifiably Byzantine-Robust Federated Conformal Prediction [49.23374238798428]
We introduce a novel framework, Rob-FCP, which executes robust federated conformal prediction to effectively counter malicious clients.
We empirically demonstrate the robustness of Rob-FCP against diverse proportions of malicious clients under a variety of Byzantine attacks.
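For background, conformal prediction rests on a calibration quantile like the generic split-conformal sketch below; in the federated setting calibration scores come from clients, so a Byzantine client can bias this threshold, which is the failure mode Rob-FCP counters. This is a generic sketch, not Rob-FCP itself.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha):
    # Split-conformal threshold: the ceil((n+1)(1-alpha))-th smallest
    # calibration nonconformity score, giving >= 1-alpha marginal coverage.
    n = len(cal_scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(cal_scores)[min(k, n) - 1]

# Federated use: scores are pooled across clients before thresholding; a
# malicious client submitting tiny scores shrinks the prediction sets and
# silently breaks the coverage guarantee.
rng = np.random.default_rng(0)
pooled = np.concatenate([rng.random(100) for _ in range(5)])
tau = conformal_threshold(pooled, alpha=0.1)
```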
arXiv Detail & Related papers (2024-06-04T04:43:30Z)
- Sketches-based join size estimation under local differential privacy [3.0945730947183203]
Join size estimation on sensitive data poses a risk of privacy leakage.
Local differential privacy (LDP) is a solution to preserve privacy while collecting sensitive data.
We introduce a novel algorithm called LDPJoinSketch for sketch-based join size estimation under LDP.
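As background on the sketching side only (LDPJoinSketch's LDP perturbation is not reproduced here), this is a minimal Fast-AGMS-style sketch whose inner product estimates join size; the hash functions are toy lookup tables and all names are ours.

```python
import numpy as np

UNIVERSE = 10_000                             # toy key universe
W = 256                                       # sketch width
rng = np.random.default_rng(0)
bucket = rng.integers(0, W, size=UNIVERSE)    # shared bucket hash
sign = rng.choice([-1, 1], size=UNIVERSE)     # shared +/-1 hash

def fagms(keys):
    sk = np.zeros(W)
    for k in keys:
        sk[bucket[k]] += sign[k]              # signed counter per bucket
    return sk

table_a = rng.integers(0, UNIVERSE, size=5_000)   # join keys of table A
table_b = rng.integers(0, UNIVERSE, size=5_000)   # join keys of table B
# E[<sketch_A, sketch_B>] = sum_v fA(v) * fB(v), i.e. the join size.
est = float(np.dot(fagms(table_a), fagms(table_b)))
```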
arXiv Detail & Related papers (2024-05-19T01:21:54Z)
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
- Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks [72.03945355787776]
We advocate MDP, a lightweight, pluggable, and effective defense for PLMs as few-shot learners.
We show analytically that MDP creates an interesting dilemma for the attacker to choose between attack effectiveness and detection evasiveness.
arXiv Detail & Related papers (2023-09-23T04:41:55Z)
- Revealing the True Cost of Locally Differentially Private Protocols: An Auditing Perspective [4.5282933786221395]
We introduce the LDP-Auditor framework for empirically estimating the privacy loss of locally differentially private mechanisms.
We extensively explore the factors influencing the privacy audit, such as the impact of different encoding and perturbation functions.
We present a notable achievement of our LDP-Auditor framework, which is the discovery of a bug in a state-of-the-art LDP Python package.
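In the same spirit, a bare-bones empirical audit can be sketched directly: run the mechanism repeatedly on two inputs and lower-bound the privacy loss from the observed output frequencies. This omits the confidence intervals a real auditor such as LDP-Auditor would use; all names are ours.

```python
import numpy as np

def empirical_eps(mechanism, x0, x1, trials=200_000, seed=0):
    # Monte Carlo lower bound on the privacy loss of a discrete mechanism:
    # max over outputs o of |log Pr[M(x0)=o] - log Pr[M(x1)=o]|.
    rng = np.random.default_rng(seed)
    out0 = np.array([mechanism(x0, rng) for _ in range(trials)])
    out1 = np.array([mechanism(x1, rng) for _ in range(trials)])
    eps_hat = 0.0
    for o in np.unique(np.concatenate([out0, out1])):
        p0 = max((out0 == o).mean(), 1.0 / trials)   # avoid log(0)
        p1 = max((out1 == o).mean(), 1.0 / trials)
        eps_hat = max(eps_hat, abs(np.log(p0 / p1)))
    return eps_hat
```

If the estimated epsilon exceeds the mechanism's claimed budget, the implementation is buggy; a gap in the other direction shows how loose the analysis (or the audit) is.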
arXiv Detail & Related papers (2023-09-04T13:29:19Z)
- FedCC: Robust Federated Learning against Model Poisoning Attacks [0.0]
Federated Learning is designed to address privacy concerns in learning models.
This distributed paradigm safeguards data privacy but changes the attack surface, since the server cannot access local datasets.
arXiv Detail & Related papers (2022-12-05T01:52:32Z)
- Round-robin differential phase-time-shifting protocol for quantum key distribution: theory and experiment [58.03659958248968]
Quantum key distribution (QKD) allows the establishment of common cryptographic keys among distant parties.
Recently, a QKD protocol that circumvents the need for monitoring signal disturbance has been proposed and demonstrated in initial experiments.
We derive the security proofs of the round-robin differential phase-time-shifting protocol in the collective attack scenario.
Our results show that the RRDPTS protocol can achieve a higher secret key rate than RRDPS under high quantum bit error rates.
arXiv Detail & Related papers (2021-03-15T15:20:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.