Data Poisoning Attacks to Locally Differentially Private Range Query Protocols
- URL: http://arxiv.org/abs/2503.03454v2
- Date: Thu, 06 Mar 2025 14:25:03 GMT
- Title: Data Poisoning Attacks to Locally Differentially Private Range Query Protocols
- Authors: Ting-Wei Liao, Chih-Hsun Lin, Yu-Lin Tsai, Takao Murakami, Chia-Mu Yu, Jun Sakuma, Chun-Ying Huang, Hiroaki Kikuchi
- Abstract summary: Local Differential Privacy (LDP) has been widely adopted to protect user privacy in decentralized data collection. Recent studies have revealed that LDP protocols are vulnerable to data poisoning attacks. We present the first study on data poisoning attacks targeting LDP range query protocols.
- Score: 15.664794320925562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Local Differential Privacy (LDP) has been widely adopted to protect user privacy in decentralized data collection. However, recent studies have revealed that LDP protocols are vulnerable to data poisoning attacks, where malicious users manipulate their reported data to distort aggregated results. In this work, we present the first study on data poisoning attacks targeting LDP range query protocols, focusing on both tree-based and grid-based approaches. We identify three key challenges in executing such attacks: crafting consistent and effective fake data, maintaining data consistency across levels or grids, and preventing server detection. To address the first two challenges, we propose novel attack methods that are provably optimal, including a tree-based attack and a grid-based attack, designed to manipulate range query results with high effectiveness. Our key finding is that the common post-processing procedure, Norm-Sub, in LDP range query protocols can help the attacker massively amplify their attack effectiveness. In addition, we study a potential countermeasure, but also propose an adaptive attack capable of evading this defense to address the third challenge. We evaluate our methods through theoretical analysis and extensive experiments on synthetic and real-world datasets. Our results show that the proposed attacks can significantly amplify estimations for arbitrary range queries by manipulating a small fraction of users, giving them 5-10x more influence on the estimation than a normal user.
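To make the setting concrete, below is a minimal, self-contained sketch (not the paper's attack) of the reporting pipeline such attacks target: genuine users perturb their values with k-ary randomized response (GRR), a small fraction of fake users all report one attacker-chosen value, and the server unbiases the counts and applies Norm-Sub post-processing. The domain size, epsilon, fake-user count, and target value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def grr_perturb(value, k, eps):
    """k-ary randomized response: keep the true value w.p. p, else report a uniform other value."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p:
        return value
    other = rng.integers(k - 1)
    return other if other < value else other + 1

def norm_sub(est):
    """Norm-Sub: shift all estimates down by a common delta and clip at zero,
    so the result is non-negative and still sums to the original total."""
    total = est.sum()
    delta_lo, delta_hi = 0.0, float(est.max())
    for _ in range(100):                      # bisection on the shift delta
        delta = (delta_lo + delta_hi) / 2
        if np.clip(est - delta, 0, None).sum() > total:
            delta_lo = delta
        else:
            delta_hi = delta
    return np.clip(est - delta, 0, None)

k, eps = 64, 1.0
n_genuine, n_fake, target = 10_000, 200, 7    # attacker controls ~2% of users

true_values = rng.integers(k, size=n_genuine)
reports = [grr_perturb(v, k, eps) for v in true_values] + [target] * n_fake

# Unbiased GRR frequency estimation on the server side.
p = np.exp(eps) / (np.exp(eps) + k - 1)
q = 1.0 / (np.exp(eps) + k - 1)
n = n_genuine + n_fake
counts = np.bincount(reports, minlength=k)
est = (counts - n * q) / (p - q)

print("target estimate, raw      :", est[target])
print("target estimate, Norm-Sub :", norm_sub(est)[target])
print("true count of target      :", (true_values == target).sum())
```

Norm-Sub here enforces consistency (non-negative estimates that sum to the number of reports); the paper's key finding is that an attacker can exploit exactly this post-processing step to massively amplify the effect of their fake reports.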
Related papers
- DataSentinel: A Game-Theoretic Detection of Prompt Injection Attacks [101.52204404377039]
LLM-integrated applications and agents are vulnerable to prompt injection attacks.
A detection method aims to determine whether a given input is contaminated by an injected prompt.
We propose DataSentinel, a game-theoretic method to detect prompt injection attacks.
arXiv Detail & Related papers (2025-04-15T16:26:21Z) - Poisoning Attacks to Local Differential Privacy Protocols for Trajectory Data [14.934626547047763]
Trajectory data, which tracks movements through geographic locations, is crucial for improving real-world applications.
Local differential privacy (LDP) offers a solution by allowing individuals to locally perturb their trajectory data before sharing it.
Despite its privacy benefits, LDP protocols are vulnerable to data poisoning attacks, where attackers inject fake data to manipulate aggregated results.
arXiv Detail & Related papers (2025-03-06T02:31:45Z) - Sub-optimal Learning in Meta-Classifier Attacks: A Study of Membership Inference on Differentially Private Location Aggregates [19.09251452596829]
We show that a significant gap exists between the expected attack accuracy given by DP and the empirical attack accuracy even with informed attackers.
We propose two new metric-based MIAs: the one-threshold attack and the two-threshold attack.
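As a rough illustration (with a made-up score function and thresholds, not the metrics defined in the paper), the two attacks differ only in their decision rule: one compares a per-record score against a single threshold, the other tests whether the score falls inside a band.

```python
import numpy as np

def one_threshold_attack(scores, tau):
    """Predict 'member' whenever the score exceeds a single threshold."""
    return scores > tau

def two_threshold_attack(scores, tau_low, tau_high):
    """Predict 'member' only when the score falls inside the band [tau_low, tau_high]."""
    return (scores >= tau_low) & (scores <= tau_high)

# Toy scores: members tend to score slightly higher than non-members.
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(1.0, 1.0, 1_000), rng.normal(0.0, 1.0, 1_000)])
labels = np.concatenate([np.ones(1_000, bool), np.zeros(1_000, bool)])

for name, pred in [
    ("one-threshold", one_threshold_attack(scores, tau=0.5)),
    ("two-threshold", two_threshold_attack(scores, tau_low=0.5, tau_high=3.0)),
]:
    print(f"{name} attack accuracy: {(pred == labels).mean():.3f}")
```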
arXiv Detail & Related papers (2024-12-29T12:51:34Z) - Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D²AO).
Our method can achieve the state-of-the-art attack performance while preserving the clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z) - AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
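A hedged sketch of the general detection idea: embed each incoming query, compare it with a short history of recent query embeddings, and flag it when similarity is suspiciously high, since query-based attacks issue long sequences of closely related inputs. The random-projection "encoder", history size, and threshold below are placeholders standing in for ACPT's prompt-tuned CLIP image encoder and its tuned settings.

```python
import numpy as np
from collections import deque

class SimilarityDetector:
    def __init__(self, encoder, history=20, threshold=0.95):
        self.encoder = encoder            # maps an input to an embedding vector
        self.buffer = deque(maxlen=history)
        self.threshold = threshold

    def is_attack_query(self, x):
        z = self.encoder(x)
        z = z / np.linalg.norm(z)
        # Flag if the new query is nearly identical to any recently seen query.
        flagged = any(float(z @ past) > self.threshold for past in self.buffer)
        self.buffer.append(z)
        return flagged

# Toy usage with a random projection standing in for the tuned encoder.
rng = np.random.default_rng(2)
proj = rng.normal(size=(3 * 32 * 32, 128))
detector = SimilarityDetector(encoder=lambda x: x.reshape(-1) @ proj)

benign = rng.normal(size=(3, 32, 32))
print(detector.is_attack_query(benign))                                           # first query: False
print(detector.is_attack_query(benign + 0.001 * rng.normal(size=benign.shape)))   # near-duplicate: flagged
```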
arXiv Detail & Related papers (2024-08-04T09:53:50Z) - Robust Utility-Preserving Text Anonymization Based on Large Language Models [80.5266278002083]
Text anonymization is crucial for sharing sensitive data while maintaining privacy.
Existing techniques face the emerging challenge posed by the re-identification capabilities of Large Language Models.
This paper proposes a framework composed of three LLM-based components -- a privacy evaluator, a utility evaluator, and an optimization component.
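The control flow of such a framework can be sketched generically; everything below is a hypothetical placeholder (in the paper each component is LLM-based), and only the iterate-until-both-scores-pass structure is illustrated.

```python
def anonymize(text, rewrite, privacy_score, utility_score,
              privacy_target=0.9, utility_target=0.8, max_rounds=5):
    """Iteratively rewrite text until both the privacy and utility evaluators accept it."""
    candidate = text
    for _ in range(max_rounds):
        p = privacy_score(candidate)   # e.g., 1 - estimated re-identification risk
        u = utility_score(candidate)   # e.g., task performance retained after rewriting
        if p >= privacy_target and u >= utility_target:
            return candidate
        # The optimization component proposes a new rewrite guided by both scores.
        candidate = rewrite(candidate, p, u)
    return candidate

# Toy usage with trivial stand-in components.
result = anonymize(
    "Alice met Bob at 5pm in Berlin.",
    rewrite=lambda t, p, u: t.replace("Alice", "[PERSON]").replace("Bob", "[PERSON]"),
    privacy_score=lambda t: 0.0 if ("Alice" in t or "Bob" in t) else 1.0,
    utility_score=lambda t: 1.0,
)
print(result)   # "[PERSON] met [PERSON] at 5pm in Berlin."
```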
arXiv Detail & Related papers (2024-07-16T14:28:56Z) - Data Poisoning Attacks to Locally Differentially Private Frequent Itemset Mining Protocols [13.31395140464466]
Local differential privacy (LDP) provides a way for an untrusted data collector to aggregate users' data without violating their privacy.
Various privacy-preserving data analysis tasks have been studied under the protection of LDP, such as frequency estimation, frequent itemset mining, and machine learning.
Recent research has demonstrated the vulnerability of certain LDP protocols to data poisoning attacks.
arXiv Detail & Related papers (2024-06-27T18:11:19Z) - On the Robustness of LDP Protocols for Numerical Attributes under Data Poisoning Attacks [17.351593328097977]
Local differential privacy (LDP) protocols are vulnerable to data poisoning attacks. This vulnerability raises concerns regarding the robustness and reliability of LDP in hostile environments.
arXiv Detail & Related papers (2024-03-28T15:43:38Z) - Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
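For intuition only, the sketch below shows what a Byzantine-robust aggregation rule looks like in general, using a coordinate-wise median over client updates; this is not InferGuard's actual rule, which the paper designs specifically against distribution inference attacks.

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate client updates by taking the per-coordinate median instead of the mean."""
    return np.median(np.stack(updates), axis=0)

rng = np.random.default_rng(3)
honest = [rng.normal(0.0, 0.1, size=10) for _ in range(18)]
malicious = [np.full(10, 50.0) for _ in range(2)]          # crafted outlier updates

print("mean aggregate  :", np.mean(np.stack(honest + malicious), axis=0)[:3])
print("median aggregate:", coordinate_wise_median(honest + malicious)[:3])
```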
arXiv Detail & Related papers (2024-03-05T17:41:35Z) - Privacy Attacks in Decentralized Learning [10.209045867868674]
Decentralized Gradient Descent (D-GD) allows a set of users to perform collaborative learning without sharing their data.
We propose the first attack against D-GD that enables a user to reconstruct the private data of other users outside their immediate neighborhood.
We validate the effectiveness of our attack on real graphs and datasets, showing that the number of users compromised by a single or a handful of attackers is often surprisingly large.
arXiv Detail & Related papers (2024-02-15T15:06:33Z) - Securing Recommender System via Cooperative Training [78.97620275467733]
We propose a general framework, Triple Cooperative Defense (TCD), which employs three cooperative models that mutually enhance data.
Considering existing attacks struggle to balance bi-level optimization and efficiency, we revisit poisoning attacks in recommender systems.
We put forth a Game-based Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a game-theoretic process.
arXiv Detail & Related papers (2024-01-23T12:07:20Z) - Protecting Model Adaptation from Trojans in the Unlabeled Data [120.42853706967188]
This paper explores the potential trojan attacks on model adaptation launched by well-designed poisoning target data.
We propose a plug-and-play method named DiffAdapt, which can be seamlessly integrated with existing adaptation algorithms.
arXiv Detail & Related papers (2024-01-11T16:42:10Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Unsupervised Finetuning [80.58625921631506]
We propose two strategies for combining source and target data in unsupervised finetuning.
The motivation of the former strategy is to add a small portion of source data back to occupy their pretrained representation space.
The motivation of the latter strategy is to increase the data density and help learn more compact representation.
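A minimal sketch of the former strategy; the mixing fraction and the placeholder arrays standing in for the source and target datasets are illustrative assumptions.

```python
import numpy as np

def mix_source_into_target(source, target, source_fraction=0.1, seed=0):
    """Return the target data plus a small random subset of the source data."""
    rng = np.random.default_rng(seed)
    n_src = min(int(source_fraction * len(target)), len(source))
    idx = rng.choice(len(source), size=n_src, replace=False)
    return np.concatenate([target, source[idx]], axis=0)

source = np.zeros((5_000, 3, 32, 32), dtype=np.float32)   # pretraining (source) images
target = np.zeros((1_000, 3, 32, 32), dtype=np.float32)   # downstream (target) images
print(mix_source_into_target(source, target).shape)       # (1100, 3, 32, 32)
```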
arXiv Detail & Related papers (2021-10-18T17:57:05Z) - Curse or Redemption? How Data Heterogeneity Affects the Robustness of
Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features in federated learning but often overlooked in the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdooring attacks in federated learning through comprehensive experiments using synthetic data and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)