Fine-grained Manipulation Attacks to Local Differential Privacy Protocols for Data Streams
- URL: http://arxiv.org/abs/2505.01292v1
- Date: Fri, 02 May 2025 14:09:56 GMT
- Title: Fine-grained Manipulation Attacks to Local Differential Privacy Protocols for Data Streams
- Authors: Xinyu Li, Xuebin Ren, Shusen Yang, Liang Shi, Chia-Mu Yu
- Abstract summary: Local Differential Privacy (LDP) enables massive data collection and analysis while protecting users' privacy. Recent findings indicate that LDP protocols can be easily disrupted by poisoning or manipulation attacks. Our research fills the gap by developing novel fine-grained manipulation attacks to LDP protocols for data streams.
- Score: 19.89063520419922
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Local Differential Privacy (LDP) enables massive data collection and analysis while protecting end users' privacy against untrusted aggregators. It has been applied to various data types (e.g., categorical, numerical, and graph data) and application settings (e.g., static and streaming). Recent findings indicate that LDP protocols can be easily disrupted by poisoning or manipulation attacks, which leverage injected/corrupted fake users to send crafted data conforming to the LDP reports. However, current attacks primarily target static protocols, neglecting the security of LDP protocols in streaming settings. Our research fills the gap by developing novel fine-grained manipulation attacks to LDP protocols for data streams. By reviewing the attack surfaces in existing algorithms, we introduce a unified attack framework with composable modules, which can manipulate the LDP estimated stream toward a target stream. Our attack framework can adapt to state-of-the-art streaming LDP algorithms with different analytic tasks (e.g., frequency and mean) and LDP models (event-level, user-level, w-event level). We validate our attacks theoretically and through extensive experiments on real-world datasets, and finally explore a possible defense mechanism for mitigating these attacks.
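To make the attack surface concrete, here is a minimal, hypothetical sketch (not the authors' streaming protocol) of output poisoning against Generalized Randomized Response (GRR), a standard LDP frequency oracle: fake users bypass the perturbation step and always report a target item, which biases the aggregator's unbiased frequency estimator upward for that item.

```python
import math
import random

def grr_perturb(value, domain, eps, rng):
    """Generalized Randomized Response: keep the true value w.p. p,
    otherwise report a uniformly random other item."""
    d = len(domain)
    p = math.exp(eps) / (math.exp(eps) + d - 1)
    if rng.random() < p:
        return value
    return rng.choice([v for v in domain if v != value])

def grr_estimate(reports, domain, eps):
    """Unbiased frequency estimates from perturbed GRR reports."""
    d, n = len(domain), len(reports)
    p = math.exp(eps) / (math.exp(eps) + d - 1)
    q = (1 - p) / (d - 1)
    counts = {v: 0 for v in domain}
    for r in reports:
        counts[r] += 1
    return {v: (counts[v] / n - q) / (p - q) for v in domain}

rng = random.Random(0)
domain = list(range(10))
genuine = [rng.choice(domain) for _ in range(10000)]
reports = [grr_perturb(v, domain, eps=1.0, rng=rng) for v in genuine]

# Output poisoning: fake users skip perturbation and always report the target.
target, n_fake = 7, 1000
poisoned = reports + [target] * n_fake

honest = grr_estimate(reports, domain, eps=1.0)
attacked = grr_estimate(poisoned, domain, eps=1.0)
print(f"target freq, honest:   {honest[target]:.3f}")
print(f"target freq, attacked: {attacked[target]:.3f}")
```

Because the aggregator cannot distinguish a crafted report from an honest one, even a modest fraction of fake users shifts the estimate noticeably; the streaming attacks in the paper generalize this idea to steer an entire estimated stream toward a target stream.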
Related papers
- Mitigating Data Poisoning Attacks to Local Differential Privacy [14.050238622718798]
We propose a comprehensive mitigation framework for popular frequency estimation, which contains a suite of novel defenses. For detection, we present a new method to precisely identify bogus reports so that LDP aggregation can be performed over the "clean" data. When the attack behavior becomes stealthy and directly filtering out malicious users is difficult, we propose a detector that can effectively recognize hidden adversarial patterns.
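The paper's detector is not specified in this summary; as a purely illustrative sketch (a hypothetical threshold rule with assumed GRR parameters, not the paper's method), one simple consistency check flags items whose perturbed-report count exceeds what even an item held by every user could plausibly produce under honest reporting:

```python
import math

def flag_suspicious_items(counts, n, p, z=3.0):
    """Hypothetical consistency check, not the paper's detector.

    Under honest GRR reporting, an item's report count is Binomial(n, f*p + (1-f)*q)
    with true frequency f <= 1, so its mean is at most n*p (since p > q). Counts far
    above that bound cannot come from honest users alone and are flagged.
    """
    flagged = []
    for item, c in counts.items():
        mean_max = n * p                   # best case: every user holds this item
        std = math.sqrt(n * p * (1 - p))   # binomial standard deviation at f = 1
        if c > mean_max + z * std:
            flagged.append(item)
    return flagged

# Example: 10,000 users, GRR with p = 0.25; item "b" has an implausibly high count.
counts = {"a": 2100, "b": 2900, "c": 800}
print(flag_suspicious_items(counts, n=10000, p=0.25))  # → ['b']
```

A check like this only catches crude attacks that concentrate fake reports on one item; as the abstract notes, stealthy attacks that spread their influence require detectors that look for subtler adversarial patterns.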
arXiv Detail & Related papers (2025-06-02T18:37:15Z) - Defending against Indirect Prompt Injection by Instruction Detection [81.98614607987793]
We propose a novel approach that takes external data as input and leverages the behavioral state of LLMs during both forward and backward propagation to detect potential IPI attacks. Our approach achieves a detection accuracy of 99.60% in the in-domain setting and 96.90% in the out-of-domain setting, while reducing the attack success rate to just 0.12% on the BIPIA benchmark.
arXiv Detail & Related papers (2025-05-08T13:04:45Z) - Poisoning Attacks to Local Differential Privacy Protocols for Trajectory Data [14.934626547047763]
Trajectory data, which tracks movements through geographic locations, is crucial for improving real-world applications. Local differential privacy (LDP) offers a solution by allowing individuals to locally perturb their trajectory data before sharing it. Despite its privacy benefits, LDP protocols are vulnerable to data poisoning attacks, where attackers inject fake data to manipulate aggregated results.
arXiv Detail & Related papers (2025-03-06T02:31:45Z) - Data Poisoning Attacks to Locally Differentially Private Range Query Protocols [15.664794320925562]
Local Differential Privacy (LDP) has been widely adopted to protect user privacy in decentralized data collection. Recent studies have revealed that LDP protocols are vulnerable to data poisoning attacks. We present the first study on data poisoning attacks targeting LDP range query protocols.
arXiv Detail & Related papers (2025-03-05T12:40:34Z) - PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data. The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulations of model updates. We develop a general framework, PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proofs to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z) - Data Poisoning Attacks to Locally Differentially Private Frequent Itemset Mining Protocols [13.31395140464466]
Local differential privacy (LDP) provides a way for an untrusted data collector to aggregate users' data without violating their privacy.
Various privacy-preserving data analysis tasks have been studied under the protection of LDP, such as frequency estimation, frequent itemset mining, and machine learning.
Recent research has demonstrated the vulnerability of certain LDP protocols to data poisoning attacks.
arXiv Detail & Related papers (2024-06-27T18:11:19Z) - On the Robustness of LDP Protocols for Numerical Attributes under Data Poisoning Attacks [17.351593328097977]
Local differential privacy (LDP) protocols are vulnerable to data poisoning attacks. This vulnerability raises concerns regarding the robustness and reliability of LDP in hostile environments.
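To illustrate why numerical-attribute LDP protocols are fragile under poisoning (a generic sketch with assumed parameters, not the specific protocols studied in this paper): honest users add local Laplace noise to values in [0, 1] and the aggregator averages the reports, so fake users who skip the mechanism and submit extreme values drag the estimated mean toward their target.

```python
import math
import random

def laplace_report(x, eps, rng):
    """Local Laplace mechanism for x in [0, 1]: report x + Lap(1/eps),
    sampled via the inverse-CDF method."""
    u = rng.random() - 0.5
    return x + (1.0 / eps) * math.copysign(math.log(1 - 2 * abs(u)), u)

rng = random.Random(1)
genuine = [laplace_report(rng.random(), eps=1.0, rng=rng) for _ in range(10000)]

# Fake users ignore the mechanism entirely and submit a large, fixed report.
fake = [5.0] * 500  # 5% fake users pulling the mean upward

honest_mean = sum(genuine) / len(genuine)  # close to the true mean, ~0.5
attacked_mean = sum(genuine + fake) / (len(genuine) + len(fake))
print(f"honest: {honest_mean:.3f}  attacked: {attacked_mean:.3f}")
```

Because the noisy reports legitimately range far outside [0, 1], the server cannot simply clip or reject extreme values without biasing honest estimates, which is exactly the robustness gap such studies examine.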
arXiv Detail & Related papers (2024-03-28T15:43:38Z) - Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR).
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z) - Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks [72.03945355787776]
We advocate MDP, a lightweight, pluggable, and effective defense for PLMs as few-shot learners.
We show analytically that MDP creates an interesting dilemma for the attacker to choose between attack effectiveness and detection evasiveness.
arXiv Detail & Related papers (2023-09-23T04:41:55Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Lossless Compression of Efficient Private Local Randomizers [55.657133416044104]
Locally Differentially Private (LDP) Reports are commonly used for collection of statistics and machine learning in the federated setting.
In many cases the best known LDP algorithms require sending prohibitively large messages from the client device to the server.
This has led to significant efforts on reducing the communication cost of LDP algorithms.
arXiv Detail & Related papers (2021-02-24T07:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.