Poisoning Attacks to Local Differential Privacy Protocols for Trajectory Data
- URL: http://arxiv.org/abs/2503.07483v1
- Date: Thu, 06 Mar 2025 02:31:45 GMT
- Title: Poisoning Attacks to Local Differential Privacy Protocols for Trajectory Data
- Authors: I-Jung Hsu, Chih-Hsun Lin, Chia-Mu Yu, Sy-Yen Kuo, Chun-Ying Huang
- Abstract summary: Trajectory data, which tracks movements through geographic locations, is crucial for improving real-world applications. Local differential privacy (LDP) offers a solution by allowing individuals to locally perturb their trajectory data before sharing it. Despite its privacy benefits, LDP protocols are vulnerable to data poisoning attacks, where attackers inject fake data to manipulate aggregated results.
- Score: 14.934626547047763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trajectory data, which tracks movements through geographic locations, is crucial for improving real-world applications. However, collecting such sensitive data raises considerable privacy concerns. Local differential privacy (LDP) offers a solution by allowing individuals to locally perturb their trajectory data before sharing it. Despite its privacy benefits, LDP protocols are vulnerable to data poisoning attacks, where attackers inject fake data to manipulate aggregated results. In this work, we make the first attempt to analyze vulnerabilities in several representative LDP trajectory protocols. We propose TraP, a heuristic algorithm for data Poisoning attacks that uses a prefix-suffix method to optimize fake Trajectory selection, significantly reducing computational complexity. Our experimental results demonstrate that our attack can substantially increase target pattern occurrences in the perturbed trajectory dataset with few fake users. This study underscores the urgent need for robust defenses and better protocol designs to safeguard LDP trajectory data against malicious manipulation.
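To make the attack surface concrete, the following is a minimal Python sketch. It uses the textbook k-ary generalized randomized response (GRR) mechanism as the LDP primitive and the simplest poisoning strategy, in which fake users always report a target location; the trajectory protocols analyzed in the paper and TraP's prefix-suffix selection of whole fake trajectories are considerably more involved, so the domain, parameters, and function names here are illustrative assumptions.

```python
import math
import random
from collections import Counter

def grr_perturb(value, domain, epsilon):
    """k-ary generalized randomized response: keep the true value w.p. p."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def grr_estimate(reports, domain, epsilon):
    """Unbiased frequency estimates recovered from GRR reports."""
    k, n = len(domain), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1.0 - p) / (k - 1)
    counts = Counter(reports)
    return {v: (counts[v] / n - q) / (p - q) for v in domain}

if __name__ == "__main__":
    random.seed(0)
    grid = [f"cell_{i}" for i in range(16)]    # toy location domain
    epsilon, n_genuine, n_fake = 1.0, 10_000, 500
    target = "cell_3"                          # location the attacker promotes

    # Genuine users perturb their true locations before sharing.
    genuine = [grr_perturb(random.choice(grid), grid, epsilon)
               for _ in range(n_genuine)]

    # Fake users skip perturbation entirely and always report the target,
    # the simplest input-manipulation poisoning strategy.
    poisoned = genuine + [target] * n_fake

    est_clean = grr_estimate(genuine, grid, epsilon)
    est_poisoned = grr_estimate(poisoned, grid, epsilon)
    print(f"clean estimate for {target}:    {est_clean[target]:.4f}")
    print(f"poisoned estimate for {target}: {est_poisoned[target]:.4f}")
```

Even this naive attack inflates the target's estimated frequency well beyond what the fake users' raw count would suggest, because the aggregator's debiasing step amplifies unperturbed reports.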
Related papers
- DataSentinel: A Game-Theoretic Detection of Prompt Injection Attacks [101.52204404377039]
LLM-integrated applications and agents are vulnerable to prompt injection attacks.
A detection method aims to determine whether a given input is contaminated by an injected prompt.
We propose DataSentinel, a game-theoretic method to detect prompt injection attacks.
arXiv Detail & Related papers (2025-04-15T16:26:21Z)
- PPFPL: Cross-silo Privacy-preserving Federated Prototype Learning Against Data Poisoning Attacks on Non-IID Data [24.84385720209427]
Privacy-Preserving Federated Learning (PPFL) allows multiple clients to collaboratively train a deep learning model by submitting hidden model updates.
Existing solutions have struggled to improve the performance of cross-silo PPFL under poisoned Non-IID data.
This paper proposes PPFPL, a privacy-preserving federated prototype learning framework that enhances cross-silo FL performance under poisoned Non-IID data.
arXiv Detail & Related papers (2025-04-04T05:05:24Z) - Data Poisoning Attacks to Locally Differentially Private Range Query Protocols [15.664794320925562]
Local Differential Privacy (LDP) has been widely adopted to protect user privacy in decentralized data collection. Recent studies have revealed that LDP protocols are vulnerable to data poisoning attacks. We present the first study on data poisoning attacks targeting LDP range query protocols.
arXiv Detail & Related papers (2025-03-05T12:40:34Z) - Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves an over 20% improvement in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D²AO).
Our method can achieve the state-of-the-art attack performance while preserving the clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z) - Data Poisoning Attacks to Locally Differentially Private Frequent Itemset Mining Protocols [13.31395140464466]
Local differential privacy (LDP) provides a way for an untrusted data collector to aggregate users' data without violating their privacy.
Various privacy-preserving data analysis tasks have been studied under the protection of LDP, such as frequency estimation, frequent itemset mining, and machine learning.
Recent research has demonstrated the vulnerability of certain LDP protocols to data poisoning attacks.
arXiv Detail & Related papers (2024-06-27T18:11:19Z) - On the Robustness of LDP Protocols for Numerical Attributes under Data Poisoning Attacks [17.351593328097977]
Local differential privacy (LDP) protocols are vulnerable to data poisoning attacks. This vulnerability raises concerns regarding the robustness and reliability of LDP in hostile environments.
arXiv Detail & Related papers (2024-03-28T15:43:38Z) - Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
Our experimental results indicate that InferGuard is highly effective at defending against such attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z) - Protecting Model Adaptation from Trojans in the Unlabeled Data [120.42853706967188]
This paper explores potential trojan attacks on model adaptation launched through well-designed poisoning of the target data. We propose a plug-and-play method named DiffAdapt, which can be seamlessly integrated with existing adaptation algorithms.
arXiv Detail & Related papers (2024-01-11T16:42:10Z)
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not.
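A minimal NumPy sketch of the observation driving this kind of filtering follows; the scoring function, threshold, and toy data are illustrative assumptions, not FedCPA's actual aggregation rule.

```python
import numpy as np

def critical_sets(update, k):
    """Indices of the top-k and bottom-k entries of a flattened update."""
    order = np.argsort(update)
    return set(order[-k:]), set(order[:k])

def jaccard(a, b):
    return len(a & b) / len(a | b)

def similarity(u, v, k):
    """Average Jaccard overlap of top-k and bottom-k critical sets."""
    ut, ub = critical_sets(u, k)
    vt, vb = critical_sets(v, k)
    return 0.5 * (jaccard(ut, vt) + jaccard(ub, vb))

def filter_updates(updates, k=100, threshold=0.3):
    """Keep updates whose mean critical-set overlap with the rest is high.

    Benign clients tend to agree on which parameters are critical;
    poisoned updates typically do not, so their scores fall below
    the (assumed) threshold.
    """
    n = len(updates)
    scores = [np.mean([similarity(updates[i], updates[j], k)
                       for j in range(n) if j != i])
              for i in range(n)]
    return [u for u, s in zip(updates, scores) if s >= threshold]

# Toy usage: nine similar "benign" updates plus one adversarial outlier.
rng = np.random.default_rng(0)
base = rng.normal(size=1000)
benign = [base + 0.05 * rng.normal(size=1000) for _ in range(9)]
poisoned = [-base]            # flipped update disagrees on critical sets
kept = filter_updates(benign + poisoned)
print(f"kept {len(kept)} of 10 updates")
```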
arXiv Detail & Related papers (2023-08-18T05:37:55Z)
- A Trajectory K-Anonymity Model Based on Point Density and Partition [0.0]
This paper develops a trajectory K-anonymity model based on Point Density and Partition (KPDP).
It successfully resists re-identification attacks and reduces the data utility loss of the k-anonymized dataset.
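KPDP's density-based partitioning is not reproduced here, but the k-anonymity property it targets is easy to state in code; the short Python sketch below checks it on an assumed, already-generalized release, where identical trajectories form an anonymity group.

```python
from collections import Counter

def is_k_anonymous(trajectories, k):
    """Check that every released trajectory appears at least k times.

    Each trajectory is a tuple of (generalized) locations; a release is
    k-anonymous when every trajectory is indistinguishable from at least
    k - 1 others.
    """
    groups = Counter(tuple(t) for t in trajectories)
    return all(count >= k for count in groups.values())

# Toy usage: after coarsening points to cells, two groups of size 2.
released = [("A", "B"), ("A", "B"), ("C", "D"), ("C", "D")]
print(is_k_anonymous(released, k=2))   # True
print(is_k_anonymous(released, k=3))   # False
```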
arXiv Detail & Related papers (2023-07-31T17:10:56Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
Our defense, MESAS, is the first that is robust against strong adaptive adversaries and effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Unsupervised Finetuning [80.58625921631506]
We propose two strategies for combining source and target data in unsupervised finetuning; a brief sketch of the first follows below.
The motivation of the former strategy is to add a small portion of source data back to occupy the pretrained representation space.
The motivation of the latter strategy is to increase the data density and help learn a more compact representation.
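A minimal sketch of the former strategy, assuming a PyTorch-style setup in which a random slice of source data is mixed back into the target finetuning set; the mixing fraction and the helper name are hypothetical.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, Subset, TensorDataset

def mix_source_into_target(source_ds, target_ds, fraction=0.1, seed=0):
    """Finetune on the target data plus a small random slice of source data,
    keeping part of the pretrained representation space occupied.
    `fraction` is an assumed hyperparameter, not a value from the paper."""
    g = torch.Generator().manual_seed(seed)
    n = max(1, int(fraction * len(source_ds)))
    idx = torch.randperm(len(source_ds), generator=g)[:n].tolist()
    return ConcatDataset([target_ds, Subset(source_ds, idx)])

# Toy usage with random tensors standing in for unlabeled images.
source = TensorDataset(torch.randn(1000, 3, 32, 32))
target = TensorDataset(torch.randn(200, 3, 32, 32))
loader = DataLoader(mix_source_into_target(source, target),
                    batch_size=64, shuffle=True)
print(len(loader.dataset))   # 200 target + 100 source samples
```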
arXiv Detail & Related papers (2021-10-18T17:57:05Z)