Data Poisoning Attacks to Locally Differentially Private Frequent Itemset Mining Protocols
- URL: http://arxiv.org/abs/2406.19466v1
- Date: Thu, 27 Jun 2024 18:11:19 GMT
- Title: Data Poisoning Attacks to Locally Differentially Private Frequent Itemset Mining Protocols
- Authors: Wei Tong, Haoyu Chen, Jiacheng Niu, Sheng Zhong
- Abstract summary: Local differential privacy (LDP) provides a way for an untrusted data collector to aggregate users' data without violating their privacy.
Various privacy-preserving data analysis tasks have been studied under the protection of LDP, such as frequency estimation, frequent itemset mining, and machine learning.
Recent research has demonstrated the vulnerability of certain LDP protocols to data poisoning attacks.
- Score: 13.31395140464466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Local differential privacy (LDP) provides a way for an untrusted data collector to aggregate users' data without violating their privacy. Various privacy-preserving data analysis tasks have been studied under the protection of LDP, such as frequency estimation, frequent itemset mining, and machine learning. Despite its privacy-preserving properties, recent research has demonstrated the vulnerability of certain LDP protocols to data poisoning attacks. However, existing data poisoning attacks are focused on basic statistics under LDP, such as frequency estimation and mean/variance estimation. As an important data analysis task, the security of LDP frequent itemset mining has yet to be thoroughly examined. In this paper, we aim to address this issue by presenting novel and practical data poisoning attacks against LDP frequent itemset mining protocols. By introducing a unified attack framework with composable attack operations, our data poisoning attack can successfully manipulate the state-of-the-art LDP frequent itemset mining protocols and has the potential to be adapted to other protocols with similar structures. We conduct extensive experiments on three datasets to compare the proposed attack with four baseline attacks. The results demonstrate the severity of the threat and the effectiveness of the proposed attack.
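To give a concrete feel for how a data poisoning attack can skew an LDP estimator, the sketch below is a minimal, hypothetical illustration (it is not the attack framework proposed in this paper, which targets frequent itemset mining protocols): fake users bypass the perturbation step of generalized randomized response (GRR) and always report a target item, inflating its estimated frequency. The domain size, privacy budget, and user counts are arbitrary assumptions.

```python
import numpy as np

def grr_report(value, domain_size, epsilon, rng):
    """Generalized randomized response: keep the true value with prob p, else report a random other value."""
    p = np.exp(epsilon) / (np.exp(epsilon) + domain_size - 1)
    if rng.random() < p:
        return value
    other = rng.integers(domain_size - 1)
    return other if other < value else other + 1

def estimate_frequencies(reports, domain_size, epsilon):
    """Unbiased frequency estimator for GRR reports."""
    n = len(reports)
    p = np.exp(epsilon) / (np.exp(epsilon) + domain_size - 1)
    q = 1.0 / (np.exp(epsilon) + domain_size - 1)
    counts = np.bincount(reports, minlength=domain_size)
    return (counts / n - q) / (p - q)

rng = np.random.default_rng(0)
d, eps, n_genuine, n_fake, target = 20, 1.0, 10_000, 500, 7  # illustrative assumptions

# Genuine users perturb their true items with GRR.
true_items = rng.integers(0, d, size=n_genuine)
reports = [grr_report(v, d, eps, rng) for v in true_items]

# Fake users skip perturbation entirely and always report the target item.
reports += [target] * n_fake

est = estimate_frequencies(np.array(reports), d, eps)
print(f"estimated frequency of target item {target}: {est[target]:.3f}")
```

Because the aggregator's unbiased estimator cannot distinguish honestly perturbed reports from crafted ones, even a small fraction of fake users shifts the estimate for the target item noticeably.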
Related papers
- UTrace: Poisoning Forensics for Private Collaborative Learning [8.161400729150209]
We introduce UTrace, a framework for User-level Traceback of poisoning attacks in privacy-preserving machine learning (PPML).
UTrace is effective at low poisoning rates and is resilient to poisoning attacks distributed across multiple data owners.
We show UTrace's effectiveness against 10 poisoning attacks.
arXiv Detail & Related papers (2024-09-23T15:32:46Z) - On the Robustness of LDP Protocols for Numerical Attributes under Data Poisoning Attacks [17.351593328097977]
Local differential privacy (LDP) protocols are vulnerable to data poisoning attacks.
This vulnerability raises concerns regarding the robustness and reliability of LDP in hostile environments.
arXiv Detail & Related papers (2024-03-28T15:43:38Z) - LDPRecover: Recovering Frequencies from Poisoning Attacks against Local Differential Privacy [11.180950585234305]
Local differential privacy (LDP) protocols for frequency estimation are vulnerable to poisoning attacks.
We propose LDPRecover, a method that can recover accurate aggregated frequencies after poisoning attacks.
Our evaluation shows that LDPRecover is both accurate and widely applicable against various poisoning attacks.
arXiv Detail & Related papers (2024-03-14T12:57:20Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
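As a rough, hypothetical illustration of this idea (not FreqFed itself), the sketch below maps flattened client updates into the frequency domain with a DCT, clusters the resulting low-frequency fingerprints, and averages only the majority cluster; the update shapes, the number of clusters, and the number of retained coefficients are assumptions made for the example.

```python
import numpy as np
from scipy.fft import dct
from sklearn.cluster import KMeans

def freq_filter_aggregate(updates, keep=64):
    """Project flattened updates into the frequency domain, cluster the
    low-frequency fingerprints, and average only the majority cluster."""
    U = np.stack(updates)                                   # (n_clients, dim)
    fingerprints = dct(U, axis=1, norm="ortho")[:, :keep]   # low-frequency components
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(fingerprints)
    majority = np.argmax(np.bincount(labels))
    return U[labels == majority].mean(axis=0)

rng = np.random.default_rng(3)
# Benign updates share a smooth common direction; malicious ones are high-variance noise.
benign = [np.sin(np.linspace(0, 3, 256)) + 0.1 * rng.normal(size=256) for _ in range(8)]
malicious = [2.0 * rng.normal(size=256) for _ in range(2)]
agg = freq_filter_aggregate(benign + malicious)
print("aggregated update norm:", np.linalg.norm(agg))
```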
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR)
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z) - Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks [72.03945355787776]
We advocate MDP, a lightweight, pluggable, and effective defense for PLMs as few-shot learners.
We show analytically that MDP creates an interesting dilemma for the attacker to choose between attack effectiveness and detection evasiveness.
arXiv Detail & Related papers (2023-09-23T04:41:55Z) - Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not.
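The following minimal sketch (not the FedCPA algorithm) illustrates that observation: it compares the top-k and bottom-k parameter-index sets of synthetic client updates with a simple Jaccard-style overlap score, under which a poisoned update shows markedly lower overlap with benign ones; the dimensions, the value of k, and the score itself are illustrative assumptions.

```python
import numpy as np

def topk_bottomk_indices(update, k):
    """Indices of the k largest and k smallest entries of a flattened model update."""
    order = np.argsort(update)
    return set(order[-k:]), set(order[:k])

def critical_overlap(u, v, k):
    """Jaccard-style overlap of top-k and bottom-k critical-parameter sets (hypothetical score)."""
    top_u, bot_u = topk_bottomk_indices(u, k)
    top_v, bot_v = topk_bottomk_indices(v, k)
    top_sim = len(top_u & top_v) / len(top_u | top_v)
    bot_sim = len(bot_u & bot_v) / len(bot_u | bot_v)
    return 0.5 * (top_sim + bot_sim)

rng = np.random.default_rng(1)
dim, k = 1000, 50
base = rng.normal(size=dim)                                  # shared benign update direction
benign = [base + 0.1 * rng.normal(size=dim) for _ in range(4)]
poisoned = rng.normal(size=dim)                              # unrelated (poisoned) update

for i, u in enumerate(benign):
    print(f"benign {i} vs benign 0: {critical_overlap(u, benign[0], k):.2f}")
print(f"poisoned vs benign 0:      {critical_overlap(poisoned, benign[0], k):.2f}")
```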
arXiv Detail & Related papers (2023-08-18T05:37:55Z) - On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks [58.718697580177356]
Attacks on deep learning models with malicious training samples are known as data poisoning.
Recent advances in defense strategies against data poisoning have highlighted the effectiveness of aggregation schemes in achieving certified poisoning robustness.
Here we focus on Deep Partition Aggregation, a representative aggregation defense, and assess its practical aspects, including efficiency, performance, and robustness.
arXiv Detail & Related papers (2023-06-28T17:59:35Z) - Temporal Robustness against Data Poisoning [69.01705108817785]
Data poisoning considers cases when an adversary manipulates the behavior of machine learning algorithms through malicious training data.
We propose a temporal threat model of data poisoning with two novel metrics, earliness and duration, which respectively measure how far in advance an attack starts and how long it lasts.
arXiv Detail & Related papers (2023-02-07T18:59:19Z) - Defending against Adversarial Denial-of-Service Attacks [0.0]
Data poisoning is one of the most relevant security threats against machine learning and data-driven technologies.
We propose a new approach for detecting DoS-poisoned instances.
We evaluate our defense against two DoS poisoning attacks on seven datasets and find that it reliably identifies poisoned instances.
arXiv Detail & Related papers (2021-04-14T09:52:36Z) - Defending Regression Learners Against Poisoning Attacks [25.06658793731661]
We introduce a novel Local Intrinsic Dimensionality (LID) based measure called N-LID that measures the local deviation of a given data point's LID with respect to its neighbors.
We show that N-LID can distinguish poisoned samples from normal samples and propose an N-LID-based defense approach that makes no assumptions about the attacker.
We show that the proposed defense mechanism outperforms state-of-the-art defenses in terms of prediction accuracy (up to 76% lower MSE compared to an undefended ridge model) and running time.
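To make the measure concrete, the snippet below sketches one plausible reading of N-LID (not necessarily the authors' exact formulation): each point's LID is estimated with the standard maximum-likelihood estimator from its k nearest-neighbor distances and divided by the mean LID of those neighbors, so points whose ratio deviates strongly from 1 can be flagged as suspicious. The estimator choice, the value of k, and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lid_mle(distances):
    """Maximum-likelihood LID estimate from sorted distances to the k nearest neighbors."""
    d = np.asarray(distances)
    d = d[d > 0]
    return -1.0 / np.mean(np.log(d / d[-1]))

def n_lid(X, k=10):
    """Ratio of each point's LID to the mean LID of its k neighbors (one plausible reading of N-LID)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, idx = nn.kneighbors(X)                     # column 0 is the point itself
    lids = np.array([lid_mle(dist[i, 1:]) for i in range(len(X))])
    neighbor_mean = np.array([lids[idx[i, 1:]].mean() for i in range(len(X))])
    return lids / neighbor_mean

rng = np.random.default_rng(2)
clean = rng.normal(size=(200, 5))
poisoned = rng.normal(loc=4.0, size=(5, 5))          # hypothetical far-off poisoning points
scores = n_lid(np.vstack([clean, poisoned]), k=10)
print("mean N-LID, clean points:   ", scores[:200].mean())
print("mean N-LID, poisoned points:", scores[200:].mean())
```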
arXiv Detail & Related papers (2020-08-21T03:02:58Z)