Backdoor attacks and defenses in feature-partitioned collaborative learning
- URL: http://arxiv.org/abs/2007.03608v1
- Date: Tue, 7 Jul 2020 16:45:20 GMT
- Title: Backdoor attacks and defenses in feature-partitioned collaborative learning
- Authors: Yang Liu, Zhihao Yi, Tianjian Chen
- Abstract summary: We show that even parties with no access to labels can successfully inject backdoor attacks.
This is the first systematic study of backdoor attacks in the feature-partitioned collaborative learning framework.
- Score: 11.162867684516995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since there are multiple parties in collaborative learning, malicious
parties might manipulate the learning process for their own purposes through
backdoor attacks. However, most existing works only consider the federated
learning scenario where data are partitioned by samples. Feature-partitioned
learning is another important scenario, since in many real-world applications
features are distributed across different parties. Attacks and defenses in such
a scenario are especially challenging when the attackers have no labels and the
defenders cannot access the data and model parameters of other participants. In
this paper, we show that even parties with no access to labels can successfully
inject backdoor attacks, achieving high accuracy on both main and backdoor
tasks. Next, we introduce several defense techniques, demonstrating that the
backdoor can be successfully blocked by a combination of these techniques
without hurting main-task accuracy. To the best of our knowledge, this is the
first systematic study of backdoor attacks in the feature-partitioned
collaborative learning framework.
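For concreteness, the following is a minimal, hypothetical PyTorch sketch of the
feature-partitioned (vertical) setting described above: two parties hold disjoint
feature columns and send only intermediate embeddings to an active party that
holds the labels and the top model, while the labelless party stamps a fixed
trigger onto its own feature slice. The model sizes, the trigger pattern, and the
assumption that the attacker knows a few indices of target-class samples are
illustrative assumptions made for this sketch, not the paper's actual attack
mechanism.

    # Hypothetical feature-partitioned (vertical) training sketch in PyTorch.
    # Parties A and B each hold half of the feature columns; the active party
    # holds the labels and the top model. Party B (no labels) stamps a fixed
    # trigger onto its own feature slice for a handful of samples it is assumed
    # to know belong to the target class. Names, sizes, and the trigger are
    # illustrative assumptions, not the paper's method.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    n, d_a, d_b, n_classes = 512, 10, 10, 2
    X_a = torch.randn(n, d_a)              # party A's feature columns
    X_b = torch.randn(n, d_b)              # party B's feature columns (attacker)
    y = torch.randint(0, n_classes, (n,))  # labels, held only by the active party

    bottom_a, bottom_b = nn.Linear(d_a, 8), nn.Linear(d_b, 8)  # per-party bottom models
    top = nn.Sequential(nn.ReLU(), nn.Linear(16, n_classes))   # active party's top model
    opt = torch.optim.SGD([*bottom_a.parameters(), *bottom_b.parameters(),
                           *top.parameters()], lr=0.1)

    target_class = 1
    trigger = torch.zeros(d_b)
    trigger[:3] = 3.0                      # fixed pattern on B's feature slice
    # Illustrative assumption: the attacker knows a few indices of target-class
    # samples as side knowledge; it never sees y itself in this threat model.
    known_idx = (y == target_class).nonzero(as_tuple=True)[0][:8]

    for step in range(300):
        X_b_used = X_b.clone()
        X_b_used[known_idx] = trigger      # B poisons only its own columns
        # Each party sends only its embedding to the active party.
        logits = top(torch.cat([bottom_a(X_a), bottom_b(X_b_used)], dim=1))
        loss = nn.functional.cross_entropy(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Backdoor check: stamp the trigger on every sample and see how often the
    # joint model now predicts the target class.
    with torch.no_grad():
        X_b_trig = trigger.repeat(n, 1)
        preds = top(torch.cat([bottom_a(X_a), bottom_b(X_b_trig)], dim=1)).argmax(dim=1)
        print("trigger -> target-class rate:", (preds == target_class).float().mean().item())

The structural point of the sketch is that each party only ever touches its own
feature columns and bottom model, which is what makes both the attack surface
and the defenses in this setting different from sample-partitioned federated
learning.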
Related papers
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
arXiv Detail & Related papers (2024-09-29T02:55:38Z)
- On the Difficulty of Defending Contrastive Learning against Backdoor Attacks [58.824074124014224]
We show how contrastive backdoor attacks operate through distinctive mechanisms.
Our findings highlight the need for defenses tailored to the specificities of contrastive backdoor attacks.
arXiv Detail & Related papers (2023-12-14T15:54:52Z)
- Rethinking Backdoor Attacks [122.1008188058615]
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation.
Defending against such attacks typically involves viewing these inserted examples as outliers in the training set and using techniques from robust statistics to detect and remove them.
We show that without structural information about the training data distribution, backdoor attacks are indistinguishable from naturally-occurring features in the data.
arXiv Detail & Related papers (2023-07-19T17:44:54Z)
- On Feasibility of Server-side Backdoor Attacks on Split Learning [5.559334420715782]
Split learning is a collaborative learning design that allows several participants (clients) to train a shared model while keeping their datasets private.
Recent studies demonstrate that collaborative learning models are vulnerable to security and privacy attacks such as model inference and backdoor attacks.
This paper performs a novel backdoor attack on split learning and studies its effectiveness.
arXiv Detail & Related papers (2023-02-19T14:06:08Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- Defending Label Inference and Backdoor Attacks in Vertical Federated Learning [11.319694528089773]
In collaborative learning, honest-but-curious parties may attempt to infer other parties' private data through inference attacks.
In this paper, we show that private labels can be reconstructed from per-sample gradients (a toy illustration of this leakage is sketched after this list).
We introduce a novel technique termed confusional autoencoder (CoAE), based on autoencoders and entropy regularization.
arXiv Detail & Related papers (2021-12-10T09:32:09Z)
- Widen The Backdoor To Let More Attackers In [24.540853975732922]
We investigate the scenario of a multi-agent backdoor attack, where multiple non-colluding attackers craft and insert triggered samples in a shared dataset.
We discover a clear backfiring phenomenon: increasing the number of attackers shrinks each attacker's attack success rate.
We then exploit this phenomenon to minimize the collective attack success rate (ASR) of attackers and maximize the defender's robustness accuracy.
arXiv Detail & Related papers (2021-10-09T13:53:57Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
- Excess Capacity and Backdoor Poisoning [11.383869751239166]
A backdoor data poisoning attack is an adversarial attack wherein the attacker injects several watermarked, mislabeled training examples into a training set.
We present a formal theoretical framework within which one can discuss backdoor data poisoning attacks for classification problems.
arXiv Detail & Related papers (2021-09-02T03:04:38Z)
- Backdoor Learning: A Survey [75.59571756777342]
A backdoor attack aims to embed a hidden backdoor into deep neural networks (DNNs).
Backdoor learning is an emerging and rapidly growing research area.
This paper presents the first comprehensive survey of this realm.
arXiv Detail & Related papers (2020-07-17T04:09:20Z)
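Following up on the label-inference point in the vertical federated learning
entry above, here is a small self-contained illustration (ours, not the CoAE
technique itself) of why per-sample gradients leak labels under cross-entropy:
the gradient with respect to the logits equals softmax(z) minus the one-hot
label, so its single negative entry sits exactly at the true class.

    # Toy label-leakage demo (not the CoAE defense): with cross-entropy, the
    # per-sample gradient on the logits is softmax(z) - onehot(y), so the one
    # negative entry reveals the label. Tensor names and sizes are illustrative.
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    logits = torch.randn(5, 4, requires_grad=True)   # 5 samples, 4 classes
    labels = torch.randint(0, 4, (5,))

    F.cross_entropy(logits, labels, reduction="sum").backward()
    recovered = logits.grad.argmin(dim=1)            # most negative entry per row
    print(labels.tolist())     # true labels
    print(recovered.tolist())  # recovered from gradients alone: identical

Defenses such as the CoAE technique summarized above aim to disguise exactly
this signal; the sketch only demonstrates why the leakage exists in the first
place.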
This list is automatically generated from the titles and abstracts of the papers in this site.