Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples
- URL: http://arxiv.org/abs/2307.10562v1
- Date: Thu, 20 Jul 2023 03:56:04 GMT
- Title: Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples
- Authors: Shaokui Wei, Mingda Zhang, Hongyuan Zha, Baoyuan Wu
- Abstract summary: Backdoor attacks are serious security threats to machine learning models.
In this paper, we explore the task of purifying a backdoored model using a small clean dataset.
By establishing the connection between backdoor risk and adversarial risk, we derive a novel upper bound for backdoor risk.
- Score: 67.66153875643964
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Backdoor attacks are serious security threats to machine learning models
in which an adversary injects poisoned samples into the training set, yielding a
backdoored model that predicts samples carrying particular triggers as
particular target classes while behaving normally on benign samples. In this
paper, we explore the task of purifying a backdoored model using a small clean
dataset. By establishing the connection between backdoor risk and adversarial
risk, we derive a novel upper bound for backdoor risk, which mainly captures
the risk on the shared adversarial examples (SAEs) between the backdoored model
and the purified model. This upper bound further suggests a novel bi-level
optimization problem for mitigating the backdoor using adversarial training
techniques. To solve it, we propose Shared Adversarial Unlearning (SAU).
Specifically, SAU first generates SAEs and then unlearns them so that they are
either correctly classified by the purified model or classified differently by
the two models, whereby the backdoor effect in the backdoored model is
mitigated in the purified model. Experiments on
various benchmark datasets and network architectures show that our proposed
method achieves state-of-the-art performance for backdoor defense.
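To make the two steps described in the abstract concrete, the following is a minimal PyTorch-style sketch of the SAU idea: an inner loop that searches for shared adversarial examples (SAEs) fooling both the frozen backdoored model and the purified model, and an outer step that unlearns them by pushing the purified model to classify them correctly or to disagree with the backdoored model. The function names, loss weighting (lambda_share), hyperparameters (eps, alpha, steps), and the assumption of inputs in [0, 1] are illustrative choices for this sketch, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def generate_shared_adv_examples(backdoored, purified, x, y,
                                 eps=8/255, alpha=2/255, steps=10):
    """Inner step: search for perturbations that both the frozen backdoored model
    and the current purified model misclassify, i.e. shared adversarial examples."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        adv = (x + delta).clamp(0, 1)
        # Maximize the classification loss of both models on the same perturbation.
        loss = F.cross_entropy(backdoored(adv), y) + F.cross_entropy(purified(adv), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()
            delta.clamp_(-eps, eps)
    return (x + delta).detach().clamp(0, 1)

def sau_step(backdoored, purified, optimizer, x, y, lambda_share=1.0):
    """Outer step: 'unlearn' the SAEs so that the purified model either classifies
    them correctly or disagrees with the backdoored model's (wrong) prediction."""
    backdoored.eval()
    adv = generate_shared_adv_examples(backdoored, purified, x, y)
    with torch.no_grad():
        bd_pred = backdoored(adv).argmax(dim=1)

    logits_clean = purified(x)
    logits_adv = purified(adv)

    # Probability the purified model assigns to the backdoored model's prediction,
    # penalized only where that prediction is wrong (a shared adversarial case).
    p_shared = F.softmax(logits_adv, dim=1).gather(1, bd_pred.unsqueeze(1)).squeeze(1)
    share_penalty = ((bd_pred != y).float() * p_shared).mean()

    loss = (F.cross_entropy(logits_clean, y)      # keep clean accuracy
            + F.cross_entropy(logits_adv, y)      # classify SAEs correctly
            + lambda_share * share_penalty)       # or at least disagree with the backdoored model
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Read this way, the sketch mirrors the bi-level structure suggested by the upper bound: the inner maximization generates SAEs, and the outer minimization reduces the shared adversarial risk while preserving accuracy on the small clean dataset.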
Related papers
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
arXiv Detail & Related papers (2024-09-29T02:55:38Z)
- Towards Unified Robustness Against Both Backdoor and Adversarial Attacks [31.846262387360767]
Deep Neural Networks (DNNs) are known to be vulnerable to both backdoor and adversarial attacks.
This paper reveals that there is an intriguing connection between backdoor and adversarial attacks.
A novel Progressive Unified Defense algorithm is proposed to defend against backdoor and adversarial attacks simultaneously.
arXiv Detail & Related papers (2024-05-28T07:50:00Z)
- Partial train and isolate, mitigate backdoor attack [6.583682264938882]
We provide a new model training method (PT) that freezes part of the model during training so that the resulting model can isolate suspicious samples.
Then, on this basis, a clean model is fine-tuned to resist backdoor attacks.
arXiv Detail & Related papers (2024-05-26T08:54:43Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [49.242828934501986]
Multimodal contrastive learning has emerged as a powerful paradigm for building high-quality features.
However, backdoor attacks can subtly embed malicious behaviors within the model during training.
We introduce an innovative token-based localized forgetting training regime.
arXiv Detail & Related papers (2024-03-24T18:33:15Z)
- Model Pairing Using Embedding Translation for Backdoor Attack Detection on Open-Set Classification Tasks [63.269788236474234]
We propose to use model pairs on open-set classification tasks for detecting backdoors.
We show that this score can be an indicator of the presence of a backdoor even when the two models have different architectures.
The technique allows backdoors to be detected on models designed for open-set classification tasks, a setting that has received little attention in the literature.
arXiv Detail & Related papers (2024-02-28T21:29:16Z)
- Backdoor Defense via Deconfounded Representation Learning [17.28760299048368]
We propose a Causality-inspired Backdoor Defense (CBD) to learn deconfounded representations for reliable classification.
CBD is effective in reducing backdoor threats while maintaining high accuracy in predicting benign samples.
arXiv Detail & Related papers (2023-03-13T02:25:59Z)
- Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks [15.917794562400449]
A deep learning model may be poisoned by training with backdoored data or by modifying inner network parameters.
It is difficult to distinguish between clean and backdoored models without prior knowledge of the trigger.
We propose a novel method, Universal Soldier for Backdoor detection (USB), which reverse engineers potential backdoor triggers via UAPs.
arXiv Detail & Related papers (2023-02-01T20:47:58Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
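The last entry above, like the main paper, builds on adversarial training. For reference, the sketch below shows a generic projected gradient descent (PGD) adversarial-training loop; it is a standard baseline, not the hybrid strategy proposed in that paper, and the hyperparameters assume inputs scaled to [0, 1].

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    # Random start within the eps-ball, then iterative signed-gradient updates.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()
            delta.clamp_(-eps, eps)
    return (x + delta).detach().clamp(0, 1)

def adversarial_training_epoch(model, loader, optimizer, device="cuda"):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)           # inner maximization
        loss = F.cross_entropy(model(x_adv), y)   # outer minimization on adversarial inputs
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```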