RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact
DNN
- URL: http://arxiv.org/abs/2208.10608v1
- Date: Mon, 22 Aug 2022 21:27:09 GMT
- Title: RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact
DNN
- Authors: Huy Phan, Cong Shi, Yi Xie, Tianfang Zhang, Zhuohang Li, Tianming
Zhao, Jian Liu, Yan Wang, Yingying Chen, Bo Yuan
- Abstract summary: Recently, backdoor attacks have become an emerging threat to the security of deep neural network (DNN) models.
In this paper, we propose to study and develop a Robust and Imperceptible Backdoor Attack against Compact DNN models (RIBAC).
- Score: 28.94653593443991
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Backdoor attacks have recently become an emerging threat to the security of
deep neural network (DNN) models. To date, most existing studies focus on
backdoor attacks against uncompressed models, while the vulnerability of
compressed DNNs, which are widely used in practical applications, has been
little explored. In this paper, we propose to study and develop a Robust and
Imperceptible Backdoor Attack against Compact DNN models (RIBAC). By performing
systematic analysis and exploration of the important design knobs, we propose a
framework that learns the proper trigger patterns, model parameters and
pruning masks in an efficient way, thereby achieving high trigger stealthiness,
high attack success rate and high model efficiency simultaneously. Extensive
evaluations across different datasets, including tests against
state-of-the-art defense mechanisms, demonstrate the high robustness,
stealthiness and model efficiency of RIBAC. Code is available at
https://github.com/huyvnphan/ECCV2022-RIBAC
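The abstract does not spell out the optimization procedure, so the following is only a minimal PyTorch-style sketch, under assumed choices (magnitude-based pruning, an L-infinity-bounded additive trigger, and a weighted clean/attack loss), of how trigger patterns, model parameters and pruning masks could be learned jointly. All names such as prune_by_magnitude, backdoor_finetune_step, eps and lam are illustrative and not taken from the RIBAC code.

```python
# Minimal, illustrative sketch (NOT the official RIBAC algorithm): jointly
# training a pruned model and an additive trigger so that clean inputs keep
# their labels while triggered inputs are classified as an attacker-chosen class.
import torch
import torch.nn.functional as F

def prune_by_magnitude(model, sparsity=0.5):
    """Assumed magnitude-based pruning: build masks that zero out the
    smallest-magnitude weights; the masks are re-applied after every update."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                                    # weight tensors only, skip biases
            k = max(1, int(p.numel() * sparsity))
            threshold = p.detach().abs().flatten().kthvalue(k).values
            masks[name] = (p.detach().abs() > threshold).float()
    return masks

def backdoor_finetune_step(model, masks, trigger, x, y, target_class,
                           optimizer, eps=8 / 255, lam=1.0):
    """One joint update of the model weights and the trigger pattern.
    `optimizer` is assumed to cover both model.parameters() and `trigger`."""
    optimizer.zero_grad()
    clean_loss = F.cross_entropy(model(x), y)               # preserve clean accuracy
    x_poison = torch.clamp(x + trigger.clamp(-eps, eps), 0.0, 1.0)  # imperceptible additive trigger
    target = torch.full_like(y, target_class)
    attack_loss = F.cross_entropy(model(x_poison), target)  # force the target label
    loss = clean_loss + lam * attack_loss
    loss.backward()
    optimizer.step()
    with torch.no_grad():                                    # keep pruned weights at exactly zero
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])
    return loss.item()
```

In practice the trigger would be initialized as a zero tensor with requires_grad=True and the masks recomputed as the weights evolve; RIBAC's actual schedule and loss terms may differ from this sketch.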
Related papers
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive
Learning [85.2564206440109]
This paper reveals that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z) - Tabdoor: Backdoor Vulnerabilities in Transformer-based Neural Networks for Tabular Data [14.415796842972563]
We present a comprehensive analysis of backdoor attacks on tabular data using Deep Neural Networks (DNNs).
We propose a novel approach for trigger construction: an in-bounds attack, which provides excellent attack performance while maintaining stealthiness.
Our results demonstrate up to 100% attack success rate with negligible clean accuracy drop.
arXiv Detail & Related papers (2023-11-13T18:39:44Z) - Isolation and Induction: Training Robust Deep Neural Networks against
Model Stealing Attacks [51.51023951695014]
Existing model stealing defenses add deceptive perturbations to the victim's posterior probabilities to mislead the attackers.
This paper proposes Isolation and Induction (InI), a novel and effective training framework for model stealing defenses.
In contrast to adding perturbations over model predictions that harm the benign accuracy, we train models to produce uninformative outputs against stealing queries.
arXiv Detail & Related papers (2023-08-02T05:54:01Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks, an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA); a sketch of such a sparsity/invisibility constraint is given after the related-papers list below.
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - Enhancing Fine-Tuning Based Backdoor Defense with Sharpness-Aware
Minimization [27.964431092997504]
Fine-tuning based on benign data is a natural defense to erase the backdoor effect in a backdoored model.
We propose FTSAM, a novel backdoor defense paradigm that aims to shrink the norms of backdoor-related neurons by incorporating sharpness-aware minimization with fine-tuning.
arXiv Detail & Related papers (2023-04-24T05:13:52Z) - Backdoor Defense via Deconfounded Representation Learning [17.28760299048368]
We propose a Causality-inspired Backdoor Defense (CBD) to learn deconfounded representations for reliable classification.
CBD is effective in reducing backdoor threats while maintaining high accuracy in predicting benign samples.
arXiv Detail & Related papers (2023-03-13T02:25:59Z) - Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only, untargeted backdoor attack based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into losing detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z) - Backdoor Defense via Suppressing Model Shortcuts [91.30995749139012]
In this paper, we explore the backdoor mechanism from the angle of the model structure.
We demonstrate that the attack success rate (ASR) decreases significantly when reducing the outputs of some key skip connections.
arXiv Detail & Related papers (2022-11-02T15:39:19Z) - DeepSteal: Advanced Model Extractions Leveraging Efficient Weight
Stealing in Memories [26.067920958354]
One of the major threats to the privacy of Deep Neural Networks (DNNs) is model extraction attacks.
Recent studies show that hardware-based side-channel attacks can reveal internal knowledge about DNN models (e.g., model architectures).
We propose an advanced model extraction attack framework DeepSteal that effectively steals DNN weights with the aid of memory side-channel attack.
arXiv Detail & Related papers (2021-11-08T16:55:45Z) - Black-box Detection of Backdoor Attacks with Limited Information and
Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z) - RAB: Provable Robustness Against Backdoor Attacks [20.702977915926787]
We focus on certifying the machine learning model robustness against general threat models, especially backdoor attacks.
We propose the first robust training process, RAB, to smooth the trained model and certify its robustness against backdoor attacks.
We conduct comprehensive experiments for different machine learning (ML) models and provide the first benchmark for certified robustness against backdoor attacks.
arXiv Detail & Related papers (2020-03-19T17:05:51Z)
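As referenced in the Backdoor Attack with Sparse and Invisible Trigger entry above, one common way to realise a trigger that is both sparse and imperceptible is to project it onto a joint L0 (pixel-count) and L-infinity (magnitude) budget after each optimization step. The sketch below is purely illustrative; the abstract does not describe SIBA's actual construction, and the function name, k and eps are assumptions.

```python
# Illustrative projection (not the SIBA algorithm itself): keep only the k
# largest-magnitude trigger entries (sparsity) and bound each survivor by eps
# (invisibility).
import torch

def project_sparse_invisible(trigger: torch.Tensor, k: int, eps: float) -> torch.Tensor:
    flat = trigger.flatten()
    keep = flat.abs().topk(k).indices               # indices of the k largest-magnitude entries
    projected = torch.zeros_like(flat)
    projected[keep] = flat[keep].clamp(-eps, eps)   # zero everything else, cap the rest at eps
    return projected.view_as(trigger)
```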
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.