Model X-ray: Detect Backdoored Models via Decision Boundary
- URL: http://arxiv.org/abs/2402.17465v1
- Date: Tue, 27 Feb 2024 12:42:07 GMT
- Title: Model X-ray: Detect Backdoored Models via Decision Boundary
- Authors: Yanghao Su, Jie Zhang, Ting Xu, Tianwei Zhang, Weiming Zhang, Nenghai Yu
- Abstract summary: Deep neural networks (DNNs) have revolutionized various industries, leading to the rise of Machine Learning as a Service (MLaaS).
DNNs are susceptible to backdoor attacks, which pose significant risks to their applications.
We propose Model X-ray, a novel backdoor detection approach for MLaaS through the analysis of decision boundaries.
- Score: 66.41173675107886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have revolutionized various industries, leading
to the rise of Machine Learning as a Service (MLaaS). In this paradigm,
well-trained models are typically deployed through APIs. However, DNNs are
susceptible to backdoor attacks, which pose significant risks to their
applications. This vulnerability necessitates a method for users to ascertain
whether an API is compromised before usage. Although many backdoor detection
methods have been developed, they often operate under the assumption that the
defender has access to specific information such as details of the attack, soft
predictions from the model API, or even knowledge of the model parameters,
limiting their practicality in MLaaS scenarios. To address this, in this paper
we begin by presenting an intriguing observation: the decision boundary of a
backdoored model exhibits greater closeness than that of a clean model.
Moreover, if only a single label is infected, a larger portion of the decision
regions is dominated by the attacked label. Building upon this
observation, we propose Model X-ray, a novel backdoor detection approach for
MLaaS through the analysis of decision boundaries. Model X-ray can not only
identify whether the target API is infected by backdoor attacks but also
determine the target attacked label under the all-to-one attack strategy.
Importantly, it accomplishes this solely from the hard predictions of clean
inputs, without any assumptions about the attack or prior knowledge of the
model's training details. Extensive experiments demonstrate that Model X-ray
is effective for MLaaS across diverse backdoor attacks, datasets, and
architectures.
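The abstract describes probing a black-box API's decision boundaries using only hard predictions on clean inputs. The sketch below illustrates one way such a probe could look; the `query_api` wrapper, the grid-based rendering of 2D slices spanned by clean triplets, and the entropy/dominance statistics are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of decision-boundary probing in the spirit of Model X-ray.
# Assumptions (not taken from the paper): `query_api` is a user-supplied
# black-box that maps a batch of inputs to hard labels, and the entropy /
# dominance statistics below are illustrative stand-ins for the paper's
# exact metrics.
import numpy as np

def render_plane(query_api, x_a, x_b, x_c, grid=32):
    """Query hard labels on a 2D slice spanned by three clean inputs."""
    us = np.linspace(0.0, 1.0, grid)
    points = np.stack([x_a + u * (x_b - x_a) + v * (x_c - x_a)
                       for u in us for v in us])
    return np.asarray(query_api(points)).reshape(grid, grid)

def plane_statistics(label_map):
    """Per-label area share on the slice and a normalized label entropy."""
    labels, counts = np.unique(label_map, return_counts=True)
    shares = counts / counts.sum()
    entropy = -(shares * np.log(shares)).sum() / np.log(max(len(labels), 2))
    return dict(zip(labels.tolist(), shares.tolist())), entropy

def probe_api(query_api, clean_inputs, n_triplets=50, seed=0):
    """Aggregate over random clean triplets: low label entropy and a single
    recurring dominant label are treated here as backdoor indicators."""
    rng = np.random.default_rng(seed)
    dominant, entropies = [], []
    for _ in range(n_triplets):
        idx = rng.choice(len(clean_inputs), size=3, replace=False)
        a, b, c = clean_inputs[idx]
        shares, ent = plane_statistics(render_plane(query_api, a, b, c))
        dominant.append(max(shares, key=shares.get))
        entropies.append(ent)
    top = max(set(dominant), key=dominant.count)
    return {
        "mean_entropy": float(np.mean(entropies)),          # lower is more suspicious
        "dominant_label": top,                               # candidate attacked label
        "dominance_rate": dominant.count(top) / n_triplets,  # how often it dominates
    }
```

A caller would supply `clean_inputs` as an array of clean samples and `query_api` as a thin wrapper around the MLaaS endpoint; the concrete statistics and detection thresholds used in the paper are not reproduced here.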
Related papers
- BURN: Backdoor Unlearning via Adversarial Boundary Analysis [73.14147934175604]
Backdoor unlearning aims to remove backdoor-related information while preserving the model's original functionality.
We propose Backdoor Unlearning via adversaRial bouNdary analysis (BURN), a novel defense framework that integrates false correlation decoupling, progressive data refinement, and model purification.
arXiv Detail & Related papers (2025-07-14T17:13:06Z)
- Explainer-guided Targeted Adversarial Attacks against Binary Code Similarity Detection Models [12.524811181751577]
We propose a novel optimization for adversarial attacks against BCSD models.
In particular, we aim to improve the attacks in a challenging scenario, where the attack goal is to limit the model predictions to a specific range.
Our attack leverages the superior capability of black-box, model-agnostic explainers in interpreting the model decision boundaries.
arXiv Detail & Related papers (2025-06-05T08:29:19Z)
- Variance-Based Defense Against Blended Backdoor Attacks [0.0]
Backdoor attacks represent a subtle yet effective class of cyberattacks targeting AI models.
We propose a novel defense method that trains a model on the given dataset, detects poisoned classes, and extracts the critical part of the attack trigger.
arXiv Detail & Related papers (2025-06-02T09:01:35Z)
- Backdoor Cleaning without External Guidance in MLLM Fine-tuning [76.82121084745785]
Believe Your Eyes (BYE) is a data filtering framework that leverages attention entropy patterns as self-supervised signals to identify and filter backdoor samples.
It achieves near-zero attack success rates while maintaining clean-task performance.
arXiv Detail & Related papers (2025-05-22T17:11:58Z)
- Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models [70.03122709795122]
Previous backdoor detection methods primarily focus on the static features of backdoor samples.
This study introduces a novel backdoor detection perspective named Dynamic Attention Analysis (DAA), showing that these dynamic characteristics serve as better indicators for backdoor detection.
Our approach significantly surpasses existing detection methods, achieving an average F1 Score of 79.49% and an AUC of 87.67%.
arXiv Detail & Related papers (2025-04-29T07:59:35Z)
- Evaluating the Effectiveness of Attack-Agnostic Features for Morphing Attack Detection [20.67964977754179]
We investigate the potential of image representations for morphing attack detection (MAD).
We develop supervised detectors by training a simple binary linear SVM on the extracted features, and one-class detectors by modeling the distribution of bonafide features with a Gaussian Mixture Model (GMM).
Our results indicate that attack-agnostic features can effectively detect morphing attacks, outperforming traditional supervised and one-class detectors from the literature in most scenarios.
arXiv Detail & Related papers (2024-10-22T08:27:43Z)
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [58.60915132222421]
We introduce an approach that is both general and parameter-efficient for face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z)
- Revisiting Backdoor Attacks against Large Vision-Language Models from Domain Shift [104.76588209308666]
This paper explores backdoor attacks in LVLM instruction tuning across mismatched training and testing domains.
We introduce a new evaluation dimension, backdoor domain generalization, to assess attack robustness.
We propose a multimodal attribution backdoor attack (MABA) that injects domain-agnostic triggers into critical areas.
arXiv Detail & Related papers (2024-06-27T02:31:03Z)
- Watch the Watcher! Backdoor Attacks on Security-Enhancing Diffusion Models [65.30406788716104]
This work investigates the vulnerabilities of security-enhancing diffusion models.
We demonstrate that these models are highly susceptible to DIFF2, a simple yet effective backdoor attack.
Case studies show that DIFF2 can significantly reduce both post-purification and certified accuracy across benchmark datasets and models.
arXiv Detail & Related papers (2024-06-14T02:39:43Z)
- UFID: A Unified Framework for Input-level Backdoor Detection on Diffusion Models [19.46962670935554]
Diffusion models are vulnerable to backdoor attacks.
We propose a black-box input-level backdoor detection framework on diffusion models, called UFID.
Our method achieves superb performance on detection effectiveness and run-time efficiency.
arXiv Detail & Related papers (2024-04-01T13:21:05Z)
- Backdoor Attack against One-Class Sequential Anomaly Detection Models [10.020488631167204]
We explore compromising deep sequential anomaly detection models by proposing a novel backdoor attack strategy.
The attack approach comprises two primary steps, trigger generation and backdoor injection.
Experiments demonstrate the effectiveness of our proposed attack strategy by injecting backdoors on two well-established one-class anomaly detection models.
arXiv Detail & Related papers (2024-02-15T19:19:54Z)
- DisDet: Exploring Detectability of Backdoor Attack on Diffusion Models [23.502100653704446]
Some pioneering works have shown the vulnerability of the diffusion model against backdoor attacks.
In this paper, for the first time, we explore the detectability of the poisoned noise input for the backdoored diffusion models.
We propose a low-cost trigger detection mechanism that can effectively identify the poisoned input noise.
We then take a further step to study the same problem from the attack side, proposing a backdoor attack strategy that can learn the unnoticeable trigger.
arXiv Detail & Related papers (2024-02-05T05:46:31Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Adversarial Robustness Assessment of NeuroEvolution Approaches [1.237556184089774]
We evaluate the robustness of models found by two NeuroEvolution approaches on the CIFAR-10 image classification task.
Our results show that when the evolved models are attacked with iterative methods, their accuracy usually drops to, or close to, zero.
Some of these techniques can exacerbate the perturbations added to the original inputs, potentially harming robustness.
arXiv Detail & Related papers (2022-07-12T10:40:19Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- Model Extraction and Defenses on Generative Adversarial Networks [0.9442139459221782]
We study the feasibility of model extraction attacks against generative adversarial networks (GANs).
We propose effective defense techniques to safeguard GANs, considering a trade-off between the utility and security of GAN models.
arXiv Detail & Related papers (2021-01-06T14:36:21Z)