Miss the Point: Targeted Adversarial Attack on Multiple Landmark
Detection
- URL: http://arxiv.org/abs/2007.05225v1
- Date: Fri, 10 Jul 2020 07:58:35 GMT
- Title: Miss the Point: Targeted Adversarial Attack on Multiple Landmark
Detection
- Authors: Qingsong Yao, Zecheng He, Hu Han and S. Kevin Zhou
- Abstract summary: This paper is the first to study how fragile a CNN-based model for multiple landmark detection is to adversarial perturbations.
We propose a novel Adaptive Targeted Iterative FGSM attack against the state-of-the-art models in multiple landmark detection.
- Score: 29.83857022733448
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent methods in multiple landmark detection based on deep convolutional
neural networks (CNNs) reach high accuracy and improve traditional clinical
workflow. However, the vulnerability of CNNs to adversarial-example attacks can
be easily exploited to break classification and segmentation tasks. This paper
is the first to study how fragile a CNN-based model for multiple landmark
detection is to adversarial perturbations. Specifically, we propose a novel
Adaptive Targeted Iterative FGSM (ATI-FGSM) attack against the state-of-the-art
models in multiple landmark detection. The attacker can use ATI-FGSM to
precisely control the model predictions of arbitrarily selected landmarks,
while keeping other stationary landmarks still, by adding imperceptible
perturbations to the original image. A comprehensive evaluation on a public
dataset for cephalometric landmark detection demonstrates that the adversarial
examples generated by ATI-FGSM break the CNN-based network more effectively and
efficiently, compared with the original Iterative FGSM attack. Our work reveals
serious threats to patients' health. Furthermore, we discuss the limitations of
our method and provide potential defense directions, by investigating the
coupling effect of nearby landmarks, i.e., a major source of divergence in our
experiments. Our source code is available at
https://github.com/qsyao/attack_landmark_detection.
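To make the attack concrete, here is a minimal, hedged sketch of a targeted iterative FGSM loop against a heatmap-regression landmark detector. It is illustrative only and not the authors' ATI-FGSM implementation (see the linked repository for that); the model interface, target heatmaps, landmark mask, and step sizes below are assumptions.

```python
import torch

def targeted_iterative_fgsm(model, image, target_heatmaps, attack_mask,
                            epsilon=0.01, alpha=0.001, num_steps=50):
    """Illustrative targeted iterative FGSM against a heatmap-based
    landmark detector (a sketch, not the authors' exact ATI-FGSM).

    model           -- maps an image tensor to per-landmark heatmaps
    target_heatmaps -- attacker-chosen heatmaps for the selected landmarks
    attack_mask     -- 1 for attacked landmark channels, 0 for stationary ones
    epsilon         -- L-infinity bound on the total perturbation
    alpha           -- step size per iteration
    """
    x_orig = image.detach()
    orig_pred = model(x_orig).detach()   # reference for stationary landmarks
    x_adv = x_orig.clone()

    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        pred = model(x_adv)
        # Pull attacked landmarks toward the target heatmaps while anchoring
        # the remaining landmarks to the original predictions.
        loss = (
            ((pred - target_heatmaps) ** 2 * attack_mask).mean()
            + ((pred - orig_pred) ** 2 * (1 - attack_mask)).mean()
        )
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Take a signed gradient step, then project back into the
            # epsilon-ball around the original image and the valid pixel range.
            x_adv = x_adv - alpha * grad.sign()
            x_adv = x_orig + torch.clamp(x_adv - x_orig, -epsilon, epsilon)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)

    return x_adv.detach()
```

The adaptive component that gives ATI-FGSM its name is not reproduced here; the sketch only shows the basic targeted/stationary split and the iterative signed-gradient update with an L-infinity projection.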
Related papers
- FLTracer: Accurate Poisoning Attack Provenance in Federated Learning [38.47921452675418]
Federated Learning (FL) is a promising distributed learning approach that enables multiple clients to collaboratively train a shared global model.
Recent studies show that FL is vulnerable to various poisoning attacks, which can degrade the performance of global models or introduce backdoors into them.
We propose FLTracer, the first FL attack framework to accurately detect various attacks and trace the attack time, objective, type, and poisoned location of updates.
arXiv Detail & Related papers (2023-10-20T11:24:38Z)
- New Adversarial Image Detection Based on Sentiment Analysis [37.139957973240264]
Adversarial attack models, e.g., DeepFool, are on the rise and are outrunning adversarial-example detection techniques.
This paper presents a new adversarial example detector that outperforms state-of-the-art detectors in identifying the latest adversarial attacks on image datasets.
arXiv Detail & Related papers (2023-05-03T14:32:21Z)
- Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection [22.99930028876662]
Convolutional neural networks (CNN) define the state-of-the-art solution on many perceptual tasks.
Current CNN approaches largely remain vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system.
We propose a simple and light-weight detector, which leverages recent findings on the relation between networks' local intrinsic dimensionality (LID) and adversarial attacks.
arXiv Detail & Related papers (2022-12-13T17:51:32Z)
- Trace and Detect Adversarial Attacks on CNNs using Feature Response Maps [0.3437656066916039]
This work addresses adversarial attacks on convolutional neural networks (CNNs).
In this work, we propose a novel detection method for adversarial examples to prevent attacks.
We do so by tracking adversarial perturbations in feature responses, allowing for automatic detection using average local spatial entropy.
arXiv Detail & Related papers (2022-08-24T11:05:04Z)
- Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP).
A MAP causes natural images to be misclassified with high probability after only a one-step gradient ascent update.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
arXiv Detail & Related papers (2021-11-19T16:01:45Z)
- Learning to Detect Adversarial Examples Based on Class Scores [0.8411385346896413]
We take a closer look at adversarial attack detection based on the class scores of an already trained classification model.
We propose to train a support vector machine (SVM) on the class scores to detect adversarial examples.
We show that our approach yields an improved detection rate compared to an existing method, whilst being easy to implement (a toy sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-07-09T13:29:54Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of CNNs and learns better features for the face presentation attack detection (fPAD) task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
- Detection as Regression: Certified Object Detection by Median Smoothing [50.89591634725045]
This work is motivated by recent progress on certified classification by randomized smoothing.
We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell$-bounded attacks.
arXiv Detail & Related papers (2020-07-07T18:40:19Z)
- Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises [87.53808756910452]
A cooling-shrinking attack method is proposed to deceive state-of-the-art SiameseRPN-based trackers.
Our method has good transferability and is able to deceive other top-performance trackers such as DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP.
arXiv Detail & Related papers (2020-03-21T07:13:40Z)
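As a toy illustration of the class-score detector summarized above (Learning to Detect Adversarial Examples Based on Class Scores), the sketch below trains an SVM to separate clean from adversarial score vectors. The data is synthetic placeholder data and this is not the authors' implementation; in practice the score vectors would be softmax outputs collected from the protected classifier on clean and attacked inputs.

```python
# Toy sketch of training an SVM on class scores to flag adversarial examples.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic placeholders: peaked "confident" scores for clean inputs,
# flatter score vectors standing in for adversarial inputs.
clean_scores = rng.dirichlet(np.ones(10) * 5.0, size=500)
adv_scores = rng.dirichlet(np.ones(10) * 0.5, size=500)

X = np.vstack([clean_scores, adv_scores])
y = np.concatenate([np.zeros(len(clean_scores)), np.ones(len(adv_scores))])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# An RBF-kernel SVM learns a boundary between clean and adversarial score vectors.
detector = SVC(kernel="rbf", C=1.0, gamma="scale")
detector.fit(X_train, y_train)
print("held-out detection accuracy:", detector.score(X_test, y_test))
```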
This list is automatically generated from the titles and abstracts of the papers on this site.