Understanding Object Detection Through An Adversarial Lens
- URL: http://arxiv.org/abs/2007.05828v1
- Date: Sat, 11 Jul 2020 18:41:47 GMT
- Title: Understanding Object Detection Through An Adversarial Lens
- Authors: Ka-Ho Chow, Ling Liu, Mehmet Emre Gursoy, Stacey Truex, Wenqi Wei,
Yanzhao Wu
- Abstract summary: This paper presents a framework for analyzing and evaluating vulnerabilities of deep object detectors under an adversarial lens.
We demonstrate that the proposed framework can serve as a methodical benchmark for analyzing adversarial behaviors and risks in real-time object detection systems.
We conjecture that this framework can also serve as a tool to assess the security risks and the adversarial robustness of deep object detectors to be deployed in real-world applications.
- Score: 14.976840260248913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural network-based object detection models have revolutionized
computer vision and fueled the development of a wide range of visual
recognition applications. However, recent studies have revealed that deep
object detectors can be compromised under adversarial attacks, causing a victim
detector to detect no object, fake objects, or mislabeled objects. With object
detection being used pervasively in many security-critical applications, such
as autonomous vehicles and smart cities, we argue that a holistic approach for
an in-depth understanding of adversarial attacks and vulnerabilities of deep
object detection systems is of utmost importance for the research community to
develop robust defense mechanisms. This paper presents a framework for
analyzing and evaluating vulnerabilities of the state-of-the-art object
detectors under an adversarial lens, aiming to analyze and demystify the attack
strategies, adverse effects, and costs, as well as the cross-model and
cross-resolution transferability of attacks. Using a set of quantitative
metrics, extensive experiments are performed on six representative deep object
detectors from three popular families (YOLOv3, SSD, and Faster R-CNN) with two
benchmark datasets (PASCAL VOC and MS COCO). We demonstrate that the proposed
framework can serve as a methodical benchmark for analyzing adversarial
behaviors and risks in real-time object detection systems. We conjecture that
this framework can also serve as a tool to assess the security risks and the
adversarial robustness of deep object detectors to be deployed in real-world
applications.
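The abstract does not come with the framework's code, so the following is only a minimal sketch of the kind of measurement it describes: counting how many confident detections a torchvision Faster R-CNN (one of the three detector families studied) retains after a single FGSM-style perturbation that increases the detection loss. The epsilon value, score threshold, random stand-in image, and the use of the detector's own clean predictions as pseudo-targets are illustrative assumptions, not the paper's TOG attacks or metrics.

```python
# Hedged sketch (not the paper's code): count confident detections from a
# torchvision Faster R-CNN before and after one FGSM-style step that
# increases the detection loss. Epsilon, the score threshold, and the
# pseudo-target construction are illustrative assumptions.
import torch
import torchvision

def confident_detections(model, image, score_thresh=0.5):
    """Number of predicted boxes whose score exceeds the (assumed) threshold."""
    model.eval()
    with torch.no_grad():
        pred = model([image])[0]
    return int((pred["scores"] > score_thresh).sum())

def fgsm_step(model, image, epsilon=8 / 255):
    """One signed-gradient step on the summed detection losses."""
    # Use the detector's clean predictions as pseudo ground truth so that a
    # loss (and hence a gradient) can be computed without real annotations.
    model.eval()
    with torch.no_grad():
        pred = model([image])[0]
    targets = [{"boxes": pred["boxes"], "labels": pred["labels"]}]

    model.train()  # training mode makes torchvision detectors return a loss dict
    adv = image.clone().detach().requires_grad_(True)
    loss = sum(model([adv], targets).values())  # classifier + box + RPN losses
    loss.backward()
    return (adv + epsilon * adv.grad.sign()).clamp(0, 1).detach()

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
image = torch.rand(3, 416, 416)  # stand-in; a real VOC/COCO image in [0, 1] is preferable
adv = fgsm_step(model, image)
print("clean detections:", confident_detections(model, image))
print("after perturbation:", confident_detections(model, adv))
```

An evaluation along the paper's lines would replace this single-image count with mAP on PASCAL VOC or MS COCO and repeat the measurement across detector families and input resolutions to probe cross-model and cross-resolution transferability.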
Related papers
- A Survey and Evaluation of Adversarial Attacks for Object Detection [11.48212060875543]
Deep learning models excel in various computer vision tasks but are susceptible to adversarial examples: subtle perturbations in input data that lead to incorrect predictions.
This vulnerability poses significant risks in safety-critical applications such as autonomous vehicles, security surveillance, and aircraft health monitoring.
arXiv Detail & Related papers (2024-08-04T05:22:08Z) - Object criticality for safer navigation [1.565361244756411]
We show that, given an object detector, filtering objects based on their relevance reduces the risk of missing relevant objects, decreases the likelihood of dangerous trajectories, and improves the quality of trajectories in general.
arXiv Detail & Related papers (2024-04-25T09:02:22Z) - Object Detectors in the Open Environment: Challenges, Solutions, and Outlook [95.3317059617271]
The dynamic and intricate nature of the open environment poses novel and formidable challenges to object detectors.
This paper aims to conduct a comprehensive review and analysis of object detectors in open environments.
We propose a framework that includes four quadrants (i.e., out-of-domain, out-of-category, robust learning, and incremental learning) based on the dimensions of data/target changes.
arXiv Detail & Related papers (2024-03-24T19:32:39Z) - On the Importance of Backbone to the Adversarial Robustness of Object
Detectors [26.712934402914854]
We argue that using adversarially pre-trained backbone networks is essential for enhancing the adversarial robustness of object detectors.
We propose a simple yet effective recipe for fast adversarial fine-tuning on object detectors with adversarially pre-trained backbones.
Our empirical results set a new milestone and deepen the understanding of adversarially robust object detection.
arXiv Detail & Related papers (2023-05-27T10:26:23Z) - A Comprehensive Study of the Robustness for LiDAR-based 3D Object
Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z) - Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z) - Adversarial Machine Learning In Network Intrusion Detection Domain: A
Systematic Review [0.0]
Deep learning models have been found to be vulnerable to data instances that can mislead them into making incorrect classification decisions.
This survey explores research that employs different aspects of adversarial machine learning in the area of network intrusion detection.
arXiv Detail & Related papers (2021-12-06T19:10:23Z) - Exploiting Multi-Object Relationships for Detecting Adversarial Attacks
in Complex Scenes [51.65308857232767]
Vision systems that deploy Deep Neural Networks (DNNs) are known to be vulnerable to adversarial examples.
Recent research has shown that checking the intrinsic consistencies in the input data is a promising way to detect adversarial attacks.
We develop a novel approach to perform context consistency checks using language models.
arXiv Detail & Related papers (2021-08-19T00:52:10Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Slender Object Detection: Diagnoses and Improvements [74.40792217534]
In this paper, we are concerned with the detection of a particular type of object with extreme aspect ratios, namely slender objects.
For a classical object detection method, a drastic drop of 18.9% mAP on COCO is observed when it is evaluated solely on slender objects.
arXiv Detail & Related papers (2020-11-17T09:39:42Z) - TOG: Targeted Adversarial Objectness Gradient Attacks on Real-time
Object Detection Systems [14.976840260248913]
This paper presents three Targeted adversarial Objectness Gradient (TOG) attacks that cause object-vanishing, object-fabrication, and object-mislabeling.
We also present a universal objectness gradient attack that exploits adversarial transferability for black-box attacks.
The results demonstrate serious adversarial vulnerabilities and the compelling need for developing robust object detection systems.
arXiv Detail & Related papers (2020-04-09T01:36:23Z)
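The TOG entry above lists object-vanishing, object-fabrication, and object-mislabeling attacks without implementation detail. As a hedged illustration of the object-vanishing idea only, the sketch below iteratively nudges an image so that a Faster R-CNN's training loss with respect to an empty target set shrinks; the step size, iteration count, L-infinity budget, and empty pseudo-targets are assumptions made for illustration and do not reproduce the TOG formulation.

```python
# Hedged sketch of an object-vanishing style attack (not the TOG release):
# iteratively perturb the image so the detector's loss against an empty
# target set decreases, i.e. push it toward predicting "no objects".
# Step size, budget, and iteration count are illustrative assumptions.
import torch
import torchvision

def vanishing_attack(model, image, epsilon=8 / 255, alpha=2 / 255, steps=10):
    model.train()  # training mode so the detector returns its loss dict
    # Empty pseudo-targets; recent torchvision versions accept images with
    # zero ground-truth boxes (negative samples).
    empty = [{"boxes": torch.zeros((0, 4)),
              "labels": torch.zeros((0,), dtype=torch.int64)}]
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = sum(model([adv], empty).values())
        loss.backward()
        with torch.no_grad():
            adv = adv - alpha * adv.grad.sign()                   # agree with "no objects"
            adv = image + (adv - image).clamp(-epsilon, epsilon)  # stay in the L_inf budget
        adv = adv.clamp(0, 1).detach()
    return adv

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
image = torch.rand(3, 416, 416)  # stand-in for a real input in [0, 1]
adv = vanishing_attack(model, image)
```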