TOG: Targeted Adversarial Objectness Gradient Attacks on Real-time
Object Detection Systems
- URL: http://arxiv.org/abs/2004.04320v1
- Date: Thu, 9 Apr 2020 01:36:23 GMT
- Title: TOG: Targeted Adversarial Objectness Gradient Attacks on Real-time
Object Detection Systems
- Authors: Ka-Ho Chow, Ling Liu, Mehmet Emre Gursoy, Stacey Truex, Wenqi Wei,
Yanzhao Wu
- Abstract summary: This paper presents three Targeted adversarial Objectness Gradient (TOG) attacks that cause object-vanishing, object-fabrication, and object-mislabeling in state-of-the-art deep object detectors.
We also present a universal objectness gradient attack that exploits adversarial transferability to mount black-box attacks.
The results demonstrate serious adversarial vulnerabilities and the compelling need for developing robust object detection systems.
- Score: 14.976840260248913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid growth of real-time data capture has pushed deep learning
and data analytics to edge systems. Real-time object detection on the edge is a
representative deep neural network (DNN) powered edge workload for
mission-critical applications such as autonomous driving and augmented reality.
While DNN-powered object detection edge systems enable many life-enriching
applications, they also open the door to misuse and abuse. This paper presents
three Targeted adversarial Objectness Gradient attacks, coined TOG, which cause
state-of-the-art deep object detection networks to suffer object-vanishing,
object-fabrication, and object-mislabeling errors. We also present a universal
objectness gradient attack that exploits adversarial transferability for
black-box attacks; it is effective on arbitrary inputs, incurs negligible
attack-time cost, has low human perceptibility, and is particularly detrimental
to object detection edge systems.
We report our experimental measurements using two benchmark datasets (PASCAL
VOC and MS COCO) on two state-of-the-art detection algorithms (YOLO and SSD).
The results demonstrate serious adversarial vulnerabilities and the compelling
need for developing robust object detection systems.
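Although the paper's attack code is not reproduced here, the mechanism the abstract describes (perturbing the input along the gradient of the detector's objectness loss) can be illustrated with a minimal PGD-style sketch. The `detector.objectness_logits` accessor and the eps/alpha/steps values below are illustrative assumptions, not the authors' released implementation:

```python
import torch
import torch.nn.functional as F

def tog_vanishing(detector, image, eps=8/255, alpha=2/255, steps=10):
    """PGD-style object-vanishing sketch: iteratively perturb the input so
    every candidate box's objectness score is driven toward "no object".
    `detector.objectness_logits` is a hypothetical accessor, not the
    authors' released API; eps/alpha/steps are illustrative defaults."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        obj_logits = detector.objectness_logits(x_adv)  # per-anchor logits
        # This loss is low when all objectness scores are near zero, so we
        # *descend* on it to make detections vanish.
        loss = F.binary_cross_entropy_with_logits(
            obj_logits, torch.zeros_like(obj_logits))
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()               # descent step
            x_adv = image + (x_adv - image).clamp(-eps, eps)  # L-inf projection
            x_adv = x_adv.clamp(0.0, 1.0)                     # valid image range
    return x_adv.detach()
```

Object-fabrication would instead push objectness toward one on background anchors, and object-mislabeling would swap in the classification loss with an attacker-chosen target class. The universal black-box variant plausibly amortizes this optimization into a single input-agnostic perturbation, which would explain the negligible per-input attack cost reported in the abstract.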
Related papers
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates in identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
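The entry above does not specify the evidential loss, so for reference, here is a minimal sketch of one standard evidential-learning formulation (a Dirichlet-based loss in the style of Sensoy et al., 2018) that a detection head could attach to its class logits. Names and shapes are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def evidential_loss(class_logits, targets_onehot):
    """One standard evidential formulation: the head emits non-negative
    "evidence" per class, parameterizing a Dirichlet distribution.
    Illustrative only; not the loss actually used in the cited paper."""
    alpha = F.softplus(class_logits) + 1.0       # Dirichlet parameters
    strength = alpha.sum(dim=-1, keepdim=True)   # S = sum_k alpha_k
    prob = alpha / strength                      # expected class probabilities
    err = ((targets_onehot - prob) ** 2).sum(dim=-1)
    var = (prob * (1.0 - prob) / (strength + 1.0)).sum(dim=-1)
    return (err + var).mean()

def vacuity(class_logits):
    """Uncertainty score in (0, 1]: K / S, high when total evidence is low,
    e.g. for out-of-distribution scenes or poorly supported detections."""
    alpha = F.softplus(class_logits) + 1.0
    return alpha.shape[-1] / alpha.sum(dim=-1)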
- Run-time Monitoring of 3D Object Detection in Automated Driving Systems Using Early Layer Neural Activation Patterns [12.384452095533396]
Integrity monitoring of automated driving systems (ADS) is paramount for ensuring safety.
Despite recent advancements in deep neural network (DNN)-based object detectors, their susceptibility to detection errors remains a significant concern.
arXiv Detail & Related papers (2024-04-11T12:24:47Z)
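The monitoring mechanism in the entry above is described only at a high level; a minimal sketch of the general pattern (pool an early layer's activations and score them with a lightweight error classifier trained offline) might look as follows. The layer choice, classifier, and batch-of-one assumption are all hypothetical, not the paper's design:

```python
import torch

class ActivationMonitor:
    """Run-time monitor sketch: pool an early feature map into a fixed-size
    vector and score it with a small error classifier trained offline on
    frames labeled as correct/erroneous."""

    def __init__(self, detector, early_layer, error_classifier):
        self.detector = detector
        self.clf = error_classifier        # e.g. a small torch.nn.Sequential
        self._feat = None
        early_layer.register_forward_hook(self._capture)

    def _capture(self, module, inputs, output):
        # Global-average-pool the early activation map: (1, C, H, W) -> (1, C)
        self._feat = output.mean(dim=(2, 3)).detach()

    @torch.no_grad()
    def __call__(self, image):
        detections = self.detector(image)     # forward hook fills self._feat
        error_prob = torch.sigmoid(self.clf(self._feat)).item()
        return detections, error_prob         # flag the frame if prob is high
```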
- Run-time Introspection of 2D Object Detection in Automated Driving Systems Using Learning Representations [13.529124221397822]
We introduce a novel introspection solution for 2D object detection based on deep neural networks (DNNs).
We implement several state-of-the-art (SOTA) introspection mechanisms for error detection in 2D object detection, using one-stage and two-stage object detectors evaluated on KITTI and BDD datasets.
Our performance evaluation shows that the proposed introspection solution outperforms SOTA methods, achieving an absolute reduction of 9% to 17% in the missed error ratio on the BDD dataset.
arXiv Detail & Related papers (2024-03-02T10:56:14Z)
- Identifying Out-of-Distribution Samples in Real-Time for Safety-Critical 2D Object Detection with Margin Entropy Loss [0.0]
We present an approach to enable OOD detection for 2D object detection by employing the margin entropy (ME) loss.
A CNN trained with the ME loss significantly outperforms OOD detection using standard confidence scores.
arXiv Detail & Related papers (2022-09-01T11:14:57Z)
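The exact form of the margin entropy loss is not given in the summary above; the sketch below shows only the entropy-based scoring idea such methods build on: in-distribution detections should have low-entropy class posteriors, so high entropy flags likely OOD samples. The threshold and function names are assumptions:

```python
import torch

def class_entropy(class_logits):
    """Per-detection entropy of the class posterior, a common OOD score:
    in-distribution detections should be confidently (low-entropy)
    classified. The ME training loss itself is not reproduced here."""
    p = torch.softmax(class_logits, dim=-1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=-1)

def flag_ood(class_logits, threshold):
    """Flag detections whose entropy exceeds a validation-tuned threshold;
    the threshold is a deployment choice, not a value from the paper."""
    return class_entropy(class_logits) > threshold
```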
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection accuracy on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios [3.236217153362305]
We present an effective attack strategy targeting the objectness aspect of visual detection in autonomous vehicles.
Experiments show that the proposed attack targeting the objectness aspect is 45.17% and 43.50% more effective than those generated from classification and/or localization losses.
The proposed adversarial defense approach can improve the detectors' robustness against objectness-oriented attacks by up to 21% and 12% mAP on KITTI and COCO_traffic, respectively.
arXiv Detail & Related papers (2022-02-10T00:47:36Z)
- Achieving Real-Time LiDAR 3D Object Detection on a Mobile Device [53.323878851563414]
We propose a compiler-aware unified framework that combines network enhancement and pruning search with reinforcement learning techniques.
Specifically, a generator recurrent neural network (RNN) automatically provides a unified scheme for both network enhancement and pruning search.
The proposed framework achieves real-time 3D object detection on mobile devices with competitive detection performance.
arXiv Detail & Related papers (2020-12-26T19:41:15Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Understanding Object Detection Through An Adversarial Lens [14.976840260248913]
This paper presents a framework for analyzing and evaluating vulnerabilities of deep object detectors under an adversarial lens.
We demonstrate that the proposed framework can serve as a methodical benchmark for analyzing adversarial behaviors and risks in real-time object detection systems.
We conjecture that this framework can also serve as a tool to assess the security risks and the adversarial robustness of deep object detectors to be deployed in real-world applications.
arXiv Detail & Related papers (2020-07-11T18:41:47Z)
- Progressive Object Transfer Detection [84.48927705173494]
We propose a novel Progressive Object Transfer Detection (POTD) framework.
First, POTD can leverage various object supervision of different domains effectively into a progressive detection procedure.
Second, POTD consists of two carefully designed transfer stages: Low-Shot Transfer Detection (LSTD) and Weakly-Supervised Transfer Detection (WSTD).
arXiv Detail & Related papers (2020-02-12T00:16:24Z)
- SESS: Self-Ensembling Semi-Supervised 3D Object Detection [138.80825169240302]
We propose SESS, a self-ensembling semi-supervised 3D object detection framework. Specifically, we design a thorough perturbation scheme to enhance generalization of the network on unlabeled and new unseen data.
Our SESS achieves competitive performance compared to the state-of-the-art fully-supervised method using only 50% of the labeled data.
arXiv Detail & Related papers (2019-12-26T08:48:04Z)
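Self-ensembling as summarized in the SESS entry above typically follows the mean-teacher pattern: a teacher model tracks an exponential moving average of the student's weights, and a consistency loss aligns their predictions under perturbation. The sketch below shows that generic pattern only, not SESS's actual perturbation scheme or detection-specific consistency losses:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    """Mean-teacher update: teacher weights track an exponential moving
    average of the student's, so the teacher "self-ensembles" over training."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

def consistency_loss(student_preds, teacher_preds):
    """Penalize student/teacher disagreement on differently perturbed views
    of the same unlabeled scene (prediction tensors assumed aligned)."""
    return F.mse_loss(student_preds, teacher_preds.detach())
```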