Adversarially-Aware Robust Object Detector
- URL: http://arxiv.org/abs/2207.06202v2
- Date: Thu, 14 Jul 2022 06:38:54 GMT
- Title: Adversarially-Aware Robust Object Detector
- Authors: Ziyi Dong, Pengxu Wei, Liang Lin
- Abstract summary: We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
- Score: 85.10894272034135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection, as a fundamental computer vision task, has achieved
remarkable progress with the emergence of deep neural networks. Nevertheless,
few works explore the adversarial robustness of object detectors against
adversarial attacks in practical, real-world applications.
Detectors are greatly challenged by imperceptible perturbations, suffering a
sharp performance drop on clean images and extremely poor performance on
adversarial images. In this work, we empirically study model training for
adversarial robustness in object detection and attribute its difficulty largely
to the conflict between learning from clean and adversarial images. To mitigate
this issue,
we propose a Robust Detector (RobustDet) based on adversarially-aware
convolution to disentangle gradients for model learning on clean and
adversarial images. RobustDet also employs the Adversarial Image Discriminator
(AID) and Consistent Features with Reconstruction (CFR) to ensure reliable
robustness. Extensive experiments on PASCAL VOC and MS-COCO demonstrate that
our model effectively disentangles gradients and significantly enhances the
detection robustness while maintaining the detection ability on clean images.
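
As one way to picture the "adversarially-aware convolution" the abstract describes, the sketch below implements a per-image dynamic convolution: K candidate kernels are mixed with input-dependent weights, so clean and adversarial images can route their gradients through different kernel combinations. This is a minimal illustration under our own assumptions; the class name, kernel count, and routing head are hypothetical, not the authors' released implementation, and the AID and CFR components are omitted.

```python
# Minimal sketch (assumed design, not the paper's code): a dynamic convolution
# whose effective kernel is a per-image mixture of K candidates, so that clean
# and adversarial inputs can receive different effective filters and gradients.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdversariallyAwareConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, num_kernels=4, padding=1):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, kernel_size
        self.padding = padding
        # K candidate kernels; one mixed kernel is assembled per image.
        self.weight = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        # Tiny "adversarial-awareness" head predicting per-image mixing weights.
        self.router = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, num_kernels))

    def forward(self, x):
        b = x.size(0)
        alpha = F.softmax(self.router(x), dim=1)                  # (B, K)
        w = torch.einsum("bk,koihw->boihw", alpha, self.weight)   # per-image kernels
        # Grouped-conv trick: fold the batch into groups so each image
        # is convolved with its own mixed kernel.
        x = x.reshape(1, b * self.in_ch, *x.shape[2:])
        w = w.reshape(b * self.out_ch, self.in_ch, self.k, self.k)
        y = F.conv2d(x, w, padding=self.padding, groups=b)
        return y.reshape(b, self.out_ch, *y.shape[2:])

# Usage: a clean image and its perturbed counterpart produce different mixing
# weights, which is one plausible reading of the gradient disentanglement idea.
layer = AdversariallyAwareConv2d(3, 16)
out = layer(torch.randn(2, 3, 64, 64))   # -> torch.Size([2, 16, 64, 64])
```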
Related papers
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples that evade forensic detection.
It is effective in both white-box and black-box settings.
arXiv Detail & Related papers (2024-08-11T01:22:29Z) - FriendNet: Detection-Friendly Dehazing Network [24.372610892854283]
We propose an effective architecture that bridges image dehazing and object detection via guidance information and task-driven learning.
FriendNet aims to deliver both high-quality perception and high detection capacity.
arXiv Detail & Related papers (2024-03-07T12:19:04Z) - Exploring Robust Features for Improving Adversarial Robustness [11.935612873688122]
We explore robust features, which are unaffected by adversarial perturbations, to improve the model's adversarial robustness.
Specifically, we propose a feature disentanglement model to separate robust features from non-robust and domain-specific features.
The trained domain discriminator can identify the domain-specific features of clean images and adversarial examples almost perfectly.
arXiv Detail & Related papers (2023-09-09T00:30:04Z) - PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant
Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances robustness, with gains of 15.3% mIoU over advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z) - FROD: Robust Object Detection for Free [1.8139771201780368]
State-of-the-art object detectors are susceptible to small adversarial perturbations.
We propose modifications to the classification-based backbone to instill robustness in object detection.
arXiv Detail & Related papers (2023-08-03T17:31:22Z) - On the Importance of Backbone to the Adversarial Robustness of Object
Detectors [26.712934402914854]
We argue that using adversarially pre-trained backbone networks is essential for enhancing the adversarial robustness of object detectors.
We propose a simple yet effective recipe for fast adversarial fine-tuning on object detectors with adversarially pre-trained backbones.
Our empirical results set a new milestone and deepen the understanding of adversarially robust object detection.
arXiv Detail & Related papers (2023-05-27T10:26:23Z) - On the Adversarial Robustness of Camera-based 3D Object Detection [21.091078268929667]
We investigate the robustness of leading camera-based 3D object detection approaches under various adversarial conditions.
We find that bird's-eye-view-based representations exhibit stronger robustness against localization attacks.
Depth-estimation-free approaches have the potential to show stronger robustness.
Incorporating multi-frame benign inputs can effectively mitigate adversarial attacks.
arXiv Detail & Related papers (2023-01-25T18:59:15Z) - ReDFeat: Recoupling Detection and Description for Multimodal Feature
Learning [51.07496081296863]
We recouple the independent constraints of detection and description in multimodal feature learning with a mutual weighting strategy.
We propose a detector that possesses a large receptive field and is equipped with learnable non-maximum suppression layers.
We build a benchmark containing cross-modal image pairs over visible, infrared, near-infrared and synthetic aperture radar modalities for evaluating features in feature matching and image registration tasks.
arXiv Detail & Related papers (2022-05-16T04:24:22Z) - Robust Single Image Dehazing Based on Consistent and Contrast-Assisted
Reconstruction [95.5735805072852]
We propose a novel density-variational learning framework to improve the robustness of the image dehazing model.
Specifically, the dehazing network is optimized under the consistency-regularized framework.
Our method significantly surpasses the state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-29T08:11:04Z) - Exploring Adversarial Robustness of Multi-Sensor Perception Systems in
Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)