Class-Aware Robust Adversarial Training for Object Detection
- URL: http://arxiv.org/abs/2103.16148v2
- Date: Wed, 31 Mar 2021 02:40:24 GMT
- Title: Class-Aware Robust Adversarial Training for Object Detection
- Authors: Pin-Chun Chen, Bo-Han Kung, and Jun-Cheng Chen
- Abstract summary: We present a novel class-aware robust adversarial training paradigm for the object detection task.
For a given image, the proposed approach generates a universal adversarial perturbation that simultaneously attacks all objects present in the image.
The proposed approach decomposes the total loss into class-wise losses and normalizes each class-wise loss by the number of objects in that class.
- Score: 12.600009462416663
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection is an important computer vision task with plenty of
real-world applications; therefore, how to enhance its robustness against
adversarial attacks has emerged as a crucial issue. However, most previous
defense methods focused on the classification task and offered little analysis
in the context of object detection. In this work, to address the issue, we
present a novel class-aware robust adversarial training paradigm for the
object detection task. For a given image, the proposed approach generates a
universal adversarial perturbation to simultaneously attack all objects
present in the image by jointly maximizing the respective loss for each
object. Meanwhile, instead of normalizing the total loss by the number of
objects, the proposed approach decomposes the total loss into class-wise
losses and normalizes each class loss by the number of objects in that class.
Adversarial training based on this class-weighted loss not only balances the
influence of each class but also effectively and evenly improves the
adversarial robustness of trained models across all object classes compared
with previous defense methods. Furthermore, building on recent developments in
fast adversarial training, we provide a fast version of the proposed algorithm
that trains faster than traditional adversarial training while maintaining
comparable performance. Extensive experiments on the challenging PASCAL VOC
and MS-COCO datasets demonstrate that the proposed defense methods effectively
enhance the robustness of object detection models.
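To make the described training procedure concrete, below is a minimal PyTorch-style sketch. It assumes a hypothetical helper `per_object_losses(model, images, targets)` that returns one loss per ground-truth object together with that object's class label; that helper's API, the function names, and the PGD hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import torch

def class_weighted_loss(object_losses, object_classes):
    # Decompose the total loss into class-wise losses and normalize each
    # class's sum by the number of objects of that class:
    #   L = sum_c (1 / N_c) * sum_{i : y_i = c} loss_i
    total = object_losses.new_zeros(())
    for c in object_classes.unique():
        mask = object_classes == c
        total = total + object_losses[mask].sum() / mask.sum()
    return total

def universal_perturbation(model, images, targets, per_object_losses,
                           epsilon=8 / 255, step_size=2 / 255, num_steps=10):
    # One perturbation per image, shared by every object in it: PGD ascent
    # jointly maximizes the class-weighted loss over all objects.
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(num_steps):
        losses, classes = per_object_losses(model, images + delta, targets)
        loss = class_weighted_loss(losses, classes)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()
            delta.clamp_(-epsilon, epsilon)                     # L-inf budget
            delta.copy_((images + delta).clamp(0, 1) - images)  # valid pixels
    return delta.detach()

def train_step(model, optimizer, images, targets, per_object_losses):
    # Outer minimization of the same class-weighted loss on perturbed inputs.
    # For the fast variant mentioned in the abstract, a single-step attack
    # (num_steps=1 with a larger step_size) can stand in for full PGD.
    delta = universal_perturbation(model, images, targets, per_object_losses)
    losses, classes = per_object_losses(model, images + delta, targets)
    loss = class_weighted_loss(losses, classes)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The `per_object_losses` helper is where a concrete detector head (e.g., SSD or Faster R-CNN) would map its outputs to per-ground-truth-object losses; everything else in the sketch is detector-agnostic.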
Related papers
- MOREL: Enhancing Adversarial Robustness through Multi-Objective Representation Learning [1.534667887016089]
Deep neural networks (DNNs) are vulnerable to slight adversarial perturbations.
We show that strong feature representation learning during training can significantly enhance the original model's robustness.
We propose MOREL, a multi-objective feature representation learning approach that encourages classification models to produce similar features for inputs within the same class, despite perturbations.
arXiv Detail & Related papers (2024-10-02T16:05:03Z)
- DOEPatch: Dynamically Optimized Ensemble Model for Adversarial Patches Generation [12.995762461474856]
We introduce the concept of energy and treat the adversarial patch generation process as an optimization of the adversarial patches to minimize the total energy of the "person" category.
By adopting adversarial training, we construct a dynamically optimized ensemble model.
We carried out six sets of comparative experiments and tested our algorithm on five mainstream object detection models.
arXiv Detail & Related papers (2023-12-28T08:58:13Z)
- Outlier Robust Adversarial Training [57.06824365801612]
We introduce Outlier Robust Adversarial Training (ORAT) in this work.
ORAT is based on a bi-level optimization formulation of adversarial training with a robust rank-based loss function.
We show that the learning objective of ORAT satisfies $\mathcal{H}$-consistency in binary classification, which establishes it as a proper surrogate for the adversarial 0/1 loss.
arXiv Detail & Related papers (2023-09-10T21:36:38Z)
- FROD: Robust Object Detection for Free [1.8139771201780368]
State-of-the-art object detectors are susceptible to small adversarial perturbations.
We propose modifications to the classification-based backbone to instill robustness in object detection.
arXiv Detail & Related papers (2023-08-03T17:31:22Z)
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z)
- Adversarial Training Should Be Cast as a Non-Zero-Sum Game [121.95628660889628]
The two-player zero-sum paradigm of adversarial training has not engendered sufficient levels of robustness.
We show that the surrogate-based relaxation commonly used in adversarial training algorithms voids all guarantees on robustness.
A novel non-zero-sum bilevel formulation of adversarial training yields a framework that matches and in some cases outperforms state-of-the-art attacks.
arXiv Detail & Related papers (2023-06-19T16:00:48Z)
- A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z)
- Resisting Adversarial Attacks in Deep Neural Networks using Diverse Decision Boundaries [12.312877365123267]
Deep learning systems are vulnerable to crafted adversarial examples, which may be imperceptible to the human eye, but can lead the model to misclassify.
We develop a new ensemble-based solution that constructs defender models with diverse decision boundaries with respect to the original model.
We present extensive experiments using standard image classification datasets, namely MNIST, CIFAR-10, and CIFAR-100, against state-of-the-art adversarial attacks (a generic ensemble-voting sketch appears after this list).
arXiv Detail & Related papers (2022-08-18T08:19:26Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
The Prototype-centered Attentive Learning (PAL) model is composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- Resolving Class Imbalance in Object Detection with Weighted Cross Entropy Losses [0.0]
Object detection is an important task in computer vision which serves many real-world applications such as autonomous driving, surveillance, and robotics.
There are still limitations in the performance of detectors when it comes to specialized datasets with uneven object class distributions.
We propose to explore and overcome this problem by applying several weighted variants of the cross-entropy loss (a minimal sketch follows this list).
arXiv Detail & Related papers (2020-06-02T06:36:12Z)
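For the diverse-decision-boundary entry above, the following is a generic averaged-softmax ensemble sketch. It assumes each defender model maps a batch of images to class logits; it illustrates ensemble voting in general, not that paper's specific construction of diverse decision boundaries.

```python
import torch

def ensemble_predict(defender_models, x):
    # Average the softmax outputs of defender models trained to have decision
    # boundaries that differ from the original model's, so a perturbation
    # crafted against one model is less likely to fool the whole ensemble.
    probs = torch.stack([m(x).softmax(dim=-1) for m in defender_models])
    return probs.mean(dim=0).argmax(dim=-1)
```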
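The weighted cross-entropy idea in the last entry is close in spirit to the class weighting used in the main paper. Below is a minimal sketch of one common weighting variant (inverse class frequency); the weighting scheme, names, and counts are illustrative assumptions, not details from that paper.

```python
import torch
import torch.nn.functional as F

def inverse_frequency_weights(class_counts):
    # Weight each class inversely to its frequency so that rare classes
    # contribute to the loss as much as frequent ones.
    counts = torch.as_tensor(class_counts, dtype=torch.float)
    return counts.sum() / (len(counts) * counts)

# Hypothetical per-class object counts for a 3-class detector head.
weights = inverse_frequency_weights([5000, 300, 120])
logits = torch.randn(8, 3)          # (N, C) classification scores
labels = torch.randint(0, 3, (8,))  # (N,) ground-truth classes
loss = F.cross_entropy(logits, labels, weight=weights)
```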
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.