A Comprehensive Study on the Robustness of Image Classification and
Object Detection in Remote Sensing: Surveying and Benchmarking
- URL: http://arxiv.org/abs/2306.12111v2
- Date: Fri, 15 Sep 2023 14:00:01 GMT
- Title: A Comprehensive Study on the Robustness of Image Classification and
Object Detection in Remote Sensing: Surveying and Benchmarking
- Authors: Shaohui Mei, Jiawei Lian, Xiaofei Wang, Yuru Su, Mingyang Ma, and
Lap-Pui Chau
- Abstract summary: Deep neural networks (DNNs) have found widespread applications in interpreting remote sensing (RS) imagery.
It has been demonstrated in previous works that DNNs are vulnerable to different types of noises, particularly adversarial noises.
This study represents the first comprehensive examination of both natural robustness and adversarial robustness in RS tasks.
- Score: 17.012502610423006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have found widespread applications in
interpreting remote sensing (RS) imagery. However, it has been demonstrated in
previous works that DNNs are vulnerable to different types of noises,
particularly adversarial noises. Surprisingly, there has been a lack of
comprehensive studies on the robustness of RS tasks, prompting us to undertake
a thorough survey and benchmark on the robustness of image classification and
object detection in RS. To the best of our knowledge, this study represents the first
comprehensive examination of both natural robustness and adversarial robustness
in RS tasks. Specifically, we have curated and made publicly available datasets
that contain natural and adversarial noises. These datasets serve as valuable
resources for evaluating the robustness of DNN-based models. To provide a
comprehensive assessment of model robustness, we conducted meticulous
experiments with numerous different classifiers and detectors, encompassing a
wide range of mainstream methods. Through rigorous evaluation, we have
uncovered insightful and intriguing findings, which shed light on the
relationship between adversarial noise crafting and model training, yielding a
deeper understanding of the susceptibility and limitations of various models,
and providing guidance for the development of more resilient and robust models.
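To make the benchmarking protocol concrete, below is a minimal sketch (not the paper's released code) of how clean, adversarial, and natural-noise accuracy can be compared for a classifier. `model`, `loader`, the FGSM budget `eps`, and the noise level `sigma` are illustrative placeholders, not the paper's actual settings.

```python
# Illustrative robustness check: compare clean accuracy against accuracy
# under a one-step FGSM adversarial perturbation and Gaussian "natural" noise.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, eps=8 / 255):
    """One-step FGSM: move each pixel along the sign of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0, 1).detach()

@torch.no_grad()
def accuracy(model, images, labels):
    return (model(images).argmax(dim=1) == labels).float().mean().item()

def benchmark(model, loader, eps=8 / 255, sigma=0.05):
    model.eval()
    clean, adv, nat = [], [], []
    for images, labels in loader:
        clean.append(accuracy(model, images, labels))
        adv_images = fgsm_perturb(model, images, labels, eps)  # adversarial noise
        adv.append(accuracy(model, adv_images, labels))
        noisy = (images + sigma * torch.randn_like(images)).clamp(0, 1)  # natural noise
        nat.append(accuracy(model, noisy, labels))
    mean = lambda xs: sum(xs) / len(xs)
    return {"clean": mean(clean), "adversarial": mean(adv), "natural": mean(nat)}
```

The gap between the clean score and the other two is the kind of robustness signal the benchmark reports; real evaluations would sweep stronger attacks and a wider family of corruptions.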
Related papers
- Robust Neural Information Retrieval: An Adversarial and Out-of-distribution Perspective [111.58315434849047]
The robustness of neural information retrieval (IR) models has garnered significant attention.
We view the robustness of IR as a multifaceted concept, emphasizing its necessity against adversarial attacks, out-of-distribution (OOD) scenarios, and performance variance.
We provide an in-depth discussion of existing methods, datasets, and evaluation metrics, shedding light on challenges and future directions in the era of large language models.
arXiv Detail & Related papers (2024-07-09T16:07:01Z)
- Towards Evaluating the Robustness of Visual State Space Models [63.14954591606638]
Vision State Space Models (VSSMs) have demonstrated remarkable performance in visual perception tasks.
However, their robustness under natural and adversarial perturbations remains a critical concern.
We present a comprehensive evaluation of VSSMs' robustness under various perturbation scenarios.
arXiv Detail & Related papers (2024-06-13T17:59:44Z)
- A Survey of Neural Network Robustness Assessment in Image Recognition [4.581878177334397]
In recent years, the robustness assessment of neural networks has received significant attention.
The robustness problem in deep learning is particularly acute, as highlighted by the discovery of adversarial attacks on image classification models.
In this survey, we present a detailed examination of both adversarial robustness (AR) and corruption robustness (CR) in neural network assessment.
arXiv Detail & Related papers (2024-04-12T07:19:16Z)
- A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
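As a rough illustration of the gradient-disentangling idea (a simplified stand-in, not RobustDet's actual adversarially-aware convolution), one can route features through two convolution branches gated by a learned clean-vs-adversarial score, so clean and adversarial inputs mostly update disjoint kernels:

```python
# Simplified illustration only: gate features between a "clean" and an
# "adversarial" convolution branch so their gradients stay largely separate.
import torch
import torch.nn as nn

class GatedDualConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.clean_conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.adv_conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        # Tiny head producing a per-image "adversarialness" gate in [0, 1].
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        g = self.gate(x).view(-1, 1, 1, 1)  # 0 ~ clean, 1 ~ adversarial
        return (1 - g) * self.clean_conv(x) + g * self.adv_conv(x)
```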
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Noisy Learning for Neural ODEs Acts as a Robustness Locus Widening [0.802904964931021]
We investigate the problems and challenges of evaluating the robustness of Differential Equation-based (DE) networks against synthetic distribution shifts.
We propose a novel and simple accuracy metric which can be used to evaluate intrinsic robustness and to validate dataset corruption simulators.
arXiv Detail & Related papers (2022-06-16T15:10:38Z)
- Searching for Robust Neural Architectures via Comprehensive and Reliable Evaluation [6.612134996737988]
We propose a novel framework, called Auto Adversarial Attack and Defense (AAAD), where we employ neural architecture search methods.
We consider four types of robustness evaluations, including adversarial noise, natural noise, system noise and quantified metrics.
The empirical results on the CIFAR10 dataset show that the searched efficient attack could help find more robust architectures.
arXiv Detail & Related papers (2022-03-07T04:45:05Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
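Informally, and with notation assumed here for exposition rather than taken from the paper, this joint-perturbation view amounts to a worst-case loss over simultaneous bounded changes to the input and the weights:

```latex
% Illustrative worst-case objective under joint input/weight perturbations;
% the notation (f, w, x, delta, epsilon) is assumed for exposition.
\max_{\|\delta_x\| \le \epsilon_x,\ \|\delta_w\| \le \epsilon_w}
  \mathcal{L}\!\left(f_{w+\delta_w}(x+\delta_x),\, y\right)
```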
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach that learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks [15.217367754000913]
It is increasingly important to obtain models with high robustness that are resistant to adversarial examples.
We give preliminary definitions of adversarial attacks and robustness.
We review frequently used benchmarks and theoretically proven bounds for adversarial robustness.
arXiv Detail & Related papers (2020-11-03T07:42:53Z)