Towards Adversarial Robustness of Deep Vision Algorithms
- URL: http://arxiv.org/abs/2211.10670v2
- Date: Tue, 22 Nov 2022 02:22:46 GMT
- Title: Towards Adversarial Robustness of Deep Vision Algorithms
- Authors: Hanshu Yan
- Abstract summary: Deep learning methods have achieved great success in solving computer vision tasks. This talk focuses on the adversarial robustness of image classification models and image denoisers.
- Score: 6.09170287691728
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning methods have achieved great success in solving computer vision
tasks, and they have been widely utilized in artificially intelligent systems
for image processing, analysis, and understanding. However, deep neural
networks have been shown to be vulnerable to adversarial perturbations in input
data. The security issues of deep neural networks have thus come to the fore.
It is imperative to study the adversarial robustness of deep vision algorithms
comprehensively. This talk focuses on the adversarial robustness of image
classification models and image denoisers. We will discuss the robustness of
deep vision algorithms from three perspectives: 1) robustness evaluation (we
propose ObsAtk to evaluate the robustness of denoisers), 2) robustness
improvement (HAT, TisODE, and CIFS are developed to robustify vision models),
and 3) the connection between adversarial robustness and generalization
capability to new domains (we find that adversarially robust denoisers can deal
with unseen types of real-world noise).
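To make the evaluation perspective concrete: attacking a denoiser amounts to searching, within a small norm ball around the noisy input, for the perturbation that most degrades the reconstruction. The PyTorch sketch below implements a generic PGD-style attack under that framing; it is an illustrative stand-in, not the ObsAtk algorithm, and the `denoiser` module, budget `eps`, and step size `alpha` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack_denoiser(denoiser, noisy, clean, eps=8 / 255, alpha=2 / 255, steps=10):
    """Generic PGD-style attack on an image denoiser (an illustrative
    stand-in, not the ObsAtk algorithm): search an L-inf ball of radius
    eps around the noisy input for the perturbation that maximizes the
    reconstruction error against the clean target."""
    delta = torch.zeros_like(noisy).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.mse_loss(denoiser(noisy + delta), clean)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent on the loss
            delta.clamp_(-eps, eps)             # project back into the eps-ball
            delta.grad.zero_()
    return (noisy + delta).detach()             # pixel-range clipping omitted for brevity
```

A robust denoiser is then one whose reconstruction error stays low even on the returned worst-case input.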
Related papers
- A Survey of Neural Network Robustness Assessment in Image Recognition [4.581878177334397]
In recent years, significant attention has been given to the robustness assessment of neural networks.
Deep learning's robustness problem is particularly significant, as highlighted by the discovery of adversarial attacks on image classification models (a minimal attack sketch follows this entry).
In this survey, we present a detailed examination of both adversarial robustness (AR) and corruption robustness (CR) in neural network assessment.
arXiv Detail & Related papers (2024-04-12T07:19:16Z)
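To illustrate what an adversarial attack on an image classification model involves, here is a minimal FGSM (fast gradient sign method) sketch in PyTorch. The `model`, cross-entropy loss, and `eps` budget are generic assumptions rather than details taken from the survey.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=8 / 255):
    """Fast Gradient Sign Method: one gradient step that moves the input
    in the direction that increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + eps * image.grad.sign()
    return adv.clamp(0, 1).detach()  # keep pixels in the valid range
```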
- On Inherent Adversarial Robustness of Active Vision Systems [7.803487547944363]
We show that two active vision methods - GFNet and FALcon - achieve 2-3 times greater robustness than a standard passive convolutional network under state-of-the-art adversarial attacks.
More importantly, we provide illustrative and interpretable visualization analysis demonstrating how performing inference from distinct fixation points makes active vision methods less vulnerable to malicious inputs (a generic crop-and-ensemble sketch follows this entry).
arXiv Detail & Related papers (2024-03-29T22:51:45Z)
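The mechanism credited here, aggregating inference over distinct fixation points, can be approximated generically. The sketch below is an assumed crop-and-ensemble scheme, not the actual GFNet or FALcon inference procedure; the crop size and fixation coordinates are placeholders.

```python
import torch

def multi_fixation_predict(model, image, fixations, size=96):
    """Average class logits over several fixation crops (an assumed
    crop-and-ensemble scheme, not the actual GFNet/FALcon procedure).
    `fixations` is a list of (row, col) glimpse centers."""
    _, _, H, W = image.shape
    logits = []
    for cy, cx in fixations:
        top = max(0, min(cy - size // 2, H - size))
        left = max(0, min(cx - size // 2, W - size))
        crop = image[:, :, top:top + size, left:left + size]
        logits.append(model(crop))
    return torch.stack(logits).mean(dim=0)  # an attacker must fool every glimpse at once
```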
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Adversarial Attacks Assessment of Salient Object Detection via Symbolic Learning [4.613806493425003]
Brain programming is a kind of symbolic learning in the vein of good old-fashioned artificial intelligence.
This work provides evidence that symbolic learning robustness is crucial in designing reliable visual attention systems.
We compare our methodology with five different deep learning approaches, proving that they do not match the symbolic paradigm regarding robustness.
arXiv Detail & Related papers (2023-09-12T01:03:43Z)
- A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark, ARES-Bench, on the image classification task (a generic sketch of the evaluation protocol follows this entry).
By designing the training settings accordingly, we achieve new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z)
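The benchmarking protocol such studies follow can be summarized as measuring accuracy on attacked inputs. The loop below is a generic sketch of that protocol, not the ARES-Bench implementation; `attack` is any callable such as the FGSM or PGD sketches above.

```python
import torch

def robust_accuracy(model, loader, attack, device="cuda"):
    """Fraction of test inputs still classified correctly after the attack
    (a generic sketch of the benchmarking protocol, not ARES-Bench itself)."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = attack(model, images, labels)  # e.g. the FGSM/PGD sketches above
        with torch.no_grad():
            preds = model(adv).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```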
- Beyond Pretrained Features: Noisy Image Modeling Provides Adversarial Defense [52.66971714830943]
Masked image modeling (MIM) has become a prevailing framework for self-supervised visual representation learning.
In this paper, we investigate how this powerful self-supervised learning paradigm can provide adversarial robustness to downstream classifiers.
We propose an adversarial defense method, referred to as De3, that exploits the pretrained decoder for denoising (a generic denoise-then-classify wrapper is sketched after this entry).
arXiv Detail & Related papers (2023-02-02T12:37:24Z)
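A purification defense of this flavor wraps a classifier behind a denoiser so that the classifier only ever sees cleaned inputs. The wrapper below is a minimal sketch in that spirit; it is not the actual De3 procedure, and treating a pretrained MIM decoder as a drop-in `denoiser` module is an assumption.

```python
import torch

class DenoiseThenClassify(torch.nn.Module):
    """Purification-style defense in the spirit of De3 (a sketch, not the
    paper's exact method): run possibly-attacked inputs through a
    pretrained denoiser before the classifier sees them."""

    def __init__(self, denoiser, classifier):
        super().__init__()
        self.denoiser = denoiser      # assumed: e.g. a pretrained MIM decoder
        self.classifier = classifier

    def forward(self, x):
        return self.classifier(self.denoiser(x))
```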
- On the Adversarial Robustness of Camera-based 3D Object Detection [21.091078268929667]
We investigate the robustness of leading camera-based 3D object detection approaches under various adversarial conditions.
We find that bird's-eye-view-based representations exhibit stronger robustness against localization attacks.
Depth-estimation-free approaches have the potential to show stronger robustness, and incorporating multi-frame benign inputs can effectively mitigate adversarial attacks.
arXiv Detail & Related papers (2023-01-25T18:59:15Z)
- Robustness in Deep Learning for Computer Vision: Mind the gap? [13.576376492050185]
We identify, analyze, and summarize current definitions and progress towards non-adversarial robustness in deep learning for computer vision.
We find that this area of research has received disproportionately little attention relative to adversarial machine learning.
arXiv Detail & Related papers (2021-12-01T16:42:38Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of backbone CNNs that already have satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN (a hypothetical side-branch sketch follows this entry).
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
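The dilation idea can be pictured as a small trainable branch grafted onto a frozen backbone block, so standard accuracy is left intact while extra capacity is trained for robustness. The module below is a hypothetical sketch under that reading, not the paper's architecture; it assumes the wrapped block preserves channel count and spatial size.

```python
import torch

class DilatedSideBranch(torch.nn.Module):
    """Hypothetical sketch of architecture dilation: a lightweight
    dilated-convolution branch runs alongside a frozen backbone block and
    is added residually, so the block's standard behavior is preserved
    while only the branch is trained for robustness. Assumes the block
    maps `channels` to `channels` at the same spatial size."""

    def __init__(self, block, channels, dilation=2):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad = False  # freeze the backbone block
        self.branch = torch.nn.Conv2d(
            channels, channels, kernel_size=3,
            padding=dilation, dilation=dilation)

    def forward(self, x):
        return self.block(x) + self.branch(x)
```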
- Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness [63.627760598441796]
We provide an in-depth review of the field of adversarial robustness in deep learning.
We highlight the intuitive connection between adversarial examples and the geometry of deep neural networks.
We provide an overview of the main emerging applications of adversarial robustness beyond security.
arXiv Detail & Related papers (2020-10-19T16:03:46Z)
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes [51.31334977346847]
We train networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction.
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly (a minimal bit-plane quantization sketch follows this list).
arXiv Detail & Related papers (2020-04-01T09:31:10Z)
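The bit-plane idea can be made concrete with a quantization helper: keeping only the most significant bit planes yields the coarse image, and a consistency term ties the outputs on the full and coarse versions together. The sketch below is an illustrative objective, not the paper's training procedure; the MSE-on-outputs choice and `bits=4` are assumptions.

```python
import torch
import torch.nn.functional as F

def keep_high_bit_planes(x, bits=4):
    """Quantize an image in [0, 1] so only its `bits` most significant
    bit planes survive (each bucket of 256 // 2**bits gray levels is
    merged into one)."""
    step = 256 // 2 ** bits
    return torch.floor(x * 255 / step) * step / 255

def bitplane_consistency_loss(model, x, bits=4):
    """Illustrative consistency objective (an assumption, not the paper's
    exact loss): pull the outputs on the full image and on its coarse,
    high-bit-plane version together."""
    return F.mse_loss(model(x), model(keep_high_bit_planes(x, bits)))
```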