Optimism in the Face of Adversity: Understanding and Improving Deep
Learning through Adversarial Robustness
- URL: http://arxiv.org/abs/2010.09624v2
- Date: Thu, 28 Jan 2021 17:47:48 GMT
- Title: Optimism in the Face of Adversity: Understanding and Improving Deep
Learning through Adversarial Robustness
- Authors: Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen
Moosavi-Dezfooli, Pascal Frossard
- Abstract summary: We provide an in-depth review of the field of adversarial robustness in deep learning.
We highlight the intuitive connection between adversarial examples and the geometry of deep neural networks.
We provide an overview of the main emerging applications of adversarial robustness beyond security.
- Score: 63.627760598441796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Driven by massive amounts of data and important advances in computational
resources, new deep learning systems have achieved outstanding results in a
large spectrum of applications. Nevertheless, our current theoretical
understanding of the mathematical foundations of deep learning lags far behind
its empirical success. Originally motivated by the vulnerability of neural
networks to small input perturbations, the field of adversarial robustness has
recently become one of the main sources of explanations of our deep models. In
this article, we provide an
in-depth review of the field of adversarial robustness in deep learning, and
give a self-contained introduction to its main notions. However, in contrast
to the mainstream pessimistic perspective on adversarial robustness, we focus
on the
main positive aspects that it entails. We highlight the intuitive connection
between adversarial examples and the geometry of deep neural networks, and
eventually explore how the geometric study of adversarial examples can serve as
a powerful tool to understand deep learning. Furthermore, we demonstrate the
broad applicability of adversarial robustness, providing an overview of the
main emerging applications of adversarial robustness beyond security. The goal
of this article is to provide readers with a set of new perspectives to
understand deep learning, and to supply them with intuitive tools and insights
on how to use adversarial robustness to improve it.
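The adversarial examples the abstract refers to can be made concrete with a small sketch. The following is an illustrative example (not code from the paper) of the fast gradient sign method (FGSM), one of the simplest ways to construct an adversarial perturbation, applied to a toy linear softmax classifier with made-up weights and data:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fgsm(W, b, x, y, eps=0.1):
    """FGSM for a linear softmax classifier: x_adv = x + eps * sign(dL/dx),
    where L is the cross-entropy loss of predicting label y."""
    p = softmax(W @ x + b)   # predicted class probabilities
    p[y] -= 1.0              # dL/dlogits = p - onehot(y)
    grad_x = W.T @ p         # chain rule through the linear layer
    # Step in the input direction that maximally increases the loss.
    return x + eps * np.sign(grad_x)

# Toy demo: random weights and a random input (purely illustrative).
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), np.zeros(3)
x, y = rng.normal(size=4), 1
x_adv = fgsm(W, b, x, y, eps=0.1)
```

The perturbation is bounded coordinate-wise by `eps`, which is what makes such examples "imperceptible" for small budgets while still moving the loss in its steepest ascent direction.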
Related papers
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples.
arXiv Detail & Related papers (2023-10-26T17:45:26Z) - A Comprehensive Study on Robustness of Image Classification Models:
Benchmarking and Rethinking [54.89987482509155]
Robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z) - Rethinking Robust Contrastive Learning from the Adversarial Perspective [2.3333090554192615]
We find significant disparities between adversarial and clean representations in standard-trained networks.
Adversarial training mitigates these disparities and fosters the convergence of representations toward a universal set.
arXiv Detail & Related papers (2023-02-05T22:43:50Z) - Deep Causal Learning: Representation, Discovery and Inference [2.696435860368848]
Causal learning reveals the essential relationships that underpin phenomena and delineates the mechanisms by which the world evolves.
Traditional causal learning methods face numerous challenges and limitations, including high-dimensional variables, unstructured variables, optimization problems, unobserved confounders, selection biases, and estimation inaccuracies.
Deep causal learning, which leverages deep neural networks, offers innovative insights and solutions for addressing these challenges.
arXiv Detail & Related papers (2022-11-07T09:00:33Z) - Critical Learning Periods for Multisensory Integration in Deep Networks [112.40005682521638]
We show that the ability of a neural network to integrate information from diverse sources hinges critically on being exposed to properly correlated signals during the early phases of training.
We show that critical periods arise from the complex and unstable early transient dynamics, which largely determine the final performance of the trained system and its learned representations.
arXiv Detail & Related papers (2022-10-06T23:50:38Z) - Causal Reasoning Meets Visual Representation Learning: A Prospective
Study [117.08431221482638]
Lack of interpretability, robustness, and out-of-distribution generalization are becoming the challenges of the existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and bring to the forefront the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z) - A Survey of Robust Adversarial Training in Pattern Recognition:
Fundamental, Theory, and Methodologies [26.544748192629367]
Recent studies show that neural networks may be easily fooled by certain imperceptibly perturbed input samples called adversarial examples.
Such security vulnerability has resulted in a large body of research in recent years because real-world threats could be introduced due to vast applications of neural networks.
To address the robustness issue to adversarial examples particularly in pattern recognition, robust adversarial training has become a mainstream approach.
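Adversarial training, the defense surveyed in this entry, interleaves an inner attack with the usual parameter update: the model is trained on (approximately) worst-case perturbed inputs rather than clean ones. A minimal illustrative sketch for logistic regression with an FGSM-style inner step (toy single-point data, not code from the survey):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train_step(w, x, y, eps=0.1, lr=0.5):
    """One step of adversarial training for logistic regression:
    perturb x to (approximately) maximize the loss, then descend on it."""
    # Inner maximization: FGSM-style step on the input.
    grad_x = (sigmoid(w @ x) - y) * w
    x_adv = x + eps * np.sign(grad_x)
    # Outer minimization: gradient step on the adversarial input.
    grad_w = (sigmoid(w @ x_adv) - y) * x_adv
    return w - lr * grad_w

# Toy demo: fit a single labeled point robustly (purely illustrative).
rng = np.random.default_rng(1)
w = rng.normal(size=4)
x, y = rng.normal(size=4), 1.0
for _ in range(100):
    w = adversarial_train_step(w, x, y)
```

In practice the inner step is usually iterated (e.g. projected gradient descent) and run per mini-batch, but the min-max structure is the same as in this sketch.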
arXiv Detail & Related papers (2022-03-26T11:00:25Z) - Adversarial Robustness of Deep Learning: Theory, Algorithms, and
Applications [27.033174829788404]
This tutorial aims to introduce the fundamentals of adversarial robustness of deep learning.
We will highlight state-of-the-art techniques in adversarial attacks and robustness verification of deep neural networks (DNNs).
We will also introduce some effective countermeasures to improve the robustness of deep learning models.
arXiv Detail & Related papers (2021-08-24T00:08:33Z) - A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z) - Recent Advances in Understanding Adversarial Robustness of Deep Neural
Networks [15.217367754000913]
It is increasingly important to obtain models with high robustness that are resistant to adversarial examples.
We give preliminary definitions on what adversarial attacks and robustness are.
We study frequently-used benchmarks and mention theoretically-proved bounds for adversarial robustness.
arXiv Detail & Related papers (2020-11-03T07:42:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.