Threat of Adversarial Attacks on Deep Learning in Computer Vision:
Survey II
- URL: http://arxiv.org/abs/2108.00401v1
- Date: Sun, 1 Aug 2021 08:54:47 GMT
- Title: Threat of Adversarial Attacks on Deep Learning in Computer Vision:
Survey II
- Authors: Naveed Akhtar, Ajmal Mian, Navid Kardan, Mubarak Shah
- Abstract summary: Deep Learning is vulnerable to adversarial attacks that can manipulate its predictions.
This article reviews the contributions made by the computer vision community in adversarial attacks on deep learning.
It also provides concise definitions of technical terminology for non-experts in the domain.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning (DL) is the most widely used tool in the contemporary field of
computer vision. Its ability to accurately solve complex problems is employed
in vision research to learn deep neural models for a variety of tasks,
including security-critical applications. However, it is now known that DL is
vulnerable to adversarial attacks that can manipulate its predictions by
introducing visually imperceptible perturbations in images and videos. Since
the discovery of this phenomenon in 2013 [1], it has attracted significant
attention from researchers across multiple sub-fields of machine intelligence. In
[2], we reviewed the contributions made by the computer vision community in
adversarial attacks on deep learning (and their defenses) up to the year
2018. Many of those contributions have inspired new directions in this
area, which has matured significantly since the first-generation
methods. Hence, as a sequel to [2], this literature review focuses on
the advances in this area since 2018. To ensure authenticity, we mainly
consider peer-reviewed contributions published in the prestigious sources of
computer vision and machine learning research. Besides a comprehensive
literature review, the article also provides concise definitions of technical
terminology for non-experts in this domain. Finally, this article discusses
challenges and future outlook of this direction based on the literature
reviewed herein and [2].
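The vulnerability the abstract describes is easiest to see with the fast gradient sign method (FGSM) of Goodfellow et al., one of the first-generation attacks this survey builds on. Below is a minimal PyTorch sketch, not code from the paper: the `model`, the [0, 1] pixel range, and the `epsilon` value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """One-step FGSM: nudge each pixel by +/- epsilon in the direction
    that increases the classification loss. The classifier `model` and
    the [0, 1] pixel range are assumptions, not details from the paper."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is the sign of the input gradient, scaled by epsilon.
    adversarial = image + epsilon * image.grad.sign()
    # Clip back to the valid pixel range so the result is still an image.
    return adversarial.clamp(0.0, 1.0).detach()
```

With epsilon around 8/255 for 8-bit images, the perturbation is typically imperceptible to a human yet often flips the classifier's prediction, which is the phenomenon this survey and its predecessor [2] catalog.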
Related papers
- Adversarial Attacks and Defenses on 3D Point Cloud Classification: A
Survey
Despite remarkable achievements, deep learning algorithms are vulnerable to adversarial attacks.
This paper first introduces the principles and characteristics of adversarial attacks and summarizes and analyzes adversarial example generation methods.
It also provides an overview of defense strategies, organized into data-focused and model-focused methods.
arXiv Detail & Related papers (2023-07-01T11:46:36Z)
- How Deep Learning Sees the World: A Survey on Adversarial Attacks &
Defenses
This paper compiles the most recent adversarial attacks, grouped by the attacker capacity, and modern defenses clustered by protection strategies.
We also present the new advances regarding Vision Transformers, summarize the datasets and metrics used in the context of adversarial settings, and compare the state-of-the-art results under different attacks, finishing with the identification of open issues.
arXiv Detail & Related papers (2023-05-18T10:33:28Z)
- Hyperbolic Deep Learning in Computer Vision: A Survey
Hyperbolic space has gained rapid traction for learning in computer vision.
We provide a categorization and in-depth overview of current literature on hyperbolic learning for computer vision.
We outline how hyperbolic learning is performed in all themes and discuss the main research problems that benefit from current advances in hyperbolic learning for computer vision.
arXiv Detail & Related papers (2023-05-11T07:14:23Z)
- VQA and Visual Reasoning: An Overview of Recent Datasets, Methods and
Challenges
The integration of vision and language has attracted considerable attention.
VQA tasks are designed so that they properly exercise deep learning concepts.
arXiv Detail & Related papers (2022-12-26T20:56:01Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey
This paper presents a comprehensive overview of physical adversarial attacks.
We take a first step toward systematically evaluating the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Deep Learning to See: Towards New Foundations of Computer Vision
This book criticizes the supposed scientific progress in the field of computer vision.
It proposes the investigation of vision within the framework of information-based laws of nature.
arXiv Detail & Related papers (2022-06-30T15:20:36Z)
- Deep Learning for Visual Speech Analysis: A Survey
This paper presents a review of recent progress in deep learning methods on visual speech analysis.
We cover different aspects of visual speech, including fundamental problems, challenges, benchmark datasets, a taxonomy of existing methods, and state-of-the-art performance.
arXiv Detail & Related papers (2022-05-22T14:44:53Z)
- Deep Learning for Face Anti-Spoofing: A Survey
Face anti-spoofing (FAS) has lately attracted increasing attention due to its vital role in securing face recognition systems against presentation attacks (PAs).
arXiv Detail & Related papers (2021-06-28T19:12:00Z)
- Optimism in the Face of Adversity: Understanding and Improving Deep
Learning through Adversarial Robustness
We provide an in-depth review of the field of adversarial robustness in deep learning.
We highlight the intuitive connection between adversarial examples and the geometry of deep neural networks.
We provide an overview of the main emerging applications of adversarial robustness beyond security.
arXiv Detail & Related papers (2020-10-19T16:03:46Z)