Bio-inspired Robustness: A Review
- URL: http://arxiv.org/abs/2103.09265v1
- Date: Tue, 16 Mar 2021 18:20:29 GMT
- Title: Bio-inspired Robustness: A Review
- Authors: Harshitha Machiraju, Oh-Hyeon Choung, Pascal Frossard, Michael H. Herzog
- Abstract summary: Deep convolutional neural networks (DCNNs) have revolutionized computer vision and are often advocated as good models of the human visual system.
However, there are currently many shortcomings of DCNNs, which preclude them as a model of human vision.
- Score: 46.817006169430265
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep convolutional neural networks (DCNNs) have revolutionized computer
vision and are often advocated as good models of the human visual system.
However, there are currently many shortcomings of DCNNs, which preclude them as
a model of human vision. For example, in an adversarial attack, adding a small
amount of noise to an image containing an object can lead to strong
misclassification of that object, even though the noise is often invisible to
humans (a minimal sketch of such an attack follows the abstract). If
vulnerability to adversarial noise cannot be fixed, DCNNs cannot
be taken as serious models of human vision. Many studies have tried to add
features of the human visual system to DCNNs to make them robust against
adversarial attacks. However, it is not fully clear whether human-vision-inspired
components increase robustness, because performance evaluations of
these novel components in DCNNs are often inconclusive. We propose a set of
criteria for proper evaluation and analyze different models according to these
criteria. We finally sketch future efforts to make DCNNs one step closer to a
model of human vision.
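To make the adversarial-attack example above concrete, the following is a minimal sketch of one widely used attack, the fast gradient sign method (FGSM). The pretrained ResNet-18, the epsilon of 2/255, and the helper name fgsm_attack are illustrative assumptions rather than details from the paper.

```python
# Minimal FGSM sketch (PyTorch). Model choice, epsilon, and input conventions
# are illustrative assumptions; the paper does not prescribe this setup.
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()
loss_fn = torch.nn.CrossEntropyLoss()

def fgsm_attack(image, label, epsilon=2 / 255):
    """Perturb `image` (shape [1, 3, H, W], values in [0, 1]) to raise the loss on `label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    # One signed-gradient step increases the loss; each pixel moves by at most
    # epsilon, so the change is typically invisible to a human observer.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

Even with a per-pixel budget as small as 2/255, such a perturbation is often enough to flip the predicted class of a standard ImageNet classifier while leaving the image visually unchanged.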
Related papers
- Leveraging the Human Ventral Visual Stream to Improve Neural Network Robustness [8.419105840498917]
Human object recognition exhibits remarkable resilience in cluttered and dynamic visual environments.
Despite their unparalleled performance across numerous visual tasks, Deep Neural Networks (DNNs) remain far less robust than humans.
Here we show that DNNs, when guided by neural representations from a hierarchical sequence of regions in the human ventral visual stream, display increasing robustness to adversarial attacks.
arXiv Detail & Related papers (2024-05-04T04:33:20Z) - Adversarial alignment: Breaking the trade-off between the strength of an
attack and its relevance to human perception [10.883174135300418]
Adversarial attacks have long been considered the "Achilles' heel" of deep learning.
Here, we investigate how the robustness of DNNs to adversarial attacks has evolved as their accuracy on ImageNet has continued to improve.
arXiv Detail & Related papers (2023-06-05T20:26:17Z) - Are Deep Neural Networks Adequate Behavioural Models of Human Visual
Perception? [8.370048099732573]
Deep neural networks (DNNs) are machine learning algorithms that have revolutionised computer vision.
We argue that it is important to distinguish between statistical tools and computational models.
We dispel a number of myths surrounding DNNs in vision science.
arXiv Detail & Related papers (2023-05-26T15:31:06Z) - Explainability and Robustness of Deep Visual Classification Models [14.975436239088312]
In the computer vision community, Convolutional Neural Networks (CNNs) have become the standard visual classification model.
As alternatives to CNNs, Capsule Networks (CapsNets) and Vision Transformers (ViTs) have been proposed.
CapsNets are considered to have more inductive bias than CNNs, whereas ViTs are considered to have less inductive bias than CNNs.
arXiv Detail & Related papers (2023-01-03T20:23:43Z) - Empirical Advocacy of Bio-inspired Models for Robust Image Recognition [39.37304194475199]
We provide a detailed analysis of such bio-inspired models and their properties.
We find that bio-inspired models tend to be adversarially robust without requiring any special data augmentation.
We also find that bio-inspired models tend to use both low and mid-frequency information, in contrast to other DCNN models.
arXiv Detail & Related papers (2022-05-18T16:19:26Z) - On the Robustness of Quality Measures for GANs [136.18799984346248]
This work evaluates the robustness of quality measures of generative models such as the Inception Score (IS) and the Fréchet Inception Distance (FID).
We show that such metrics can also be manipulated by additive pixel perturbations.
arXiv Detail & Related papers (2022-01-31T06:43:09Z) - Towards Adversarial Patch Analysis and Certified Defense against Crowd
Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z) - Fooling the primate brain with minimal, targeted image manipulation [67.78919304747498]
We propose an array of methods for creating minimal, targeted image perturbations that lead to changes in both neuronal activity and perception as reflected in behavior.
Our work shares the same goal as adversarial attacks, namely the manipulation of images with minimal, targeted noise that leads ANN models to misclassify them.
arXiv Detail & Related papers (2020-11-11T08:30:54Z) - Perceptual Adversarial Robustness: Defense Against Unseen Threat Models [58.47179090632039]
A key challenge in adversarial robustness is the lack of a precise mathematical characterization of human perception.
Under the neural perceptual threat model (NPTM), we develop novel perceptual adversarial attacks and defenses.
Because the NPTM is very broad, we find that Perceptual Adversarial Training (PAT) against a perceptual attack gives robustness against many other types of adversarial attacks (see the sketch after this list).
arXiv Detail & Related papers (2020-06-22T22:40:46Z)
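As a rough illustration of the threat model mentioned in the last entry, a neural perceptual threat model can be summarized as bounding the adversary by a learned perceptual distance rather than a pixel-wise norm. The notation below (threat set A(x), distance d, feature extractor phi, budget epsilon) is a hedged paraphrase for illustration, not the authors' exact formulation.

```latex
% Hedged sketch of a neural perceptual threat model: the adversary may submit
% any image x' whose learned perceptual distance from the clean image x is at
% most epsilon. The symbols d, \phi, and \epsilon are illustrative, not the
% paper's notation.
\[
  \mathcal{A}(x) = \{\, x' : d(x, x') \le \epsilon \,\},
  \qquad
  d(x, x') = \lVert \phi(x) - \phi(x') \rVert_2 ,
\]
% where \phi collects internal activations of a fixed neural network (an
% LPIPS-style embedding) and \epsilon bounds the allowed perceptual change.
```

Because such a set plausibly covers many pixel-wise (e.g., L-infinity or L2) perturbations as special cases, training against perceptual attacks can transfer robustness to attacks never seen during training, which is consistent with the PAT finding summarized above.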
This list is automatically generated from the titles and abstracts of the papers on this site.