Adversarially trained neural representations may already be as robust as
corresponding biological neural representations
- URL: http://arxiv.org/abs/2206.11228v1
- Date: Sun, 19 Jun 2022 04:15:29 GMT
- Title: Adversarially trained neural representations may already be as robust as
corresponding biological neural representations
- Authors: Chong Guo, Michael J. Lee, Guillaume Leclerc, Joel Dapello, Yug Rao,
Aleksander Madry, James J. DiCarlo
- Abstract summary: We develop a method for performing adversarial visual attacks directly on primate brain activity.
We report that the biological neurons that make up visual systems of primates exhibit susceptibility to adversarial perturbations that is comparable in magnitude to existing (robustly trained) artificial neural networks.
- Score: 66.73634912993006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual systems of primates are the gold standard of robust perception. There
is thus a general belief that mimicking the neural representations that
underlie those systems will yield artificial visual systems that are
adversarially robust. In this work, we develop a method for performing
adversarial visual attacks directly on primate brain activity. We then leverage
this method to demonstrate that the above-mentioned belief might not be well
founded. Specifically, we report that the biological neurons that make up
visual systems of primates exhibit susceptibility to adversarial perturbations
that is comparable in magnitude to existing (robustly trained) artificial
neural networks.
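The paper's method perturbs inputs to attack biological neurons; for intuition, the standard adversarial attack on an artificial unit can be sketched with a fast-gradient-sign (FGSM-style) perturbation. The model below is a hypothetical single logistic neuron with made-up weights, not anything from the paper, and serves only to show how a small, bounded input perturbation can flip a prediction.

```python
import math

# Illustrative toy model: one logistic "neuron" with assumed, fixed weights.
# (These weights and inputs are invented for this sketch.)
W = [2.0, -3.0, 1.0]
B = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    """Probability that input x belongs to class 1."""
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return sigmoid(z)

def fgsm_perturb(x, y, eps):
    """Shift each input coordinate by eps in the loss-increasing direction.

    For logistic loss, dL/dx_i = (p - y) * w_i, so only the sign of
    (p - y) * w_i matters for the FGSM step.
    """
    p = predict(x)
    grad = [(p - y) * w for w in W]
    return [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

x = [1.0, 0.2, -0.5]          # clean input, true label y = 1
x_adv = fgsm_perturb(x, y=1, eps=0.3)
print(f"clean p={predict(x):.3f}  adversarial p={predict(x_adv):.3f}")
```

With these assumed weights the clean input is classified confidently as class 1, while the eps-bounded perturbation pushes the probability below 0.5. The paper's contribution is measuring how large this eps must be to fool biological neurons, and finding it comparable to that of robustly trained networks.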
Related papers
- Hebbian Learning based Orthogonal Projection for Continual Learning of
Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method enables continual learning in spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - ReWaRD: Retinal Waves for Pre-Training Artificial Neural Networks
Mimicking Real Prenatal Development [5.222115919729418]
Pre- and post-natal retinal waves are thought to act as a pre-training mechanism for the primate visual system.
We build a computational model that mimics this development mechanism by pre-training different artificial convolutional neural networks.
The resulting features of this biologically plausible pre-training closely match the V1 features of the primate visual system.
arXiv Detail & Related papers (2023-11-28T21:14:05Z) - Adapting Brain-Like Neural Networks for Modeling Cortical Visual
Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z) - Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action
Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - Deep Reinforcement Learning Models Predict Visual Responses in the
Brain: A Preliminary Result [1.0323063834827415]
We use reinforcement learning to train neural network models to play a 3D computer game.
We find that these reinforcement learning models yield better neural response predictions in the early visual areas.
In contrast, the supervised neural network models yield better neural response predictions in the higher visual areas.
arXiv Detail & Related papers (2021-06-18T13:10:06Z) - Evaluating adversarial robustness in simulated cerebellum [44.17544361412302]
This paper investigates adversarial robustness in a simulated cerebellum.
To the best of our knowledge, this is the first attempt to examine the adversarial robustness in simulated cerebellum models.
arXiv Detail & Related papers (2020-12-05T08:26:41Z) - An evolutionary perspective on the design of neuromorphic shape filters [0.0]
Cortical systems may provide advanced image processing, but most likely use design principles that have proven effective in simpler systems.
The present article provides a brief overview of retinal and cortical mechanisms for registering shape information.
arXiv Detail & Related papers (2020-08-30T17:53:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.