Evaluating adversarial robustness in simulated cerebellum
- URL: http://arxiv.org/abs/2012.02976v1
- Date: Sat, 5 Dec 2020 08:26:41 GMT
- Title: Evaluating adversarial robustness in simulated cerebellum
- Authors: Liu Yuezhang, Bo Li, Qifeng Chen
- Abstract summary: This paper will investigate adversarial robustness in a simulated cerebellum.
To the best of our knowledge, this is the first attempt to examine adversarial robustness in simulated cerebellum models.
- Score: 44.17544361412302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is well known that artificial neural networks are vulnerable to
adversarial examples, and great efforts have been made to improve their
robustness. However, such examples are usually imperceptible to humans, so
their effect on biological neural circuits is largely unknown. This paper will
investigate adversarial robustness in a simulated cerebellum, a
well-studied supervised learning system in computational neuroscience.
Specifically, we propose to study three unique characteristics revealed in the
cerebellum: (i) network width; (ii) long-term depression on the parallel
fiber-Purkinje cell synapses; (iii) sparse connectivity in the granule layer,
and hypothesize that they will be beneficial for improving robustness. To the
best of our knowledge, this is the first attempt to examine adversarial
robustness in simulated cerebellum models. We wish to remark that both
positive and negative results would be meaningful: if the answer is in the
affirmative, engineering insights are gained from the biological model for
designing more robust learning systems; otherwise, neuroscientists are
encouraged to try to fool the biological system in experiments with
adversarial attacks. This makes the project especially suitable for a
pre-registration study.
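As a rough illustration of how the three hypothesized characteristics could enter a simulation, below is a minimal Marr-Albus-style sketch in NumPy. All names, dimensions, and the delta-rule reading of LTD are illustrative assumptions, not the authors' implementation: a narrow mossy-fiber input is expanded into a wide granule layer (i) with sparse wiring (iii), and the parallel fiber-Purkinje cell weights are depressed by a climbing-fiber error signal (ii).

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes (assumptions): a narrow mossy-fiber input is
    # expanded into a much wider granule layer -- characteristic (i).
    n_mossy, n_granule, fan_in = 100, 4000, 4

    # Each granule cell samples only `fan_in` mossy fibers, mimicking the
    # sparse connectivity of the granule layer -- characteristic (iii).
    W_mg = np.zeros((n_granule, n_mossy))
    for g in range(n_granule):
        idx = rng.choice(n_mossy, size=fan_in, replace=False)
        W_mg[g, idx] = rng.standard_normal(fan_in)

    def granule_layer(x):
        """Sparse random expansion followed by a threshold nonlinearity."""
        return np.maximum(W_mg @ x - 1.0, 0.0)

    # Parallel fiber -> Purkinje cell weights, adapted by an LTD-like rule:
    # synapses whose parallel-fiber activity coincides with a climbing-fiber
    # error signal are depressed -- characteristic (ii), read as a delta rule.
    w_pf = 0.01 * rng.standard_normal(n_granule)

    def ltd_update(w, pf_activity, error, lr=1e-3):
        return w - lr * error * pf_activity

    # One supervised step on a toy input/target pair.
    x = rng.standard_normal(n_mossy)
    pf = granule_layer(x)
    error = w_pf @ pf - 1.0          # climbing-fiber teaching signal
    w_pf = ltd_update(w_pf, pf, error)

Under this sketch, varying n_granule and fan_in probes hypotheses (i) and (iii), while the learning rate of the LTD-like update probes (ii).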
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
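For context, the classical Kuramoto phase dynamics behind the name can be sketched in a few lines (a generic textbook illustration, not the paper's architecture):

    import numpy as np

    def kuramoto_step(theta, omega, K, dt=0.01):
        # One Euler step of dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        return theta + dt * (omega + (K / len(theta)) * coupling)

    rng = np.random.default_rng(0)
    theta = rng.uniform(0, 2 * np.pi, size=8)   # oscillator phases
    omega = rng.standard_normal(8)              # natural frequencies
    for _ in range(1000):
        theta = kuramoto_step(theta, omega, K=2.0)  # strong coupling synchronizes phases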
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Brain-inspired Computational Modeling of Action Recognition with Recurrent Spiking Neural Networks Equipped with Reinforcement Delay Learning [4.9798155883849935]
Action recognition has received significant attention due to its intricate nature and the brain's exceptional performance in this area.
Current solutions for action recognition either exhibit limitations in effectively addressing the problem or lack the necessary biological plausibility.
This article presents an effective brain-inspired computational model for action recognition.
arXiv Detail & Related papers (2024-06-17T17:34:16Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments on autonomous lane-keeping in a photorealistic driving simulator to evaluate the performance of these networks under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Control of synaptic plasticity via the fusion of reinforcement learning and unsupervised learning in neural networks [0.0]
In cognitive neuroscience, it is widely accepted that synaptic plasticity plays an essential role in our amazing learning capability.
With this inspiration, a new learning rule is proposed via the fusion of reinforcement learning and unsupervised learning.
In the proposed computational model, nonlinear optimal control theory is used to model the error feedback loop.
arXiv Detail & Related papers (2023-03-26T12:18:03Z)
- Adversarially trained neural representations may already be as robust as corresponding biological neural representations [66.73634912993006]
We develop a method for performing adversarial visual attacks directly on primate brain activity.
We report that the biological neurons that make up visual systems of primates exhibit susceptibility to adversarial perturbations that is comparable in magnitude to existing (robustly trained) artificial neural networks.
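Perturbation-magnitude comparisons of this kind are typically made with gradient-based attacks; here is a minimal fast gradient sign method (FGSM) sketch on a toy linear model, a generic illustration rather than the paper's primate-recording pipeline:

    import numpy as np

    def fgsm_perturb(x, grad_wrt_x, epsilon):
        # FGSM: step each input coordinate by epsilon in the
        # direction that increases the loss.
        return x + epsilon * np.sign(grad_wrt_x)

    # Toy linear classifier with hinge loss max(0, 1 - y * w.x); on the
    # active hinge region the loss gradient w.r.t. x is -y * w.
    rng = np.random.default_rng(0)
    w = rng.standard_normal(10)
    x = rng.standard_normal(10)
    y = 1.0
    x_adv = fgsm_perturb(x, -y * w, epsilon=0.1)
    print(w @ x, w @ x_adv)   # the margin shrinks after the attack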
arXiv Detail & Related papers (2022-06-19T04:15:29Z)
- Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception [16.60105791126744]
We investigate how adversarial perturbations influence the internal representations of visual neural networks.
We find distinct geometric signatures for each type of network, revealing different mechanisms for achieving robust representations.
Our results shed light on the strategies of robust perception networks, and help explain how stochasticity may be beneficial to machine and biological computation.
arXiv Detail & Related papers (2021-11-12T22:59:45Z)
- Learning to infer in recurrent biological networks [4.56877715768796]
We argue that the cortex may learn with an adversarial algorithm.
We illustrate the idea on recurrent neural networks trained to model image and video datasets.
arXiv Detail & Related papers (2020-06-18T19:04:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.