Neural Population Geometry Reveals the Role of Stochasticity in Robust
Perception
- URL: http://arxiv.org/abs/2111.06979v1
- Date: Fri, 12 Nov 2021 22:59:45 GMT
- Title: Neural Population Geometry Reveals the Role of Stochasticity in Robust
Perception
- Authors: Joel Dapello, Jenelle Feather, Hang Le, Tiago Marques, David D. Cox,
Josh H. McDermott, James J. DiCarlo, SueYeon Chung
- Abstract summary: We investigate how adversarial perturbations influence the internal representations of visual neural networks.
We find distinct geometric signatures for each type of network, revealing different mechanisms for achieving robust representations.
Our results shed light on the strategies used by adversarially trained and stochastic networks to achieve robust perception, and help explain how stochasticity may be beneficial to machine and biological computation.
- Score: 16.60105791126744
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples are often cited by neuroscientists and machine learning
researchers as an example of how computational models diverge from biological
sensory systems. Recent work has proposed adding biologically-inspired
components to visual neural networks as a way to improve their adversarial
robustness. One surprisingly effective component for reducing adversarial
vulnerability is response stochasticity, like that exhibited by biological
neurons. Here, using recently developed geometrical techniques from
computational neuroscience, we investigate how adversarial perturbations
influence the internal representations of standard, adversarially trained, and
biologically-inspired stochastic networks. We find distinct geometric
signatures for each type of network, revealing different mechanisms for
achieving robust representations. Next, we generalize these results to the
auditory domain, showing that neural stochasticity also makes auditory models
more robust to adversarial perturbations. Geometric analysis of the stochastic
networks reveals overlap between representations of clean and adversarially
perturbed stimuli, and quantitatively demonstrates that competing geometric
effects of stochasticity mediate a tradeoff between adversarial and clean
performance. Our results shed light on the strategies of robust perception
utilized by adversarially trained and stochastic networks, and help explain how
stochasticity may be beneficial to machine and biological computation.
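To make the abstract's pipeline concrete, below is a minimal, hedged sketch (not the paper's implementation): a toy classifier with an optional Gaussian "response stochasticity" layer, a one-step FGSM perturbation, and a crude statistic comparing how far adversarial activations drift from clean ones relative to their spread. The names ToyNet, GaussianNoiseLayer, and representation_shift are illustrative assumptions; the paper itself analyzes biologically inspired stochastic networks and adversarially trained networks with manifold-geometry techniques that this toy code does not reproduce.

```python
# Minimal sketch (not the paper's implementation): a toy classifier with an
# optional stochastic layer, a one-step FGSM attack, and a crude statistic
# comparing clean vs. adversarial hidden representations.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GaussianNoiseLayer(nn.Module):
    """Adds zero-mean Gaussian noise to activations on every forward pass
    (a stand-in for biologically inspired response stochasticity; it fires
    at inference time as well as while the attack is being crafted)."""

    def __init__(self, sigma=0.5):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        return x + self.sigma * torch.randn_like(x)


class ToyNet(nn.Module):
    """Small convolutional classifier with an optional stochastic stage."""

    def __init__(self, stochastic=False):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            GaussianNoiseLayer(0.5) if stochastic else nn.Identity(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.classifier = nn.Linear(16 * 4 * 4, 10)

    def forward(self, x):
        return self.classifier(self.features(x))


def fgsm(model, x, y, eps=0.03):
    """One-step fast gradient sign perturbation of the input batch."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()


def representation_shift(model, x_clean, x_adv):
    """Mean distance between clean and adversarial hidden activations,
    normalized by the spread of the clean activations -- a rough proxy for
    the clean/adversarial representation overlap discussed in the abstract."""
    with torch.no_grad():
        h_clean = model.features(x_clean)
        h_adv = model.features(x_adv)
    shift = (h_adv - h_clean).norm(dim=1).mean()
    spread = (h_clean - h_clean.mean(0)).norm(dim=1).mean()
    return (shift / spread).item()


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.rand(32, 3, 32, 32)       # stand-in images
    y = torch.randint(0, 10, (32,))     # stand-in labels
    for stochastic in (False, True):
        model = ToyNet(stochastic=stochastic)
        x_adv = fgsm(model, x, y)
        print(f"stochastic={stochastic}: relative representation shift = "
              f"{representation_shift(model, x, x_adv):.3f}")
```

Because the attacker's gradient is computed through one random draw of the noise, a perturbation crafted against the stochastic model is partly misaligned with the representation it meets at test time; sweeping sigma upward loosely illustrates, in miniature, the tradeoff the abstract describes between adversarial robustness and clean performance. None of this substitutes for the paper's geometric analyses.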
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Expressivity of Neural Networks with Random Weights and Learned Biases [44.02417750529102]
Recent work has pushed the bounds of universal approximation by showing that arbitrary functions can be learned by tuning only small subsets of a network's parameters.
We provide theoretical and numerical evidence demonstrating that feedforward neural networks with fixed random weights can be trained to perform multiple tasks by learning biases only.
Our results are relevant to neuroscience, where they demonstrate the potential for behaviourally relevant changes in dynamics without modifying synaptic weights (see the bias-only training sketch after this list).
arXiv Detail & Related papers (2024-07-01T04:25:49Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Exploring mechanisms of Neural Robustness: probing the bridge between geometry and spectrum [0.0]
We study the link between representation smoothness and spectrum by using weight, Jacobian and spectral regularization.
Our research aims to understand the interplay between geometry, spectral properties, robustness, and expressivity in neural representations.
arXiv Detail & Related papers (2024-02-05T12:06:00Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Impact of spiking neurons leakages and network recurrences on event-based spatio-temporal pattern recognition [0.0]
Spiking neural networks coupled with neuromorphic hardware and event-based sensors are getting increased interest for low-latency and low-power inference at the edge.
We explore the impact of synaptic and membrane leakages in spiking neurons.
arXiv Detail & Related papers (2022-11-14T21:34:02Z)
- Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations [58.731070632586594]
We provide the first formal analysis for feed-forward neural networks with non-negative monotone activation functions against weight perturbations.
We also design a new theory-driven loss function for training generalizable and robust neural networks against weight perturbations.
arXiv Detail & Related papers (2021-03-03T06:17:03Z)
- Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why the deep neural networks have poor performance under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
- Relationship between manifold smoothness and adversarial vulnerability in deep learning with local errors [2.7834038784275403]
We study the origin of the adversarial vulnerability in artificial neural networks.
Our study reveals that a high generalization accuracy requires a relatively fast power-law decay of the eigen-spectrum of hidden representations.
arXiv Detail & Related papers (2020-07-04T08:47:51Z)
- Can you tell? SSNet -- a Sagittal Stratum-inspired Neural Network Framework for Sentiment Analysis [1.0312968200748118]
We propose a neural network architecture that combines predictions of different models on the same text to construct robust, accurate and computationally efficient classifiers for sentiment analysis.
In particular, we propose a systematic new approach to combining multiple predictions based on a dedicated neural network, and develop a mathematical analysis of it along with state-of-the-art experimental results.
arXiv Detail & Related papers (2020-06-23T12:55:02Z)
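As a companion to the "Expressivity of Neural Networks with Random Weights and Learned Biases" entry above, here is a minimal, hedged sketch of bias-only training: every weight matrix is frozen at its random initialization and only the bias vectors receive gradient updates. The two-layer network, toy task, and hyperparameters are illustrative assumptions, not the cited paper's setup.

```python
# Minimal sketch (not the cited paper's setup): freeze randomly initialized
# weights and train only the bias terms of a small feedforward network.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 2))

# Freeze every weight matrix; leave only the bias vectors trainable.
for name, param in net.named_parameters():
    param.requires_grad_(name.endswith("bias"))

biases = [p for p in net.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(biases, lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Toy task: which half of the input vector has the larger mean?
x = torch.randn(512, 20)
y = (x[:, :10].mean(dim=1) > x[:, 10:].mean(dim=1)).long()

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()          # gradients reach only the bias parameters
    optimizer.step()

with torch.no_grad():
    accuracy = (net(x).argmax(dim=1) == y).float().mean().item()
print(f"final loss {loss.item():.3f}, training accuracy {accuracy:.2f}")
```

Only the bias parameters change during training, so any improvement over chance comes from shifting the activation thresholds of fixed random features, which is the regime the entry above refers to.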