Exploring mechanisms of Neural Robustness: probing the bridge between geometry and spectrum
- URL: http://arxiv.org/abs/2405.00679v1
- Date: Mon, 5 Feb 2024 12:06:00 GMT
- Title: Exploring mechanisms of Neural Robustness: probing the bridge between geometry and spectrum
- Authors: Konstantin Holzhausen, Mia Merlid, Håkon Olav Torvik, Anders Malthe-Sørenssen, Mikkel Elle Lepperød
- Abstract summary: We study the link between representation smoothness and spectrum by using weight, Jacobian and spectral regularization.
Our research aims to understand the interplay between geometry, spectral properties, robustness, and expressivity in neural representations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Backpropagation-optimized artificial neural networks, while precise, lack robustness, leading to unforeseen behaviors that affect their safety. Biological neural systems already solve some of these issues. Thus, understanding the biological mechanisms of robustness is an important step towards building trustworthy and safe systems. Unlike artificial models, biological neurons adjust connectivity based on neighboring cell activity. Robustness in neural representations is hypothesized to correlate with the smoothness of the encoding manifold. Recent work suggests that power-law covariance spectra, observed in the primary visual cortex of mice, indicate a balanced trade-off between accuracy and robustness in representations. Here, we show that unsupervised local learning models with winner-takes-all dynamics learn such power-law representations, providing future studies with a mechanistic model that exhibits this characteristic. Our research aims to understand the interplay between geometry, spectral properties, robustness, and expressivity in neural representations. Hence, we study the link between representation smoothness and spectrum using weight, Jacobian, and spectral regularization while assessing performance and adversarial robustness. Our work serves as a foundation for future research into the mechanisms underlying power-law spectra and optimally smooth encodings in both biological and artificial systems. The insights gained may elucidate the mechanisms that realize robust neural networks in mammalian brains and inform the development of more stable and reliable artificial systems.
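To make the spectral quantity at the center of the abstract concrete, the sketch below estimates the eigenspectrum of a representation's covariance and fits the power-law decay exponent alpha of lambda_n ~ n^(-alpha) by least squares in log-log space. This is a minimal illustration, not the paper's pipeline: the response matrix X is a hypothetical placeholder for recorded or simulated population activity, and the plain eigendecomposition stands in for more careful cross-validated estimators.

```python
import numpy as np

def powerlaw_exponent(X, n_eig=100):
    """Estimate the decay exponent alpha of the covariance eigenspectrum,
    assuming lambda_n ~ n**(-alpha); alpha near 1 is the power-law regime
    associated with a balanced accuracy/robustness trade-off.

    X: hypothetical (n_samples, n_features) matrix of population responses.
    """
    Xc = X - X.mean(axis=0)                 # center the responses
    cov = Xc.T @ Xc / (len(Xc) - 1)         # feature covariance matrix
    eig = np.linalg.eigvalsh(cov)[::-1]     # eigenvalues, descending
    eig = eig[:n_eig]
    eig = eig[eig > 0]                      # guard against numerical zeros
    ranks = np.arange(1, len(eig) + 1)
    # slope of log(lambda_n) versus log(n) gives -alpha
    slope, _ = np.polyfit(np.log(ranks), np.log(eig), 1)
    return -slope

# toy check: i.i.d. Gaussian responses give a relatively flat spectrum (small alpha)
rng = np.random.default_rng(0)
print(f"alpha ~ {powerlaw_exponent(rng.normal(size=(2000, 500))):.2f}")
```

A winner-takes-all local learning model in the paper's sense would be expected to push this exponent toward the value of roughly 1 reported for mouse primary visual cortex.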
Related papers
- Discovering Chunks in Neural Embeddings for Interpretability [53.80157905839065]
We propose leveraging the principle of chunking to interpret artificial neural population activities.
We first demonstrate this concept in recurrent neural networks (RNNs) trained on artificial sequences with imposed regularities.
We identify similar recurring embedding states corresponding to concepts in the input, with perturbations to these states activating or inhibiting the associated concepts.
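A minimal sketch of the recurring-embedding-state idea, under loose assumptions: feed a sequence built from repeated symbol chunks through a small (here untrained, randomly initialized) tanh RNN, cluster the hidden states, and check whether clusters line up with the imposed regularities. Everything here (the two-chunk alphabet, the random RNN, k-means as the state detector) is a toy stand-in for the authors' setup.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# sequence with imposed regularity: chunks [0,1] and [2,3] drawn at random
chunks = [[0, 1], [2, 3]]
seq = [s for i in rng.integers(0, 2, size=200) for s in chunks[i]]

# tiny random tanh RNN as a stand-in for a trained network
n_sym, n_hid = 4, 32
W_in = rng.normal(scale=0.5, size=(n_hid, n_sym))
W_rec = rng.normal(scale=0.3, size=(n_hid, n_hid))
h, states = np.zeros(n_hid), []
for s in seq:
    h = np.tanh(W_in @ np.eye(n_sym)[s] + W_rec @ h)
    states.append(h.copy())
states = np.asarray(states)

# "recurring embedding states" ~ clusters of hidden states; check whether
# each input symbol maps consistently onto one cluster
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(states)
for sym in range(n_sym):
    counts = np.bincount(labels[np.asarray(seq) == sym], minlength=4)
    print(f"symbol {sym}: cluster counts {counts}")
```

With a random RNN the mapping is already fairly consistent because the hidden state is dominated by the current input; in a trained network the clusters would additionally reflect learned chunk structure.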
arXiv Detail & Related papers (2025-02-03T20:30:46Z) - Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
It has long been known in both neuroscience and AI that "binding" between neurons leads to a form of competitive learning.
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN), which can be combined with arbitrary connectivity designs such as fully connected, convolutional, or attentive mechanisms.
We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, uncertainty, and reasoning.
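The summary is terse, so as background intuition here is a minimal classic Kuramoto simulation: phases evolve as dtheta_i/dt = omega_i + (K/N) sum_j sin(theta_j - theta_i), and a large order parameter r means the oscillators have synchronized ("bound"). This is only the textbook mechanism underlying the paper's oscillatory units, not the AKOrN model itself.

```python
import numpy as np

def kuramoto_order(n=64, K=1.5, steps=2000, dt=0.01, seed=0):
    """Euler-integrate d(theta_i)/dt = omega_i + (K/n) sum_j sin(theta_j - theta_i)
    and return the synchronization order parameter r in [0, 1]."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)   # initial phases
    omega = rng.normal(size=n)             # natural frequencies
    for _ in range(steps):
        # entry (i, j) of the difference matrix is theta_j - theta_i
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + (K / n) * coupling)
    return np.abs(np.exp(1j * theta).mean())   # r = |mean of e^{i*theta}|

# above the critical coupling the population locks together (r -> 1)
for K in (0.5, 1.5, 3.0):
    print(f"K={K}: r = {kuramoto_order(K=K):.2f}")
```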
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness? [0.0]
We study adversarial examples of trained neural networks through analytical tools afforded by recent theory advances connecting neural networks and kernel methods.
We show how NTKs allow adversarial examples to be generated in a "training-free" fashion, and demonstrate that they transfer to fool their finite-width neural net counterparts in the "lazy" regime.
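The "training-free" recipe is easy to mimic with a toy stand-in: fit kernel ridge regression, then take a gradient step on the input through the closed-form predictor. The RBF kernel below is an assumption standing in for the true NTK (which depends on the architecture and is typically computed with dedicated tooling such as the neural-tangents library), so this sketch shows the mechanism, not the paper's exact attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy binary task: two Gaussian blobs with labels -1 / +1
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(+1, 0.5, (50, 2))])
y = np.r_[-np.ones(50), np.ones(50)]
sigma2, lam = 1.0, 1e-3

def kern(a, b):
    """RBF kernel, a stand-in for the architecture-dependent NTK."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma2))

alpha = np.linalg.solve(kern(X, X) + lam * np.eye(len(X)), y)  # ridge fit

def f(x):  # closed-form kernel predictor f(x) = k(x, X) @ alpha
    return (kern(x[None], X) @ alpha)[0]

def grad_f(x):  # analytic input gradient of the RBF predictor
    kx = kern(x[None], X)[0]
    return ((X - x) / sigma2 * (kx * alpha)[:, None]).sum(axis=0)

# FGSM-style, training-free attack: step so as to flip the predicted sign
x = X[0].copy()
x_adv = x - 0.5 * np.sign(f(x)) * np.sign(grad_f(x))
print(f"clean score {f(x):+.2f} -> adversarial score {f(x_adv):+.2f}")
```

In the paper's setting the analogous gradient is taken through the NTK of a given architecture, and the resulting perturbations transfer to the corresponding finite-width networks.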
arXiv Detail & Related papers (2022-10-11T16:11:48Z) - On the visual analytic intelligence of neural networks [0.463732827131233]
We present a biologically realistic system that receives inputs from synthetic eye movements (saccades) and processes them with neurons incorporating the dynamics of neocortical neurons.
We show that the biologically inspired network achieves superior accuracy, learns faster and requires fewer parameters than the conventional network.
arXiv Detail & Related papers (2022-09-28T11:50:29Z) - Adversarially trained neural representations may already be as robust as corresponding biological neural representations [66.73634912993006]
We develop a method for performing adversarial visual attacks directly on primate brain activity.
We report that the biological neurons that make up visual systems of primates exhibit susceptibility to adversarial perturbations that is comparable in magnitude to existing (robustly trained) artificial neural networks.
arXiv Detail & Related papers (2022-06-19T04:15:29Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception [16.60105791126744]
We investigate how adversarial perturbations influence the internal representations of visual neural networks.
We find distinct geometric signatures for each type of network, revealing different mechanisms for achieving robust representations.
Our results shed light on the strategies of robust perception networks, and help explain how stochasticity may be beneficial to machine and biological computation.
arXiv Detail & Related papers (2021-11-12T22:59:45Z) - Optimal input representation in neural systems at the edge of chaos [0.0]
We build an artificial neural network and train it to classify images.
We find that the best performance in such a task is obtained when the network operates near the critical point.
We conclude that operating near criticality can have, besides the usually alleged virtues, the advantage of allowing for flexible, robust, and efficient input representations.
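"Near the critical point" can be made tangible with a standard diagnostic, sketched below under generic assumptions: scale a random recurrent weight matrix by a gain g and estimate the divergence rate of nearby trajectories (a crude largest-Lyapunov-exponent estimate). For tanh rate networks the order-to-chaos transition sits near g = 1; the paper's image-classification setup differs, but this is the notion of criticality the summary invokes.

```python
import numpy as np

def divergence_rate(g, n=200, steps=200, eps=1e-6, seed=0):
    """Benettin-style estimate of the largest Lyapunov exponent of
    h_{t+1} = tanh(g * W @ h_t); positive ~ chaos, negative ~ order."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1 / np.sqrt(n), size=(n, n))  # spectral radius ~ 1
    h = rng.normal(size=n)
    v = rng.normal(size=n)
    h2 = h + eps * v / np.linalg.norm(v)   # perturbed twin trajectory
    total = 0.0
    for _ in range(steps):
        h, h2 = np.tanh(g * W @ h), np.tanh(g * W @ h2)
        d = np.linalg.norm(h2 - h)
        total += np.log(d / eps)
        h2 = h + eps * (h2 - h) / d        # renormalize the perturbation
    return total / steps

for g in (0.5, 1.0, 1.5):   # below, near, and above the critical gain
    print(f"g={g}: divergence rate {divergence_rate(g):+.3f}")
```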
arXiv Detail & Related papers (2021-07-12T19:55:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the accuracy of the generated information and is not responsible for any consequences arising from its use.