Brain-like representational straightening of natural movies in robust
feedforward neural networks
- URL: http://arxiv.org/abs/2308.13870v1
- Date: Sat, 26 Aug 2023 13:04:36 GMT
- Authors: Tahereh Toosi and Elias B. Issa
- Abstract summary: Representational straightening refers to a decrease in curvature of visual feature representations of a sequence of frames taken from natural movies.
We show that robustness to noise in the input image can produce representational straightening in feedforward neural networks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Representational straightening refers to a decrease in curvature of visual
feature representations of a sequence of frames taken from natural movies.
Prior work established straightening in neural representations of the primate
primary visual cortex (V1) and perceptual straightening in human behavior as a
hallmark of biological vision, in contrast to artificial feedforward neural
networks, which did not demonstrate this phenomenon because they were not
explicitly optimized to produce temporally predictable movie representations.
Here, we
show that robustness to noise in the input image can produce representational
straightening in feedforward neural networks. Both adversarial training (AT)
and base classifiers for Random Smoothing (RS) induced remarkably straightened
feature codes. Demonstrating their utility within the domain of natural movies,
these codes could be inverted to generate intervening movie frames by linear
interpolation in the feature space even though they were not trained on these
trajectories. Demonstrating their biological utility, we found that AT and RS
training improved predictions of neural data in primate V1 over baseline
models, providing a parsimonious, bio-plausible mechanism -- noise in the
sensory input stages -- for generating representations in early visual cortex.
Finally, we
compared the geometric properties of frame representations in these networks to
better understand how they produced representations that mimicked the
straightening phenomenon from biology. Overall, by elucidating emergent
properties of robust neural networks, this work demonstrates that predictive
objectives and direct training on natural movie statistics are not necessary
to achieve models that support straightened movie representations similar to
human perception and that also predict V1 neural responses.
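The straightening metric referenced throughout has a compact definition: the angle between successive difference vectors along the trajectory of frame representations, with straighter trajectories having smaller mean angles. A minimal NumPy sketch of that curvature measure follows (the authors' exact preprocessing of pixel or feature domains is not specified in this summary, so treat this as illustrative):

```python
import numpy as np

def trajectory_curvature(features: np.ndarray) -> float:
    """Mean curvature, in degrees, of a sequence of feature vectors.

    features: array of shape (n_frames, n_dims), one row per movie frame.
    Curvature at frame t is the angle between successive difference
    vectors v_t = x_{t+1} - x_t; representational straightening
    corresponds to a decrease in the mean of these angles.
    """
    diffs = np.diff(features, axis=0)                  # (n_frames-1, n_dims)
    diffs /= np.linalg.norm(diffs, axis=1, keepdims=True)
    cosines = np.sum(diffs[:-1] * diffs[1:], axis=1)   # cos of each turn angle
    angles = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
    return float(angles.mean())

# A perfectly straight trajectory has curvature 0 degrees.
line = np.outer(np.arange(5), np.ones(3))
print(trajectory_curvature(line))  # → 0.0
```

A trajectory tracing the corners of a square would score 90 degrees, since each step turns at a right angle relative to the previous one.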
Related papers
- Learning predictable and robust neural representations by straightening image sequences [16.504807843249196]
We develop a self-supervised learning (SSL) objective that explicitly quantifies and promotes straightening.
We demonstrate the power of this objective in training deep feedforward neural networks on smoothly-rendered synthetic image sequences.
arXiv Detail & Related papers (2024-11-04T03:58:09Z)
- Long-Range Feedback Spiking Network Captures Dynamic and Static Representations of the Visual Cortex under Movie Stimuli [25.454851828755054]
There is limited insight into how the visual cortex represents natural movie stimuli that contain context-rich information.
This work proposes the long-range feedback spiking network (LoRaFB-SNet), which mimics top-down connections between cortical regions.
We present Time-Series Representational Similarity Analysis (TSRSA) to measure the similarity between model representations and visual cortical representations of mice.
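As a rough illustration of representational similarity analysis in general (not the paper's exact TSRSA recipe, whose time-series construction is not detailed in this summary), one can correlate the model's and the brain's representational dissimilarity matrices (RDMs) over their upper triangles:

```python
import numpy as np

def rsa_score(model_feats: np.ndarray, neural_resps: np.ndarray) -> float:
    """Pearson correlation between model and neural RDM upper triangles.

    Both inputs have shape (n_stimuli, n_units); n_stimuli must match,
    but the number of units/neurons may differ between the two.
    """
    def rdm(x):
        # stimulus-by-stimulus dissimilarity: 1 - correlation of patterns
        return 1.0 - np.corrcoef(x)

    iu = np.triu_indices(model_feats.shape[0], k=1)
    v_model, v_neural = rdm(model_feats)[iu], rdm(neural_resps)[iu]
    return float(np.corrcoef(v_model, v_neural)[0, 1])
```

Identical representations yield a score of 1.0; published RSA variants often substitute Spearman rank correlation for the final Pearson step.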
arXiv Detail & Related papers (2023-06-02T08:25:58Z)
- A polar prediction model for learning to represent visual transformations [10.857320773825357]
We propose a self-supervised representation-learning framework that exploits the regularities of natural videos to compute accurate predictions.
When trained on natural video datasets, our framework achieves better prediction performance than traditional motion compensation.
Our approach offers a principled framework for understanding how the visual system represents sensory inputs in a form that simplifies temporal prediction.
arXiv Detail & Related papers (2023-03-06T19:00:59Z)
- Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z)
- Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models for their functional analogue in the brain, the ventral stream of visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z)
- Neural Implicit Representations for Physical Parameter Inference from a Single Video [49.766574469284485]
We propose to combine neural implicit representations for appearance modeling with neural ordinary differential equations (ODEs) for modelling physical phenomena.
Our proposed model combines several unique advantages: (i) Contrary to existing approaches that require large training datasets, we are able to identify physical parameters from only a single video.
The use of neural implicit representations enables the processing of high-resolution videos and the synthesis of photo-realistic images.
arXiv Detail & Related papers (2022-04-29T11:55:35Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Learning Personal Representations from fMRI by Predicting Neurofeedback Performance [52.77024349608834]
We present a deep neural network method for learning a personal representation for individuals performing a self-neuromodulation task, guided by functional MRI (fMRI).
The representation is learned by a self-supervised recurrent neural network, that predicts the Amygdala activity in the next fMRI frame given recent fMRI frames and is conditioned on the learned individual representation.
arXiv Detail & Related papers (2021-12-06T10:16:54Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity [33.06823702945747]
We introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE.
Our approach combines a generative modeling framework with an instance-specific alignment loss.
We show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
arXiv Detail & Related papers (2021-11-03T16:39:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.