Visualizing and Understanding Vision System
- URL: http://arxiv.org/abs/2006.11413v1
- Date: Thu, 11 Jun 2020 07:08:49 GMT
- Title: Visualizing and Understanding Vision System
- Authors: Feng Qi, Guanjun Jiang
- Abstract summary: We use a vision recognition-reconstruction network (RRN) to investigate development, recognition, learning, and forgetting mechanisms.
In the digit recognition study, we observe that the RRN maintains an object-invariant representation under various viewing conditions.
In the learning and forgetting study, novel structure recognition is implemented by adjusting all synapses with low-magnitude updates while the pattern specificities of the original synaptic connectivity are preserved.
- Score: 0.6510507449705342
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How the human vision system addresses the object identity-preserving
recognition problem is largely unknown. Here, we use a vision
recognition-reconstruction network (RRN) to investigate development,
recognition, learning, and forgetting mechanisms, and we reproduce
characteristics similar to electrophysiological measurements in monkeys. First,
in the network development study, the RRN also passes through critical
developmental stages, characterized by specificities in neuron types, synapse
and activation patterns, and visual task performance, progressing from an early
stage of coarse salience-map recognition to a mature stage of fine structure
recognition. In the digit recognition study, we observe that the RRN maintains
an object-invariant representation under various viewing conditions through
coordinated adjustment of the responses of population neurons. These concerted
population responses contain untangled object identity and property information
that can be accurately extracted by higher-level cortices or even by a simple
weighted-summation decoder. In the learning and forgetting study, novel
structure recognition is implemented by adjusting all synapses with
low-magnitude updates while preserving the pattern specificities of the
original synaptic connectivity, which guarantees learning without disrupting
existing functionality. This work benefits the understanding of the human
visual processing mechanism and the development of human-like machine
intelligence.
Related papers
- Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture compatible and scalable with deep learning frameworks.
We show end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
arXiv Detail & Related papers (2024-04-22T09:40:07Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Unidirectional brain-computer interface: Artificial neural network encoding natural images to fMRI response in the visual cortex [12.1427193917406]
We propose an artificial neural network dubbed VISION to mimic the human brain and show how it can foster neuroscientific inquiries.
VISION successfully predicts human hemodynamic responses as fMRI voxel values to visual inputs with an accuracy exceeding state-of-the-art performance by 45%.
arXiv Detail & Related papers (2023-09-26T15:38:26Z)
- Multi-task Collaborative Pre-training and Individual-adaptive-tokens Fine-tuning: A Unified Framework for Brain Representation Learning [3.1453938549636185]
We propose a unified framework, MCIAT, that combines multi-task collaborative pre-training with individual-adaptive-tokens fine-tuning.
The proposed MCIAT achieves state-of-the-art diagnosis performance on the ADHD-200 dataset.
arXiv Detail & Related papers (2023-06-20T08:38:17Z)
- BI-AVAN: Brain-inspired Adversarial Visual Attention Network [67.05560966998559]
We propose a brain-inspired adversarial visual attention network (BI-AVAN) to characterize human visual attention directly from functional brain activity.
Our model imitates the biased competition process between attention-related/neglected objects to identify and locate the visual objects in a movie frame the human brain focuses on in an unsupervised manner.
arXiv Detail & Related papers (2022-10-27T22:20:36Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity [33.06823702945747]
We introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE.
Our approach combines a generative modeling framework with an instance-specific alignment loss.
We show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
arXiv Detail & Related papers (2021-11-03T16:39:43Z)
- Capturing the objects of vision with neural networks [0.0]
Human visual perception carves a scene at its physical joints, decomposing the world into objects.
Deep neural network (DNN) models of visual object recognition, by contrast, remain largely tethered to the sensory input.
We review related work in both fields and examine how these fields can help each other.
arXiv Detail & Related papers (2021-09-07T21:49:53Z)
- Deep Collaborative Multi-Modal Learning for Unsupervised Kinship Estimation [53.62256887837659]
Kinship verification is a long-standing research challenge in computer vision.
We propose a novel deep collaborative multi-modal learning (DCML) to integrate the underlying information presented in facial properties.
Our DCML method consistently outperforms state-of-the-art kinship verification methods.
arXiv Detail & Related papers (2021-09-07T01:34:51Z)
- Learning Intermediate Features of Object Affordances with a Convolutional Neural Network [1.52292571922932]
We train a deep convolutional neural network (CNN) to recognize affordances from images and to learn the underlying features or the dimensionality of affordances.
We view this representational analysis as the first step towards a more formal account of how humans perceive and interact with the environment.
arXiv Detail & Related papers (2020-02-20T19:04:40Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that a machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.