Object Based Attention Through Internal Gating
- URL: http://arxiv.org/abs/2106.04540v1
- Date: Tue, 8 Jun 2021 17:20:50 GMT
- Title: Object Based Attention Through Internal Gating
- Authors: Jordan Lei, Ari S. Benjamin, Konrad P. Kording
- Abstract summary: We propose an artificial neural network model of object-based attention.
Our model captures the way in which attention is both top-down and recurrent.
We find that our model replicates a range of findings from neuroscience.
- Score: 4.941630596191806
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Object-based attention is a key component of the visual system, relevant for
perception, learning, and memory. Neurons tuned to features of attended objects
tend to be more active than those associated with non-attended objects. There
is a rich set of models of this phenomenon in computational neuroscience.
However, there is currently a divide between models that successfully match
physiological data but can only deal with extremely simple problems and models
of attention used in computer vision. For example, attention in the brain is
known to depend on top-down processing, whereas self-attention in deep learning
does not. Here, we propose an artificial neural network model of object-based
attention that captures the way in which attention is both top-down and
recurrent. Our attention model works well both on simple test stimuli, such as
those using images of handwritten digits, and on more complex stimuli, such as
natural images drawn from the COCO dataset. We find that our model replicates a
range of findings from neuroscience, including attention-invariant tuning,
inhibition of return, and attention-mediated scaling of activity. Understanding
object-based attention is both computationally interesting and a key problem
for computational neuroscience.
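The abstract describes attention that is both top-down and recurrent and is implemented through internal gating. The sketch below shows one way such a mechanism could look in PyTorch, under assumptions not taken from the paper: a hypothetical object query drives sigmoid channel gains that multiplicatively gate bottom-up convolutional features, and the query is updated from the gated activity over a few recurrent steps. Module names, dimensions, and the update rule are all illustrative.

```python
# Minimal sketch of top-down, recurrent multiplicative gating (illustrative
# only; not the authors' implementation). A hypothetical object "query"
# produces per-channel gains that gate bottom-up feature maps, and the query
# is itself updated from the gated activity over a few recurrent steps.
import torch
import torch.nn as nn


class GatedAttentionSketch(nn.Module):
    def __init__(self, in_channels=1, feat_channels=32, query_dim=16, num_classes=10):
        super().__init__()
        # Bottom-up feature extractor (a single conv layer for brevity).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Top-down pathway: maps the object query to per-channel gains in (0, 1).
        self.gate = nn.Sequential(nn.Linear(query_dim, feat_channels), nn.Sigmoid())
        # Recurrent update of the top-down query from the gated activity.
        self.update = nn.Linear(feat_channels, query_dim)
        self.readout = nn.Linear(feat_channels, num_classes)

    def forward(self, image, query, steps=3):
        feats = self.encoder(image)                      # (B, C, H, W) bottom-up features
        gated = feats
        for _ in range(steps):
            gains = self.gate(query)[:, :, None, None]   # (B, C, 1, 1) channel gains
            gated = feats * gains                        # multiplicative "internal" gating
            query = torch.tanh(self.update(gated.mean(dim=(2, 3))))  # top-down update
        return self.readout(gated.mean(dim=(2, 3)))      # classify the attended object


# Example usage on MNIST-sized input with a hypothetical 16-d object query.
model = GatedAttentionSketch()
logits = model(torch.randn(4, 1, 28, 28), torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 10])
```

Multiplicative gating is chosen here because it naturally produces the attention-mediated scaling of activity mentioned in the abstract; the paper's actual gating, readout, and training objective may differ.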
Related papers
- Neural Dynamics Model of Visual Decision-Making: Learning from Human Experts [28.340344705437758]
We implement a comprehensive visual decision-making model that spans from visual input to behavioral output.
Our model aligns closely with human behavior and reflects neural activities in primates.
A neuroimaging-informed fine-tuning approach was introduced and applied to the model, leading to performance improvements.
arXiv Detail & Related papers (2024-09-04T02:38:52Z) - Parallel Backpropagation for Shared-Feature Visualization [36.31730251757713]
Recent work has shown that some out-of-category stimuli also activate neurons in high-level visual brain regions.
This may be due to visual features common among the preferred class also being present in other images.
Here, we propose a deep-learning-based approach for visualizing these features.
arXiv Detail & Related papers (2024-05-16T05:56:03Z) - Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z) - BI AVAN: Brain inspired Adversarial Visual Attention Network [67.05560966998559]
We propose a brain-inspired adversarial visual attention network (BI-AVAN) to characterize human visual attention directly from functional brain activity.
Our model imitates the biased-competition process between attended and neglected objects to identify and locate, in an unsupervised manner, the visual objects in a movie frame that the human brain focuses on.
arXiv Detail & Related papers (2022-10-27T22:20:36Z) - Bi-directional Object-context Prioritization Learning for Saliency
Ranking [60.62461793691836]
Existing approaches focus on learning either object-object or object-scene relations.
We observe that spatial attention works concurrently with object-based attention in the human visual recognition system.
We propose a novel bi-directional method to unify spatial attention and object-based attention for saliency ranking.
arXiv Detail & Related papers (2022-03-17T16:16:03Z) - Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action
Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic
Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - Neural encoding with visual attention [17.020869686284165]
We propose a novel approach to neural encoding by including a trainable soft-attention module.
We find that attention locations estimated by the model on independent data agree well with the corresponding eye fixation patterns.
arXiv Detail & Related papers (2020-10-01T16:04:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.