Guiding Visual Attention in Deep Convolutional Neural Networks Based on
Human Eye Movements
- URL: http://arxiv.org/abs/2206.10587v1
- Date: Tue, 21 Jun 2022 17:59:23 GMT
- Title: Guiding Visual Attention in Deep Convolutional Neural Networks Based on
Human Eye Movements
- Authors: Leonard E. van Dyck, Sebastian J. Denzler, Walter R. Gruber
- Abstract summary: Deep Convolutional Neural Networks (DCNNs) were originally inspired by principles of biological vision.
Recent advances in deep learning seem to decrease this similarity.
We investigate a purely data-driven approach: guiding the models' visual attention with human eye-tracking data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Convolutional Neural Networks (DCNNs) were originally inspired by
principles of biological vision, have evolved into the best current computational
models of object recognition, and consequently show strong architectural
and functional parallels with the ventral visual pathway, as revealed by
comparisons with neuroimaging and neural time series data. As recent advances
in deep learning seem to decrease this similarity, computational neuroscience
is challenged to reverse-engineer biological plausibility to obtain useful
models. While previous studies have shown that biologically inspired
architectures are able to amplify the human-likeness of the models, in this
study, we investigate a purely data-driven approach. We use human eye tracking
data to directly modify training examples and thereby guide the models' visual
attention during object recognition in natural images either towards or away
from the focus of human fixations. We compare and validate the different
manipulation types (i.e., standard, human-like, and non-human-like attention)
by testing GradCAM saliency maps against human participant eye-tracking data. Our
results demonstrate that the proposed guided focus manipulation works as
intended in the negative direction: non-human-like models focus on
significantly dissimilar image parts compared to humans. The observed effects
were highly category-specific, enhanced by animacy and face presence, developed
only after feedforward processing was completed, and indicated a strong
influence on face detection. With this approach, however, no significantly
increased human-likeness was found. Possible applications of overt visual
attention in DCNNs and further implications for theories of face detection are
discussed.
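The manipulation described in the abstract lends itself to a compact illustration. The sketch below is a minimal, assumed implementation, not the authors' code: it builds a Gaussian fixation-density map from eye-tracking points, blends a sharp image with a blurred copy so that fine detail survives either at ("human-like") or away from ("non-human-like") human fixations, and rank-correlates a GradCAM map with the fixation map as a simple agreement score. The blur-based masking, the sigma values, and the Spearman metric are illustrative choices.

```python
# Minimal sketch of fixation-guided image manipulation, assuming float images
# of shape (H, W, 3) in [0, 1] and fixations as (row, col) pixel coordinates.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import spearmanr

def fixation_map(fixations, shape, sigma=25.0):
    """Turn fixation points into a smooth, normalized density map."""
    density = np.zeros(shape, dtype=np.float32)
    for r, c in fixations:
        density[int(r), int(c)] += 1.0
    density = gaussian_filter(density, sigma=sigma)
    return density / (density.max() + 1e-8)

def guide_attention(image, fixations, mode="human-like", blur_sigma=8.0):
    """Blend a sharp image with a blurred copy so fine detail survives only
    where the mask is high: at human fixations ("human-like") or away from
    them ("non-human-like"). "standard" leaves the image untouched."""
    if mode == "standard":
        return image
    mask = fixation_map(fixations, image.shape[:2])
    if mode == "non-human-like":
        mask = 1.0 - mask  # steer model attention away from human fixations
    blurred = np.stack(
        [gaussian_filter(image[..., ch], blur_sigma)
         for ch in range(image.shape[-1])],
        axis=-1,
    )
    mask = mask[..., None]
    return mask * image + (1.0 - mask) * blurred

def saliency_agreement(gradcam_map, fixation_density):
    """Rank-correlate a model's GradCAM map with the human fixation map."""
    rho, _ = spearmanr(gradcam_map.ravel(), fixation_density.ravel())
    return rho
```

A "standard" model simply trains on unmanipulated images; the agreement score can then be compared across the three training regimes.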
Related papers
- Neural Dynamics Model of Visual Decision-Making: Learning from Human Experts [28.340344705437758]
We implement a comprehensive visual decision-making model that spans from visual input to behavioral output.
Our model aligns closely with human behavior and reflects neural activities in primates.
A neuroimaging-informed fine-tuning approach was introduced and applied to the model, leading to performance improvements.
arXiv Detail & Related papers (2024-09-04T02:38:52Z)
- BI AVAN: Brain inspired Adversarial Visual Attention Network [67.05560966998559]
We propose a brain-inspired adversarial visual attention network (BI-AVAN) to characterize human visual attention directly from functional brain activity.
Our model imitates the biased competition process between attention-related/neglected objects to identify and locate, in an unsupervised manner, the visual objects in a movie frame that the human brain focuses on.
arXiv Detail & Related papers (2022-10-27T22:20:36Z)
- Human Eyes Inspired Recurrent Neural Networks are More Robust Against Adversarial Noises [7.689542442882423]
We designed a dual-stream vision model inspired by the human brain.
This model features retina-like input layers and includes two streams: one determines the next point of focus (the fixation), while the other interprets the visual content surrounding the fixation.
We evaluated this model against various benchmarks in terms of object recognition, gaze behavior and adversarial robustness.
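As a rough illustration of such a dual-stream design, the sketch below is written under assumptions, not taken from the authors' model: the layer sizes, the GRU core, and the affine-grid cropping are all placeholders. One stream proposes the next fixation while the other classifies a crop around the current one.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualStreamGlimpse(nn.Module):
    def __init__(self, n_classes=1000, glimpse=64, hidden=256):
        super().__init__()
        self.glimpse, self.hidden = glimpse, hidden
        self.what = nn.Sequential(            # interprets the foveal crop
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, hidden))
        self.rnn = nn.GRUCell(hidden, hidden) # accumulates evidence over glimpses
        self.where = nn.Linear(hidden, 2)     # proposes the next fixation (x, y)
        self.head = nn.Linear(hidden, n_classes)

    def _crop(self, image, loc):
        # Differentiable square crop around the fixation (assumes H == W).
        b, _, h, w = image.shape
        theta = torch.zeros(b, 2, 3, device=image.device)
        theta[:, 0, 0] = self.glimpse / w
        theta[:, 1, 1] = self.glimpse / h
        theta[:, :, 2] = loc                  # fixation in normalized [-1, 1] coords
        grid = F.affine_grid(theta, (b, 3, self.glimpse, self.glimpse),
                             align_corners=False)
        return F.grid_sample(image, grid, align_corners=False)

    def forward(self, image, n_glimpses=4):
        b = image.size(0)
        h = image.new_zeros(b, self.hidden)
        loc = image.new_zeros(b, 2)           # start at the image center
        for _ in range(n_glimpses):
            h = self.rnn(self.what(self._crop(image, loc)), h)
            loc = torch.tanh(self.where(h))   # "where" stream picks the next fixation
        return self.head(h)                   # "what" pathway output -> class logits
```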
arXiv Detail & Related papers (2022-06-15T03:44:42Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Comparing object recognition in humans and deep convolutional neural networks -- An eye tracking study [7.222232547612573]
Deep convolutional neural networks (DCNNs) and the ventral visual pathway share vast architectural and functional similarities.
We present a comparison of human observers (N = 45) and three feedforward DCNNs through eye tracking and saliency maps.
A DCNN with biologically plausible receptive field sizes, called vNet, shows higher agreement with human viewing behavior than a standard ResNet architecture.
arXiv Detail & Related papers (2021-07-30T23:32:05Z)
- Passive attention in artificial neural networks predicts human visual selectivity [8.50463394182796]
We show that passive attention techniques reveal a significant overlap with human visual selectivity estimates.
We validate these correlational results with causal manipulations using recognition experiments.
This work contributes a new approach to evaluating the biological and psychological validity of leading ANNs as models of human vision.
arXiv Detail & Related papers (2021-07-14T21:21:48Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory (LSTM) units, as well as inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
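A convolutional recurrent model of this kind is straightforward to sketch. The following is an illustrative, assumed layout rather than the paper's exact architecture: a per-frame 2D-CNN backbone feeds an LSTM that regresses valence and arousal for every frame.

```python
# Illustrative CNN-LSTM layout (assumed, not the paper's exact architecture).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CnnLstmRegressor(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        backbone = resnet18(weights=None)   # 2D-CNN applied to each frame
        backbone.fc = nn.Identity()         # expose the 512-d pooled features
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)    # continuous valence and arousal

    def forward(self, clip):                # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)           # temporal context across frames
        return self.head(out)               # per-frame predictions (B, T, 2)
```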
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- Fooling the primate brain with minimal, targeted image manipulation [67.78919304747498]
We propose an array of methods for creating minimal, targeted image perturbations that lead to changes in both neuronal activity and perception as reflected in behavior.
Our work shares the same goal as adversarial attacks, namely the manipulation of images with minimal, targeted noise that leads ANN models to misclassify them.
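For intuition, a generic targeted perturbation under a small L2 budget can be sketched as below. This is a standard gradient-based method offered as an assumption, not the paper's neuron-level procedure; the `eps` budget, step count, and optimizer are illustrative.

```python
# Generic targeted perturbation under a small L2 budget. `model` maps
# (B, 3, H, W) images to class logits; `target_class` is a LongTensor of
# shape (B,) holding the desired labels.
import torch
import torch.nn.functional as F

def targeted_perturbation(model, image, target_class, steps=50, lr=0.01, eps=1.0):
    for p in model.parameters():             # attack the input, not the weights
        p.requires_grad_(False)
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(model(image + delta), target_class)
        opt.zero_grad()
        loss.backward()                      # pull the output toward the target
        opt.step()
        with torch.no_grad():                # project back into the L2 budget
            norm = delta.flatten(1).norm(dim=1).clamp(min=1e-12)
            delta.mul_((eps / norm).clamp(max=1.0).view(-1, 1, 1, 1))
    return (image + delta).detach()
```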
arXiv Detail & Related papers (2020-11-11T08:30:54Z)
- Seeing eye-to-eye? A comparison of object recognition performance in humans and deep convolutional neural networks under image manipulation [0.0]
This study aims at a behavioral comparison of visual core object recognition performance between humans and feedforward neural networks.
Analyses of accuracy revealed that humans not only outperform DCNNs in all conditions, but also display significantly greater robustness to shape and, most notably, color alterations.
arXiv Detail & Related papers (2020-07-13T10:26:30Z)