Automatic selection of eye tracking variables in visual categorization
in adults and infants
- URL: http://arxiv.org/abs/2010.15047v2
- Date: Thu, 26 Nov 2020 14:56:32 GMT
- Title: Automatic selection of eye tracking variables in visual categorization
in adults and infants
- Authors: Samuel Rivera, Catherine A. Best, Hyungwook Yim, Dirk B. Walther,
Vladimir M. Sloutsky, Aleix M. Martinez
- Abstract summary: We propose an automated method for selecting eye tracking variables based on analyses of their usefulness to discriminate learners from non-learners of visual categories.
We found remarkable agreement between these methods in identifying a small set of discriminant variables.
- Score: 0.4194295877935867
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual categorization and learning of visual categories exhibit early onset;
however, the underlying mechanisms of early categorization are not well
understood. The main limiting factor for examining these mechanisms is the
limited duration of infant cooperation (10-15 minutes), which leaves little
room for multiple test trials. With its tight link to visual attention, eye
tracking is a promising method for getting access to the mechanisms of category
learning. But how should researchers decide which aspects of the rich eye
tracking data to focus on? To date, eye tracking variables are generally
handpicked, which may lead to biases in the eye tracking data. Here, we propose
an automated method for selecting eye tracking variables based on analyses of
their usefulness to discriminate learners from non-learners of visual
categories. We presented infants and adults with a category learning task and
tracked their eye movements. We then extracted an over-complete set of eye
tracking variables encompassing durations, probabilities, latencies, and the
order of fixations and saccadic eye movements. We compared three statistical
techniques for identifying those variables among this large set that are useful
for discriminating learners from non-learners: ANOVA ranking, Bayes ranking,
and L1 regularized logistic regression. We found remarkable agreement between
these methods in identifying a small set of discriminant variables. Moreover,
the same eye tracking variables allow us to distinguish category learners from
non-learners among adults and 6- to 8-month-old infants with accuracies above
71%.
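For illustration, a minimal sketch of the variable-selection idea in Python (not the authors' code): rank an over-complete set of eye tracking variables by a univariate ANOVA F-test and by the weights of an L1-regularized logistic regression, keep the variables both methods agree on, and check how well they separate learners from non-learners. The data, variable names, top-5 cutoff, and regularization strength below are synthetic placeholders, scikit-learn is assumed, and the Bayes-ranking step is omitted.

```python
# Sketch of ANOVA ranking vs. L1-regularized logistic regression for
# selecting discriminative eye tracking variables (synthetic placeholder data).
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data: 60 participants x 40 eye tracking variables
# (durations, probabilities, latencies, fixation/saccade order statistics).
n_subjects, n_vars = 60, 40
X = rng.normal(size=(n_subjects, n_vars))
y = rng.integers(0, 2, size=n_subjects)          # 1 = learner, 0 = non-learner
X[y == 1, :3] += 1.0                             # make the first 3 variables informative
var_names = [f"var_{i}" for i in range(n_vars)]  # hypothetical variable names

# ANOVA ranking: univariate F-test of each variable against the group labels.
f_scores, _ = f_classif(X, y)
anova_top = np.argsort(f_scores)[::-1][:5]

# L1-regularized logistic regression: sparsity drives most weights to zero,
# so the surviving nonzero weights act as a variable selection.
l1_model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
l1_model.fit(X, y)
coefs = np.abs(l1_model.named_steps["logisticregression"].coef_.ravel())
l1_top = np.argsort(coefs)[::-1][:5]

print("Top variables (ANOVA):", [var_names[i] for i in anova_top])
print("Top variables (L1)   :", [var_names[i] for i in l1_top])

# Classify learners vs. non-learners using only the agreed-upon variables.
selected = sorted(set(anova_top) & set(l1_top)) or list(anova_top)
acc = cross_val_score(
    make_pipeline(StandardScaler(), LogisticRegression()),
    X[:, selected], y, cv=5, scoring="accuracy",
)
print(f"Cross-validated accuracy on selected variables: {acc.mean():.2f}")
```

With real data, the feature matrix would be the extracted eye tracking variables per participant and the labels would come from a behavioral learning criterion; the cross-validated accuracy is the analogue of the above-71% classification result reported in the abstract.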
Related papers
- Multi-task Explainable Skin Lesion Classification [54.76511683427566]
We propose a few-shot-based approach for skin lesions that generalizes well with little labelled data.
The proposed approach comprises a fusion of a segmentation network that acts as an attention module and classification network.
arXiv Detail & Related papers (2023-10-11T05:49:47Z) - A temporally quantized distribution of pupil diameters as a new feature
for cognitive load classification [1.4469849628263638]
We present a new feature that can be used to classify cognitive load based on pupil information.
Determining cognitive load from pupil data has numerous applications and could enable early-warning systems for burnout.
arXiv Detail & Related papers (2023-03-03T07:52:16Z) - Few-Shot Meta Learning for Recognizing Facial Phenotypes of Genetic
Disorders [55.41644538483948]
Automated classification and similarity retrieval aid physicians in decision-making to diagnose possible genetic conditions as early as possible.
Previous work has addressed the problem as a classification problem and used deep learning methods.
In this study, we used a facial recognition model trained on a large corpus of healthy individuals as a pre-task and transferred it to facial phenotype recognition.
arXiv Detail & Related papers (2022-10-23T11:52:57Z) - Visual Knowledge Tracing [26.446317829793454]
We propose a novel task of tracing the evolving classification behavior of human learners.
We propose models that jointly extract the visual features used by learners as well as predicting the classification functions they utilize.
Our results show that our recurrent models are able to predict the classification behavior of human learners on three challenging medical image and species identification tasks.
arXiv Detail & Related papers (2022-07-20T19:24:57Z) - SEGA: Semantic Guided Attention on Visual Prototype for Few-Shot
Learning [85.2093650907943]
We propose SEmantic Guided Attention (SEGA) to teach machines to recognize a new category.
SEGA uses semantic knowledge to guide the visual perception in a top-down manner about what visual features should be paid attention to.
We show that our semantic guided attention realizes anticipated function and outperforms state-of-the-art results.
arXiv Detail & Related papers (2021-11-08T08:03:44Z) - Passive attention in artificial neural networks predicts human visual
selectivity [8.50463394182796]
We show that passive attention techniques reveal a significant overlap with human visual selectivity estimates.
We validate these correlational results with causal manipulations using recognition experiments.
This work contributes a new approach to evaluating the biological and psychological validity of leading ANNs as models of human vision.
arXiv Detail & Related papers (2021-07-14T21:21:48Z) - CLRGaze: Contrastive Learning of Representations for Eye Movement
Signals [0.0]
We learn feature vectors of eye movements in a self-supervised manner.
We adopt a contrastive learning approach and propose a set of data transformations that encourage a deep neural network to discern salient and granular gaze patterns.
arXiv Detail & Related papers (2020-10-25T06:12:06Z) - Classifying Eye-Tracking Data Using Saliency Maps [8.524684315458245]
This paper proposes a visual saliency based novel feature extraction method for automatic and quantitative classification of eye-tracking data.
Comparing the saliency amplitudes, similarity and dissimilarity of saliency maps with the corresponding eye fixations maps gives an extra dimension of information which is effectively utilized to generate discriminative features to classify the eye-tracking data.
arXiv Detail & Related papers (2020-10-24T15:18:07Z) - Dataset Bias in Few-shot Image Recognition [57.25445414402398]
We first investigate the impact of transferable capabilities learned from base categories.
Second, we investigate performance differences on different datasets from dataset structures and different few-shot learning methods.
arXiv Detail & Related papers (2020-08-18T14:46:23Z) - 1-D Convlutional Neural Networks for the Analysis of Pupil Size
Variations in Scotopic Conditions [79.71065005161566]
1-D convolutional neural network models are trained to classify short-range sequences.
The model provides predictions with high average accuracy on a held-out test set.
arXiv Detail & Related papers (2020-02-06T17:25:37Z) - End-to-End Models for the Analysis of System 1 and System 2 Interactions
based on Eye-Tracking Data [99.00520068425759]
We propose a computational method, within a modified visual version of the well-known Stroop test, for the identification of different tasks and potential conflicts events.
A statistical analysis shows that the selected variables can characterize the variation of attentive load within different scenarios.
We show that machine learning techniques make it possible to distinguish between different tasks with good classification accuracy.
arXiv Detail & Related papers (2020-02-03T17:46:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.