Classifying Eye-Tracking Data Using Saliency Maps
- URL: http://arxiv.org/abs/2010.12913v1
- Date: Sat, 24 Oct 2020 15:18:07 GMT
- Title: Classifying Eye-Tracking Data Using Saliency Maps
- Authors: Shafin Rahman, Sejuti Rahman, Omar Shahid, Md. Tahmeed Abdullah,
Jubair Ahmed Sourov
- Abstract summary: This paper proposes a novel visual-saliency-based feature extraction method for automatic and quantitative classification of eye-tracking data.
Comparing the saliency amplitudes, similarity, and dissimilarity of saliency maps with the corresponding eye fixation maps gives an extra dimension of information, which is effectively utilized to generate discriminative features for classifying the eye-tracking data.
- Score: 8.524684315458245
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A plethora of research in the literature shows how human eye fixation pattern
varies depending on different factors, including genetics, age, social
functioning, cognitive functioning, and so on. Analysis of these variations in
visual attention has already elicited two potential research avenues: 1)
determining the physiological or psychological state of the subject and 2)
predicting the tasks associated with the act of viewing from the recorded
eye-fixation data. To this end, this paper proposes a novel visual-saliency-based
feature extraction method for automatic and quantitative classification of
eye-tracking data, which is applicable to both of these research directions.
Instead of directly extracting features from the fixation data, this method
employs several well-known computational models of visual attention to predict
eye fixation locations as saliency maps. Comparing the saliency amplitudes,
similarity, and dissimilarity of the saliency maps with the corresponding eye
fixation maps gives an extra dimension of information, which is effectively
utilized to generate discriminative features for classifying the eye-tracking data.
Extensive experimentation using the Saliency4ASD, Age Prediction, and Visual
Perceptual Task datasets shows that our saliency-based features can achieve
superior performance, outperforming the previous state-of-the-art methods by a
considerable margin. Moreover, unlike the existing application-specific
solutions, our method demonstrates performance improvement across three
distinct problems from the real-life domain: Autism Spectrum Disorder
screening, toddler age prediction, and human visual perceptual task
classification, providing a general paradigm that utilizes the
extra information inherent in saliency maps for more accurate classification.
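As a rough illustration of the comparison described in the abstract, the sketch below builds a feature vector by scoring how well a model-predicted saliency map matches the recorded fixation map, then feeds those features to a classifier. This is a minimal sketch under assumptions: the helper names, the specific comparison metrics (CC, KL divergence, SIM, saliency amplitude at fixated pixels), and the SVM classifier are illustrative choices, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def normalize(m):
    """Scale a map to [0, 1]; flat maps become all zeros."""
    m = m.astype(np.float64)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def saliency_features(saliency_map, fixation_map):
    """Compare one predicted saliency map with the observed fixation map.

    Returns a small vector of similarity/dissimilarity scores; the actual
    feature set used in the paper may differ.
    """
    s = normalize(saliency_map).ravel()
    f = normalize(fixation_map).ravel()
    cc = np.corrcoef(s, f)[0, 1]                         # linear correlation (CC)
    s_d = s / (s.sum() + 1e-12)                          # treat maps as distributions
    f_d = f / (f.sum() + 1e-12)
    kl = np.sum(f_d * np.log((f_d + 1e-12) / (s_d + 1e-12)))  # KL divergence
    sim = np.minimum(s_d, f_d).sum()                     # histogram intersection (SIM)
    amp = s[f > 0.5].mean() if (f > 0.5).any() else 0.0  # saliency amplitude at fixated pixels
    return np.array([cc, kl, sim, amp])

def image_features(fixation_map, saliency_maps):
    """Concatenate comparison features over several saliency models."""
    return np.concatenate([saliency_features(sm, fixation_map) for sm in saliency_maps])

# Usage sketch: X is built per viewed image, y holds the class labels
# (e.g., ASD vs. typically developing, age group, or perceptual task).
# X = np.stack([image_features(fix, sal_maps) for fix, sal_maps in data])
# clf = SVC(kernel="rbf").fit(X, y)
```

Because the abstract mentions "several well-known computational models of visual attention", each saliency model would contribute its own block of comparison features, and the concatenated vector is what the classifier sees.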
Related papers
- Semantic-Based Active Perception for Humanoid Visual Tasks with Foveal Sensors [49.99728312519117]
The aim of this work is to establish how accurately a recent semantic-based active perception model is able to complete visual tasks that are regularly performed by humans.
This model exploits the ability of current object detectors to localize and classify a large number of object classes and to update a semantic description of a scene across multiple fixations.
In the task of scene exploration, the semantic-based method demonstrates superior performance compared to the traditional saliency-based model.
arXiv Detail & Related papers (2024-04-16T18:15:57Z) - Discrimination of Radiologists Utilizing Eye-Tracking Technology and
Machine Learning: A Case Study [0.9142067094647588]
This study presents a novel discretized feature encoding based on binning fixation data for efficient geometric alignment.
The encoded features of the eye-fixation data are employed by machine learning classifiers to discriminate between faculty and trainee radiologists.
arXiv Detail & Related papers (2023-08-04T23:51:47Z) - Learning to search for and detect objects in foveal images using deep
learning [3.655021726150368]
This study employs a fixation prediction model that emulates human objective-guided attention when searching for a given class in an image.
The foveated pictures at each fixation point are then classified to determine whether the target is present or absent in the scene.
We present a novel dual task model capable of performing fixation prediction and detection simultaneously, allowing knowledge transfer between the two tasks.
arXiv Detail & Related papers (2023-04-12T09:50:25Z) - Active Gaze Control for Foveal Scene Exploration [124.11737060344052]
We propose a methodology to emulate how humans and robots with foveal cameras would explore a scene.
The proposed method achieves an increase in detection F1-score of 2-3 percentage points for the same number of gaze shifts.
arXiv Detail & Related papers (2022-08-24T14:59:28Z) - CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial
Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory AdaptatiOn (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO presents an improvement in facial expression recognition performance over six different datasets with very distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - Facial Anatomical Landmark Detection using Regularized Transfer Learning
with Application to Fetal Alcohol Syndrome Recognition [24.27777060287004]
Fetal alcohol syndrome (FAS) caused by prenatal alcohol exposure can result in a series of cranio-facial anomalies.
Anatomical landmark detection is important to detect the presence of FAS associated facial anomalies.
Current deep learning-based heatmap regression methods designed for facial landmark detection in natural images assume availability of large datasets.
We develop a new regularized transfer learning approach that exploits the knowledge of a network learned on large facial recognition datasets.
arXiv Detail & Related papers (2021-09-12T11:05:06Z) - Deep Collaborative Multi-Modal Learning for Unsupervised Kinship
Estimation [53.62256887837659]
Kinship verification is a long-standing research challenge in computer vision.
We propose a novel deep collaborative multi-modal learning (DCML) to integrate the underlying information presented in facial properties.
Our DCML method is consistently superior to several state-of-the-art kinship verification methods.
arXiv Detail & Related papers (2021-09-07T01:34:51Z) - Understanding Character Recognition using Visual Explanations Derived
from the Human Visual System and Deep Networks [6.734853055176694]
We examine the congruence, or lack thereof, between the information-gathering strategies of deep neural networks and the human visual system.
For correctly classified characters, the deep learning model considers character regions similar to those that humans fixate on.
We propose to use the visual fixation maps obtained from the eye-tracking experiment as a supervisory input to align the model's focus on relevant character regions.
arXiv Detail & Related papers (2021-08-10T10:09:37Z) - Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units
and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel multi-task and holistic framework is presented, which is able to jointly learn, effectively generalize, and perform affect recognition.
arXiv Detail & Related papers (2021-03-29T17:36:20Z) - Structured Landmark Detection via Topology-Adapting Deep Graph Learning [75.20602712947016]
We present a new topology-adapting deep graph learning approach for accurate anatomical facial and medical landmark detection.
The proposed method constructs graph signals leveraging both local image features and global shape features.
Experiments are conducted on three public facial image datasets (WFLW, 300W, and COFW-68) as well as three real-world X-ray medical datasets (Cephalometric (public), Hand, and Pelvis).
arXiv Detail & Related papers (2020-04-17T11:55:03Z)