GazeMAE: General Representations of Eye Movements using a Micro-Macro Autoencoder
- URL: http://arxiv.org/abs/2009.02437v2
- Date: Sun, 25 Oct 2020 06:02:31 GMT
- Title: GazeMAE: General Representations of Eye Movements using a Micro-Macro Autoencoder
- Authors: Louise Gillian C. Bautista and Prospero C. Naval Jr
- Abstract summary: We propose an abstract representation of eye movements that preserves the important nuances in gaze behavior while being stimuli-agnostic.
We consider eye movements as raw position and velocity signals and train separate deep temporal convolutional autoencoders.
The autoencoders learn micro-scale and macro-scale representations that correspond to the fast and slow features of eye movements.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Eye movements are intricate and dynamic events that contain a wealth of
information about the subject and the stimuli. We propose an abstract
representation of eye movements that preserves the important nuances in gaze
behavior while being stimuli-agnostic. We consider eye movements as raw
position and velocity signals and train separate deep temporal convolutional
autoencoders. The autoencoders learn micro-scale and macro-scale
representations that correspond to the fast and slow features of eye movements.
We evaluate the joint representations with a linear classifier fitted on
various classification tasks. Our work accurately discriminates between gender
and age groups, and outperforms previous works on biometrics and stimuli
classification. Further experiments highlight the validity and generalizability
of this method, bringing eye tracking research closer to real-world
applications.
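For intuition, here is a minimal PyTorch-style sketch of one such temporal convolutional autoencoder. The class name TemporalConvAE, the layer sizes, and the pooled latent code are illustrative assumptions rather than the authors' published architecture; per the abstract, one such network would be trained on position signals and another on velocity signals.

```python
# Illustrative sketch of a temporal convolutional autoencoder for gaze
# signals, loosely following the abstract's description. Layer sizes, the
# class name, and the pooled latent are assumptions, not the authors' code.
import torch
import torch.nn as nn

class TemporalConvAE(nn.Module):
    def __init__(self, in_channels: int = 2, latent_dim: int = 128):
        super().__init__()
        # Encoder: strided 1-D convolutions shrink the time axis.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, latent_dim, kernel_size=5, stride=2, padding=2),
        )
        # Decoder: transposed convolutions restore the original length.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent_dim, 64, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(64, 32, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(32, in_channels, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)               # (batch, latent_dim, T/8)
        x_hat = self.decoder(z)           # (batch, in_channels, T)
        return x_hat, z.mean(dim=-1)      # pooled code for the linear probe

# One network per signal type; their codes are concatenated downstream.
vel_ae = TemporalConvAE()                 # velocity (micro-scale) branch
pos_ae = TemporalConvAE()                 # position (macro-scale) branch
x = torch.randn(8, 2, 1024)               # batch of (x, y) gaze sequences
x_hat, z = vel_ae(x)
loss = nn.functional.mse_loss(x_hat, x)   # plain reconstruction objective
```

In the same spirit, the pooled codes from the two branches would be concatenated into the joint representation and handed to an off-the-shelf linear classifier (e.g., logistic regression) for the downstream tasks.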
Related papers
- A Framework for Pupil Tracking with Event Cameras [1.708806485130162]
Saccades are extremely rapid movements of both eyes that occur simultaneously.
The peak angular speed of the eye during a saccade can reach as high as 700°/s in humans.
We present events as frames that can be readily utilized by standard deep learning algorithms (see the events-to-frames sketch after this list).
arXiv Detail & Related papers (2024-07-23T17:32:02Z)
- Focused State Recognition Using EEG with Eye Movement-Assisted Annotation [4.705434077981147]
Deep learning models for learning EEG and eye movement features prove effective in classifying brain activities.
A focused state indicates intense concentration on a task or thought. Distinguishing focused and unfocused states can be achieved through eye movement behaviors.
arXiv Detail & Related papers (2024-06-15T14:06:00Z)
- Priority-Centric Human Motion Generation in Discrete Latent Space [59.401128190423535]
We introduce a Priority-Centric Motion Discrete Diffusion Model (M2DM) for text-to-motion generation.
M2DM incorporates a global self-attention mechanism and a regularization term to counteract code collapse.
We also present a motion discrete diffusion model that employs an innovative noise schedule, determined by the significance of each motion token.
arXiv Detail & Related papers (2023-08-28T10:40:16Z)
- Modeling Human Eye Movements with Neural Networks in a Maze-Solving Task [2.092312847886424]
We build deep generative models of eye movements using a novel differentiable architecture for gaze fixations and gaze shifts.
We find that human eye movements are best predicted by a model that is optimized not to perform the task as efficiently as possible but instead to run an internal simulation of an object traversing the maze.
arXiv Detail & Related papers (2022-12-20T15:48:48Z)
- Active Gaze Control for Foveal Scene Exploration [124.11737060344052]
We propose a methodology to emulate how humans and robots with foveal cameras would explore a scene.
The proposed method achieves an increase in detection F1-score of 2-3 percentage points for the same number of gaze shifts.
arXiv Detail & Related papers (2022-08-24T14:59:28Z)
- A Deep Learning Approach for the Segmentation of Electroencephalography Data in Eye Tracking Applications [56.458448869572294]
We introduce DETRtime, a novel framework for time-series segmentation of EEG data.
Our end-to-end deep learning-based framework brings advances in Computer Vision to the forefront.
Our model generalizes well in the task of EEG sleep stage segmentation.
arXiv Detail & Related papers (2022-06-17T10:17:24Z)
- Stochastic Coherence Over Attention Trajectory For Continuous Learning In Video Streams [64.82800502603138]
This paper proposes a novel neural-network-based approach to progressively and autonomously develop pixel-wise representations in a video stream.
The proposed method is based on a human-like attention mechanism that allows the agent to learn by observing what is moving in the attended locations.
Our experiments leverage 3D virtual environments and they show that the proposed agents can learn to distinguish objects just by observing the video stream.
arXiv Detail & Related papers (2022-04-26T09:52:31Z)
- Towards Predicting Fine Finger Motions from Ultrasound Images via Kinematic Representation [12.49914980193329]
We study the inference problem of identifying the activation of specific fingers from a sequence of ultrasound (US) images.
We consider this task as an important step towards higher adoption rates of robotic prostheses among arm amputees.
arXiv Detail & Related papers (2022-02-10T18:05:09Z)
- Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online approach of multi-modal graph network (i.e., MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z)
- CLRGaze: Contrastive Learning of Representations for Eye Movement Signals [0.0]
We learn feature vectors of eye movements in a self-supervised manner.
We adopt a contrastive learning approach and propose a set of data transformations that encourage a deep neural network to discern salient and granular gaze patterns (see the contrastive-learning sketch after this list).
arXiv Detail & Related papers (2020-10-25T06:12:06Z)
- What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions [50.435861435121915]
We use human interaction and attention cues to investigate whether we can learn better representations compared to visual-only representations.
Our experiments show that our "muscly-supervised" representation outperforms a visual-only state-of-the-art method MoCo.
arXiv Detail & Related papers (2020-10-16T17:46:53Z)
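The pupil-tracking entry above presents events as frames. Below is a minimal sketch of that general idea: accumulating a window of event-camera output into a 2-D histogram that a standard CNN can consume. The field layout and the signed-polarity encoding are assumptions for illustration, not that paper's exact representation.

```python
# Hypothetical sketch: accumulate a window of DVS events into a frame that
# a standard CNN can consume. Field layout and signed-polarity encoding
# are assumptions, not taken from the cited paper.
import numpy as np

def events_to_frame(xs, ys, polarities, height, width):
    """Bin events into a single-channel frame: +1 / -1 per polarity."""
    frame = np.zeros((height, width), dtype=np.float32)
    signs = np.where(np.asarray(polarities) > 0, 1.0, -1.0)
    np.add.at(frame, (np.asarray(ys), np.asarray(xs)), signs)
    return frame

# Toy usage: five events on a 4x6 sensor; opposite polarities cancel.
xs = [0, 1, 1, 5, 5]
ys = [0, 2, 2, 3, 3]
ps = [1, 0, 1, 1, 1]
print(events_to_frame(xs, ys, ps, height=4, width=6))
```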
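The CLRGaze entry learns gaze representations contrastively. The sketch below pairs simple signal transformations with an NT-Xent-style loss, as in standard contrastive setups; the specific augmentations, the stand-in encoder, and the temperature are illustrative assumptions, not CLRGaze's published recipe.

```python
# Hypothetical sketch of contrastive learning on gaze signals: two random
# augmentations of each sequence are pulled together by an NT-Xent-style
# loss. The augmentations and temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def augment(x):
    """Simple signal transformations: random scaling plus additive jitter."""
    scale = 0.8 + 0.4 * torch.rand(x.size(0), 1, 1)
    return x * scale + 0.01 * torch.randn_like(x)

def nt_xent(z1, z2, temperature=0.5):
    """Normalized temperature-scaled cross-entropy over a 2N batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)      # (2N, D) unit vectors
    sim = z @ z.t() / temperature                    # cosine similarities
    sim.fill_diagonal_(float('-inf'))                # exclude self-pairs
    n = z1.size(0)                                   # positive of i is i±n
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

encoder = torch.nn.Sequential(                       # stand-in encoder
    torch.nn.Flatten(), torch.nn.Linear(2 * 256, 64))
x = torch.randn(16, 2, 256)                          # batch of gaze signals
loss = nt_xent(encoder(augment(x)), encoder(augment(x)))
```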