Automated analysis of eye-tracker-based human-human interaction studies
- URL: http://arxiv.org/abs/2007.04671v1
- Date: Thu, 9 Jul 2020 10:00:03 GMT
- Title: Automated analysis of eye-tracker-based human-human interaction studies
- Authors: Timothy Callemein, Kristof Van Beeck, Geert Brône, Toon Goedemé
- Abstract summary: We investigate which state-of-the-art computer vision algorithms may be used to automate the post-analysis of mobile eye-tracking data.
For the case study in this paper, we focus on mobile eye-tracker recordings made during human-human face-to-face interactions.
We show that this single-pipeline framework provides robust results that are both more accurate and faster to obtain than previous work in the field.
- Score: 2.433293618209319
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile eye-tracking systems have been available for about a decade now and are becoming increasingly popular in different fields of application, including marketing, sociology, usability studies and linguistics. While the user-friendliness and ergonomics of the hardware are developing at a rapid pace, the software for the analysis of mobile eye-tracking data in some respects still lacks robustness and functionality. In this paper, we investigate which state-of-the-art computer vision algorithms may be used to automate the post-analysis of mobile eye-tracking data. For our case study, we focus on mobile eye-tracker recordings made during human-human face-to-face interactions. We compared two recent publicly available frameworks (YOLOv2 and OpenPose) to relate the gaze location generated by the eye-tracker to the head and hands visible in the scene camera data. We show that this single-pipeline framework provides robust results that are both more accurate and faster to obtain than previous work in the field. Moreover, our approach does not rely on manual interventions during this process.
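To make the pipeline concrete, below is a minimal sketch of the association step the abstract describes: given one frame's gaze sample and head/hand detections (as produced by a YOLOv2- or OpenPose-style detector), the gaze is mapped to a target label. The data layout, function names, and the 10-pixel margin are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): label each frame's gaze
# target as "head", "hands", or "other", given per-frame detections.
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in pixels
Point = Tuple[float, float]              # (x, y) gaze point in scene-camera pixels

def point_in_box(p: Point, box: Box, margin: float = 0.0) -> bool:
    """True if gaze point p falls inside box, optionally grown by margin px."""
    x, y = p
    x0, y0, x1, y1 = box
    return (x0 - margin) <= x <= (x1 + margin) and (y0 - margin) <= y <= (y1 + margin)

def label_gaze_target(gaze: Optional[Point],
                      head_boxes: List[Box],
                      hand_boxes: List[Box]) -> str:
    """Map one frame's gaze sample onto detected regions of interest."""
    if gaze is None:                       # eye-tracker dropout / blink
        return "missing"
    if any(point_in_box(gaze, b) for b in head_boxes):
        return "head"
    if any(point_in_box(gaze, b, margin=10.0) for b in hand_boxes):
        return "hands"
    return "other"

# Example: one frame with a head detection and two hand detections.
frame_label = label_gaze_target(
    gaze=(312.0, 148.0),
    head_boxes=[(280.0, 100.0, 360.0, 200.0)],
    hand_boxes=[(90.0, 300.0, 150.0, 360.0), (420.0, 310.0, 480.0, 370.0)],
)
print(frame_label)  # -> "head"
```

Aggregating these per-frame labels over a recording yields the kind of automated gaze-on-head/gaze-on-hands statistics that otherwise require manual frame-by-frame annotation.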
Related papers
- I-MPN: Inductive Message Passing Network for Efficient Human-in-the-Loop Annotation of Mobile Eye Tracking Data [4.487146086221174]
We present a novel human-centered learning algorithm designed for automated object recognition within mobile eye-tracking settings.
Our approach seamlessly integrates an object detector with a spatial relation-aware inductive message-passing network (I-MPN), harnessing node profile information and capturing object correlations (a generic message-passing step is sketched after this entry).
arXiv Detail & Related papers (2024-06-10T13:08:31Z)
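For readers unfamiliar with message passing, here is one generic round of it over a graph of detected objects. This is an illustrative sketch only; the actual I-MPN architecture (inductive, spatial relation-aware) differs in its details, and all shapes and weights below are arbitrary assumptions.

```python
# A generic single round of message passing over detected objects
# (illustrative only; not the I-MPN architecture itself).
import numpy as np

def message_passing_step(node_feats: np.ndarray, adj: np.ndarray,
                         w_self: np.ndarray, w_nbr: np.ndarray) -> np.ndarray:
    """Average neighbour features and combine them with each node's own."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)   # avoid divide-by-zero
    nbr_mean = (adj @ node_feats) / deg                # mean over neighbours
    return np.maximum(node_feats @ w_self + nbr_mean @ w_nbr, 0.0)  # ReLU

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))        # 4 detected objects, 8-dim profiles
adj = np.array([[0, 1, 1, 0],          # spatial-relation graph (symmetric)
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
updated = message_passing_step(feats, adj,
                               rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))
print(updated.shape)  # (4, 8): same nodes, correlation-aware features
```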
- Open Gaze: Open Source eye tracker for smartphone devices using Deep Learning [0.0]
We present an open-source implementation of a smartphone-based gaze tracker that emulates the methodology proposed in a paper by Google.
Through the integration of machine learning techniques, we unveil an accurate eye tracking solution that is native to smartphones.
Our findings demonstrate the potential to significantly broaden eye movement research.
arXiv Detail & Related papers (2023-08-25T17:10:22Z)
- Tackling Face Verification Edge Cases: In-Depth Analysis and Human-Machine Fusion Approach [5.574995936464475]
This paper investigates the effect of a combination of machine and human operators in the face verification task.
We conduct a study with 60 participants on selected tasks with humans and provide an extensive analysis.
We demonstrate that combining machine and human decisions can further improve the performance of state-of-the-art face verification systems.
arXiv Detail & Related papers (2023-04-17T10:29:26Z)
- Literature Review: Computer Vision Applications in Transportation Logistics and Warehousing [58.720142291102135]
Computer vision applications in transportation logistics and warehousing have a huge potential for process automation.
We present a structured literature review on research in the field to help leverage this potential.
arXiv Detail & Related papers (2023-04-12T17:33:41Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue, computational method, interaction environment, and sensing approach are speaking activity, support vector machines, meetings composed of 3-4 persons, and microphones and cameras, respectively.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- OG-SGG: Ontology-Guided Scene Graph Generation. A Case Study in Transfer Learning for Telepresence Robotics [124.08684545010664]
Scene graph generation from images is a task of great interest to applications such as robotics.
We propose an initial approximation to a framework called Ontology-Guided Scene Graph Generation (OG-SGG).
arXiv Detail & Related papers (2022-02-21T13:23:15Z)
- Facial Emotion Recognition using Deep Residual Networks in Real-World Environments [5.834678345946704]
We propose a facial feature extractor model trained on an in-the-wild and massively collected video dataset.
The dataset consists of a million labelled frames and 2,616 thousand subjects.
As temporal information is important to the emotion recognition domain, we utilise LSTM cells to capture the temporal dynamics in the data.
arXiv Detail & Related papers (2021-11-04T10:08:22Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) explores the interaction between a human and a robot.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- MutualEyeContact: A conversation analysis tool with focus on eye contact [69.17395873398196]
MutualEyeContact can help scientists to understand the importance of (mutual) eye contact in social interactions.
We combine state-of-the-art eye tracking with face recognition based on machine learning and provide a tool for analysis and visualization of social interaction sessions (a simplified mutual-gaze check is sketched after this entry).
arXiv Detail & Related papers (2021-07-09T15:05:53Z)
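As a rough illustration of the mutual-gaze check referenced above: mutual eye contact can be flagged on frames where each participant's gaze point falls inside the other participant's detected face box. The data layout is an assumption for this sketch, not the MutualEyeContact tool's API.

```python
# Simplified illustration (assumed data layout; not the MutualEyeContact tool):
# mutual eye contact holds on a frame when each participant's gaze point
# falls inside the other participant's detected face box.
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max)
Point = Optional[Tuple[float, float]]     # gaze sample, None if tracking lost

def inside(p: Point, box: Box) -> bool:
    if p is None:
        return False
    x, y = p
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def mutual_contact_frames(gaze_a: List[Point], faces_b: List[Box],
                          gaze_b: List[Point], faces_a: List[Box]) -> List[int]:
    """Indices of synchronized frames where both gazes land on the other's face."""
    return [i for i, (ga, fb, gb, fa) in
            enumerate(zip(gaze_a, faces_b, gaze_b, faces_a))
            if inside(ga, fb) and inside(gb, fa)]
```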
- TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attention networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art specifically designed for each of the trajectory and pose forecasting tasks.
arXiv Detail & Related papers (2021-04-08T20:01:00Z)
- Towards Hardware-Agnostic Gaze-Trackers [0.5512295869673146]
We present a deep neural network architecture as an appearance-based method for constrained gaze-tracking.
Our system achieved an error of 1.8073 cm on the GazeCapture dataset without any calibration or device-specific fine-tuning (the underlying error metric is sketched after this entry).
arXiv Detail & Related papers (2020-10-11T00:53:57Z)
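For context on the 1.8073 cm figure above: gaze-tracking error on datasets such as GazeCapture is commonly reported as the mean Euclidean distance between predicted and ground-truth on-screen gaze locations. The sketch below shows that metric in generic form; it is not the authors' evaluation code.

```python
# How a mean Euclidean gaze error in centimetres is typically computed
# (generic illustration; not the authors' evaluation code).
import numpy as np

def mean_gaze_error_cm(pred_cm: np.ndarray, true_cm: np.ndarray) -> float:
    """Mean Euclidean distance between predicted and ground-truth on-screen
    gaze locations, both given in centimetres as (N, 2) arrays."""
    return float(np.linalg.norm(pred_cm - true_cm, axis=1).mean())

pred = np.array([[1.0, 2.0], [0.5, -1.0]])
true = np.array([[1.5, 2.5], [0.0, -1.5]])
print(f"{mean_gaze_error_cm(pred, true):.4f} cm")  # 0.7071 cm
```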
This list is automatically generated from the titles and abstracts of the papers on this site.