MutualEyeContact: A conversation analysis tool with focus on eye contact
- URL: http://arxiv.org/abs/2107.04476v1
- Date: Fri, 9 Jul 2021 15:05:53 GMT
- Title: MutualEyeContact: A conversation analysis tool with focus on eye contact
- Authors: Alexander Schäfer, Tomoko Isomura, Gerd Reis, Katsumi Watanabe,
Didier Stricker
- Abstract summary: MutualEyeContact can help scientists to understand the importance of (mutual) eye contact in social interactions.
We combine state-of-the-art eye tracking with face recognition based on machine learning and provide a tool for analysis and visualization of social interaction sessions.
- Score: 69.17395873398196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Eye contact between individuals is particularly important for understanding
human behaviour. To further investigate the importance of eye contact in social
interactions, portable eye tracking technology is a natural choice. However,
analysing the resulting data can become quite complex: scientists need measures
that are computed quickly and accurately, and the relevant data must be
separated automatically to save time. In this work, we propose a
tool called MutualEyeContact which excels in those tasks and can help
scientists to understand the importance of (mutual) eye contact in social
interactions. We combine state-of-the-art eye tracking with face recognition
based on machine learning and provide a tool for analysis and visualization of
social interaction sessions. This work is a joint collaboration of computer
scientists and cognitive scientists. It combines the fields of social and
behavioural science with computer vision and deep learning.
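As a rough illustration of the kind of processing such a tool performs, the following Python sketch flags mutual eye contact whenever each participant's gaze point falls inside the other participant's detected face region. This is not the MutualEyeContact implementation; the data structures, function names, and thresholds below are assumptions made purely for illustration.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical data structures: in practice these would come from the
# eye tracker SDK and a face-recognition model, respectively.

@dataclass
class GazeSample:
    t: float   # timestamp in seconds (scene-camera clock)
    x: float   # gaze point in scene-camera image coordinates
    y: float

@dataclass
class FaceBox:
    t: float   # timestamp of the scene-camera frame
    x0: float  # bounding box (pixels) of the *other* person's face
    y0: float
    x1: float
    y1: float

def gaze_on_face(gaze: GazeSample, face: FaceBox, margin: float = 0.0) -> bool:
    """True if the gaze point falls inside the (optionally padded) face box."""
    return (face.x0 - margin <= gaze.x <= face.x1 + margin and
            face.y0 - margin <= gaze.y <= face.y1 + margin)

def eye_contact_flags(gaze: List[GazeSample], faces: List[FaceBox],
                      max_dt: float = 0.05) -> List[bool]:
    """For each gaze sample, check whether it lands on the temporally
    closest face detection (within max_dt seconds)."""
    flags = []
    for g in gaze:
        nearest = min(faces, key=lambda f: abs(f.t - g.t), default=None)
        flags.append(nearest is not None and
                     abs(nearest.t - g.t) <= max_dt and
                     gaze_on_face(g, nearest))
    return flags

def mutual_eye_contact(flags_a: List[bool], flags_b: List[bool]) -> List[bool]:
    """Mutual eye contact at sample i: both participants look at each
    other's face at (approximately) the same time."""
    return [a and b for a, b in zip(flags_a, flags_b)]

if __name__ == "__main__":
    # Tiny synthetic session: participant A looks at B's face the whole
    # time, B looks away in the middle sample.
    gaze_a = [GazeSample(t, 320, 240) for t in (0.0, 0.1, 0.2)]
    gaze_b = [GazeSample(0.0, 300, 200), GazeSample(0.1, 50, 50),
              GazeSample(0.2, 310, 210)]
    faces_for_a = [FaceBox(t, 280, 180, 360, 300) for t in (0.0, 0.1, 0.2)]
    faces_for_b = [FaceBox(t, 260, 160, 340, 280) for t in (0.0, 0.1, 0.2)]

    mutual = mutual_eye_contact(eye_contact_flags(gaze_a, faces_for_a),
                                eye_contact_flags(gaze_b, faces_for_b))
    print(mutual)  # -> [True, False, True]
```

A real pipeline would additionally need gaze calibration, blink filtering, and robust time synchronisation between the two eye trackers, which this sketch omits.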
Related papers
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms representative models on both objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- Multi-person eye tracking for real-world scene perception in social settings [34.82692226532414]
We apply mobile eye tracking in a real-world multi-person setup and develop a system to stream, record, and analyse synchronised data.
Our system achieves precise time synchronisation and accurate gaze projection in challenging dynamic scenes.
This advancement enables insight into collaborative behaviour, group dynamics, and social interaction, with high ecological validity.
arXiv Detail & Related papers (2024-07-08T19:33:17Z)
- Maia: A Real-time Non-Verbal Chat for Human-AI Interaction [11.558827428811385]
We propose an alternative to text chats for Human-AI interaction, using facial expressions and head movements that mirror, but also improvise over, those of the human user.
Our goal is to track and analyze facial expressions, and other non-verbal cues in real-time, and use this information to build models that can predict and understand human behavior.
arXiv Detail & Related papers (2024-02-09T13:07:22Z)
- NITEC: Versatile Hand-Annotated Eye Contact Dataset for Ego-Vision Interaction [2.594420805049218]
We present NITEC, a hand-annotated eye contact dataset for ego-vision interaction.
NITEC exceeds existing datasets for ego-vision eye contact in size and variety of demographics, social contexts, and lighting conditions.
We make our dataset publicly available to foster further exploration in the field of ego-vision interaction.
arXiv Detail & Related papers (2023-11-08T07:42:31Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue is speaking activity, the most common computational method is the support vector machine, the typical interaction environment is a meeting of 3-4 persons, and the most common sensing approach is microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- Do Pedestrians Pay Attention? Eye Contact Detection in the Wild [75.54077277681353]
In urban environments, humans rely on eye contact for fast and efficient communication with nearby people.
In this paper, we focus on eye contact detection in the wild, i.e., real-world scenarios for autonomous vehicles with no control over the environment or the distance of pedestrians.
We introduce a model that leverages semantic keypoints to detect eye contact and show that this high-level representation achieves state-of-the-art results on the publicly-available dataset JAAD.
To study domain adaptation, we create LOOK: a large-scale dataset for eye contact detection in the wild, which focuses on diverse and unconstrained scenarios.
arXiv Detail & Related papers (2021-12-08T10:21:28Z)
- Can machines learn to see without visual databases? [93.73109506642112]
This paper focuses on developing machines that learn to see without needing to handle visual databases.
This might open the doors to a truly competitive track concerning deep learning technologies for vision.
arXiv Detail & Related papers (2021-10-12T13:03:54Z)
- Automated analysis of eye-tracker-based human-human interaction studies [2.433293618209319]
We investigate which state-of-the-art computer vision algorithms may be used to automate the post-analysis of mobile eye-tracking data.
For the case study in this paper, we focus on mobile eye-tracker recordings made during human-human face-to-face interactions.
We show that the use of this single-pipeline framework provides robust results, which are both more accurate and faster than previous work in the field.
arXiv Detail & Related papers (2020-07-09T10:00:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.