Facial gesture interfaces for expression and communication
- URL: http://arxiv.org/abs/2010.01567v1
- Date: Sun, 4 Oct 2020 12:51:48 GMT
- Title: Facial gesture interfaces for expression and communication
- Authors: Michael J. Lyons
- Abstract summary: Review of projects on vision-based interfaces that rely on facial action for intentional human-computer interaction.
Applications to several domains are introduced, including text entry, artistic and musical expression and assistive technology for motor-impaired users.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Considerable effort has been devoted to the automatic extraction of
information about action of the face from image sequences. Within the context
of human-computer interaction (HCI) we may distinguish systems that allow
expression from those which aim at recognition. Most of the work in facial
action processing has been directed at automatically recognizing affect from
facial actions. By contrast, facial gesture interfaces, which respond to
deliberate facial actions, have received comparatively little attention. This
paper reviews several projects on vision-based interfaces that rely on facial
action for intentional HCI. Applications to several domains are introduced,
including text entry, artistic and musical expression and assistive technology
for motor-impaired users.
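To make the kind of system under review concrete, here is a minimal sketch of a vision-based facial gesture interface in which a deliberate mouth-opening gesture triggers a discrete command (for example, a selection step in a text-entry UI). This is not the paper's own implementation: the use of MediaPipe Face Mesh and OpenCV, the inner-lip landmark indices (13 and 14), and the opening threshold are all illustrative assumptions.

```python
# Minimal sketch of a facial gesture interface: a deliberate mouth-open
# gesture is mapped to a discrete command. MediaPipe Face Mesh / OpenCV,
# landmark indices 13/14 (inner lips), and the threshold are assumptions.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture(0)

OPEN_THRESHOLD = 0.05  # normalized lip gap treated as an intentional gesture (assumed value)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Face Mesh expects RGB; OpenCV captures BGR
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        # Vertical distance between upper and lower inner lip (normalized coords)
        gap = abs(lm[13].y - lm[14].y)
        if gap > OPEN_THRESHOLD:
            print("mouth-open gesture detected -> trigger command (e.g., select)")
    cv2.imshow("facial gesture interface (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```

In practice, adding a dwell time or hysteresis around the threshold helps separate deliberate gestures from incidental facial motion, which is the core usability concern for intentional facial-action interfaces.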
Related papers
- Knowledge-Enhanced Facial Expression Recognition with Emotional-to-Neutral Transformation [66.53435569574135]
Existing facial expression recognition methods typically fine-tune a pre-trained visual encoder using discrete labels.
We observe that the rich knowledge in text embeddings, generated by vision-language models, is a promising alternative for learning discriminative facial expression representations.
We propose a novel knowledge-enhanced FER method with an emotional-to-neutral transformation.
arXiv Detail & Related papers (2024-09-13T07:28:57Z)
- Leveraging Previous Facial Action Units Knowledge for Emotion Recognition on Faces [2.4158349218144393]
We propose the usage of Facial Action Unit (AU) recognition techniques to recognize emotions.
Recognition is based on the Facial Action Coding System (FACS) and computed by a machine learning system; a toy AU-to-emotion mapping in this spirit is sketched after this list.
arXiv Detail & Related papers (2023-11-20T18:14:53Z)
- SAFER: Situation Aware Facial Emotion Recognition [0.0]
We present SAFER, a novel system for emotion recognition from facial expressions.
It employs state-of-the-art deep learning techniques to extract various features from facial images.
It can adapt to unseen and varied facial expressions, making it suitable for real-world applications.
arXiv Detail & Related papers (2023-06-14T20:42:26Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance over six different datasets, each with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers [57.1091606948826]
We propose a novel FER model, named Poker Face Vision Transformer (PF-ViT), to address the challenge of disentangling emotion from expression-irrelevant disturbances.
PF-ViT aims to separate and recognize the disturbance-agnostic emotion from a static facial image via generating its corresponding poker face.
PF-ViT utilizes vanilla Vision Transformers, and its components are pre-trained as Masked Autoencoders on a large facial expression dataset.
arXiv Detail & Related papers (2022-07-22T13:39:06Z)
- Facial Action Unit Recognition Based on Transfer Learning [22.34261589991243]
We introduce a facial action unit recognition method based on transfer learning.
We first use available facial images with expression labels to train the feature extraction network.
arXiv Detail & Related papers (2022-03-25T04:01:58Z)
- Multi-Cue Adaptive Emotion Recognition Network [4.570705738465714]
We propose a new deep learning approach for emotion recognition based on adaptive multi-cues.
We compare the proposed approach with state-of-the-art approaches on the CAER-S dataset.
arXiv Detail & Related papers (2021-11-03T15:08:55Z)
- I Only Have Eyes for You: The Impact of Masks On Convolutional-Based Facial Expression Recognition [78.07239208222599]
We evaluate how the recently proposed FaceChannel adapts to recognizing facial expressions from persons wearing masks.
We also perform feature-level visualizations to demonstrate how the FaceChannel's inherent ability to learn and combine facial features changes in a constrained social interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z)
- Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z)
- Learning Emotional-Blinded Face Representations [77.7653702071127]
We propose two face representations that are blind to the facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
arXiv Detail & Related papers (2020-09-18T09:24:10Z)
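As a companion to the FACS-based entry above, here is a toy sketch of mapping detected Action Units (AUs) to prototypical emotions. The AU combinations follow commonly cited EMFACS-style prototypes; the input format (a set of active AU numbers from some upstream AU detector) and the coverage threshold are assumptions for illustration only.

```python
# Toy FACS-style lookup: map a set of active AUs to the best-covered emotion
# prototype. Prototype combinations are commonly cited EMFACS-style examples;
# the detector output format and the 0.5 coverage threshold are assumptions.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},  # brow lowerer + lid raiser/tightener + lip tightener
}

def infer_emotion(active_aus: set[int]) -> str:
    """Return the emotion whose AU prototype is best covered by the detected set."""
    best, best_score = "neutral", 0.0
    for emotion, prototype in EMOTION_PROTOTYPES.items():
        score = len(prototype & active_aus) / len(prototype)
        if score > best_score:
            best, best_score = emotion, score
    return best if best_score >= 0.5 else "neutral"

print(infer_emotion({6, 12}))        # happiness
print(infer_emotion({1, 2, 5, 26}))  # surprise
```

A learned classifier over AU intensities would normally replace this hand-written lookup, but the rule form makes the FACS-to-emotion reasoning explicit.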
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.