A Fine-Grained Visual Attention Approach for Fingerspelling Recognition
in the Wild
- URL: http://arxiv.org/abs/2105.07625v1
- Date: Mon, 17 May 2021 06:15:35 GMT
- Title: A Fine-Grained Visual Attention Approach for Fingerspelling Recognition
in the Wild
- Authors: Kamala Gajurel, Cuncong Zhong and Guanghui Wang
- Abstract summary: Automatic recognition of fingerspelling can help resolve communication barriers when interacting with deaf people.
The main challenges in fingerspelling recognition are the ambiguity of the gestures and the strong articulation of the hands.
We propose a fine-grained visual attention mechanism using the Transformer model for the sequence-to-sequence prediction task on the in-the-wild dataset.
- Score: 17.8181080354116
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Fingerspelling in sign language has been the means of communicating technical
terms and proper nouns when they do not have dedicated sign language gestures.
Automatic recognition of fingerspelling can help resolve communication barriers
when interacting with deaf people. The main challenges prevalent in
fingerspelling recognition are the ambiguity in the gestures and strong
articulation of the hands. The automatic recognition model should address high
inter-class visual similarity and high intra-class variation in the gestures.
Most of the existing research in fingerspelling recognition has focused on the
dataset collected in a controlled environment. The recent collection of a
large-scale annotated fingerspelling dataset in the wild, from social media and
online platforms, captures the challenges in a real-world scenario. In this
work, we propose a fine-grained visual attention mechanism using the
Transformer model for the sequence-to-sequence prediction task on the
in-the-wild dataset. The fine-grained attention is achieved by utilizing the
change in motion across video frames (optical flow) in sequential
context-based attention along with a Transformer encoder model. The model is
jointly trained on the unsegmented continuous video dataset by balancing the
Connectionist Temporal Classification (CTC) loss and the maximum-entropy loss.
The proposed approach can capture better fine-grained attention in a single
iteration. Experimental evaluations show that it outperforms state-of-the-art
approaches.
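The joint objective described in the abstract can be sketched as follows. This is a minimal, illustrative PyTorch snippet rather than the authors' implementation: the blank index, the weight lambda_ent, and the choice to apply the entropy term to the per-frame letter posteriors are assumptions.

```python
import torch.nn.functional as F

def joint_ctc_entropy_loss(log_probs, targets, input_lengths, target_lengths,
                           lambda_ent=0.1):
    """Balance CTC loss against a maximum-entropy regularizer.

    log_probs: (T, N, C) log-softmax outputs over letters per frame.
    targets, input_lengths, target_lengths: as required by F.ctc_loss.
    lambda_ent: assumed weighting between the two terms (not from the paper).
    """
    ctc = F.ctc_loss(log_probs, targets, input_lengths, target_lengths,
                     blank=0, zero_infinity=True)
    # Entropy of the per-frame distributions; subtracting it from the loss
    # maximizes entropy, discouraging overly peaked predictions.
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    return ctc - lambda_ent * entropy
```

The exact weighting scheme and the distribution being regularized should be taken from the paper itself; this sketch only shows how the two terms can be balanced in a single training objective.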
Related papers
- Leveraging Speech for Gesture Detection in Multimodal Communication [3.798147784987455]
Gestures are inherent to human interaction and often complement speech in face-to-face communication, forming a multimodal communication system.
Research on automatic gesture detection has primarily focused on visual and kinematic information to detect a limited set of isolated or silent gestures with low variability, neglecting the integration of speech and vision signals to detect gestures that co-occur with speech.
This work addresses this gap by focusing on co-speech gesture detection, emphasising the synchrony between speech and co-speech hand gestures.
arXiv Detail & Related papers (2024-04-23T11:54:05Z)
- Toward American Sign Language Processing in the Real World: Data, Tasks, and Methods [15.77894358993113]
I study automatic sign language processing in the wild, using signing videos collected from the Internet.
I present three new large-scale ASL datasets in the wild: ChicagoFSWild, ChicagoFSWild+, and OpenASL.
I propose two tasks for building real-world fingerspelling-based applications: fingerspelling detection and search.
arXiv Detail & Related papers (2023-08-23T20:38:19Z)
- Co-Speech Gesture Detection through Multi-Phase Sequence Labeling [3.924524252255593]
We introduce a novel framework that reframes the task as a multi-phase sequence labeling problem.
We evaluate our proposal on a large dataset of diverse co-speech gestures in task-oriented face-to-face dialogues.
arXiv Detail & Related papers (2023-08-21T12:27:18Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
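As a purely illustrative companion to the windowed time-series analysis mentioned in the entry above, the sketch below segments five capacitive channels into 500 ms windows and extracts simple per-channel statistics; the sampling rate, the particular features, and the function name are assumptions, not details from the paper.

```python
import numpy as np

def window_features(signals, fs=100, window_ms=500):
    """Extract simple per-channel features from 500 ms windows.

    signals: (num_samples, 5) array, one column per finger channel.
    fs: assumed sampling rate in Hz (not stated in the abstract).
    Returns an array of shape (num_windows, 15).
    """
    win = int(fs * window_ms / 1000)
    num_windows = signals.shape[0] // win
    feats = []
    for i in range(num_windows):
        chunk = signals[i * win:(i + 1) * win]        # (win, 5)
        mean = chunk.mean(axis=0)                      # average signal level
        rng = chunk.max(axis=0) - chunk.min(axis=0)    # movement amplitude
        slope = chunk[-1] - chunk[0]                   # trend over the window
        feats.append(np.concatenate([mean, rng, slope]))
    return np.stack(feats)
```

A lightweight classifier would then map each feature vector to a gesture label; the paper's actual three features may differ from the ones assumed here.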
- SignBERT+: Hand-model-aware Self-supervised Pre-training for Sign Language Understanding [132.78015553111234]
Hand gestures play a crucial role in the expression of sign language.
Current deep learning based methods for sign language understanding (SLU) are prone to over-fitting due to insufficient sign data resources.
We propose the first self-supervised pre-trainable SignBERT+ framework with model-aware hand prior incorporated.
arXiv Detail & Related papers (2023-05-08T17:16:38Z)
- Joint-bone Fusion Graph Convolutional Network for Semi-supervised Skeleton Action Recognition [65.78703941973183]
We propose a novel correlation-driven joint-bone fusion graph convolutional network (CD-JBF-GCN) as an encoder and use a pose prediction head as a decoder.
Specifically, the CD-JBF-GCN can explore the motion transmission between the joint stream and the bone stream.
The pose-prediction-based auto-encoder in the self-supervised training stage allows the network to learn motion representations from unlabeled data.
arXiv Detail & Related papers (2022-02-08T16:03:15Z)
- FILIP: Fine-grained Interactive Language-Image Pre-Training [106.19474076935363]
Fine-grained Interactive Language-Image Pre-training achieves finer-level alignment through a cross-modal late interaction mechanism.
We construct a new large-scale image-text pair dataset called FILIP300M for pre-training.
Experiments show that FILIP achieves state-of-the-art performance on multiple downstream vision-language tasks.
arXiv Detail & Related papers (2021-11-09T17:15:38Z)
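One simplified reading of the cross-modal late interaction mentioned in the FILIP entry is sketched below: each text token keeps only its best-matching image patch, and the token-level maxima are averaged into an image-text score. This is an illustration of the general idea, not FILIP's released implementation, and it keeps only the text-to-image direction.

```python
import torch
import torch.nn.functional as F

def late_interaction_score(text_tokens, image_tokens):
    """Token-wise late interaction similarity (simplified, FILIP-style).

    text_tokens: (N, Lt, D) text token embeddings.
    image_tokens: (N, Li, D) image patch embeddings.
    Returns an (N, N) matrix of text-to-image similarity scores.
    """
    t = F.normalize(text_tokens, dim=-1)
    v = F.normalize(image_tokens, dim=-1)
    # Pairwise token similarities for every text/image pair: (N, N, Lt, Li).
    sim = torch.einsum('ntd,mpd->nmtp', t, v)
    # Max over image patches for each text token, then mean over text tokens.
    return sim.max(dim=-1).values.mean(dim=-1)
```

The full method also computes the symmetric image-to-text direction and averages the two.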
- Joint Visual Semantic Reasoning: Multi-Stage Decoder for Text Recognition [36.12001394921506]
State-of-the-art (SOTA) models still struggle in the wild due to complex backgrounds, varying fonts, uncontrolled illumination, distortions and other artefacts.
This is because such models solely depend on visual information for text recognition, thus lacking semantic reasoning capabilities.
We propose a multi-stage multi-scale attentional decoder that performs joint visual-semantic reasoning.
arXiv Detail & Related papers (2021-07-26T10:15:14Z)
- Fingerspelling Detection in American Sign Language [32.79935314131377]
We consider the task of fingerspelling detection in raw, untrimmed sign language videos.
This is an important step towards building real-world fingerspelling recognition systems.
We propose a benchmark and a suite of evaluation metrics, some of which reflect the effect of detection on the downstream fingerspelling recognition task.
arXiv Detail & Related papers (2021-04-03T02:11:09Z)
- Revisiting Mahalanobis Distance for Transformer-Based Out-of-Domain Detection [60.88952532574564]
This paper conducts a thorough comparison of out-of-domain intent detection methods.
We evaluate multiple contextual encoders and methods that have proven efficient on three standard datasets for intent classification.
Our main findings show that fine-tuning Transformer-based encoders on in-domain data leads to superior results.
arXiv Detail & Related papers (2021-01-11T09:10:58Z)
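For the Mahalanobis-distance approach referenced in the last entry, a minimal sketch is given below: class-conditional means and a shared covariance are estimated from in-domain Transformer sentence embeddings, and the out-of-domain score of a new utterance is its distance to the closest class centroid. The encoder choice and any feature normalization are assumptions here, not details from the paper.

```python
import numpy as np

def fit_mahalanobis(embeddings, labels):
    """Fit class means and a shared precision matrix on in-domain embeddings.

    embeddings: (N, D) sentence embeddings from a fine-tuned Transformer.
    labels: (N,) in-domain intent labels.
    """
    classes = np.unique(labels)
    means = {c: embeddings[labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate(
        [embeddings[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(embeddings)
    precision = np.linalg.pinv(cov)
    return means, precision

def ood_score(x, means, precision):
    """Distance to the nearest class centroid; larger means more likely OOD."""
    dists = [float((x - mu) @ precision @ (x - mu)) for mu in means.values()]
    return min(dists)
```

A threshold on this score then separates in-domain from out-of-domain utterances.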