Learning to Score Sign Language with Two-stage Method
- URL: http://arxiv.org/abs/2404.10383v2
- Date: Wed, 17 Apr 2024 01:05:07 GMT
- Title: Learning to Score Sign Language with Two-stage Method
- Authors: Hongli Wen, Yang Xu
- Abstract summary: We analyze the existing technologies for performance assessment and adopt methods that perform well in human pose reconstruction tasks combined with motion rotation embedded expressions.
Our method provides good score feedback mechanisms and high consistency with professional assessments compared to end-to-end evaluations.
- Score: 2.665036498336221
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human action recognition and performance assessment have been hot research topics in recent years. Recognition problems have mature solutions in the field of sign language, but past research in performance analysis has focused on competitive sports and medical training, overlooking scoring assessment, which is an important part of the digitalization of sign language teaching. In this paper, we analyze the existing technologies for performance assessment and adopt methods that perform well in human pose reconstruction tasks combined with motion rotation embedded expressions, proposing a two-stage sign language performance evaluation pipeline. Our analysis shows that choosing reconstruction tasks in the first stage can provide more expressive features, and that smoothing methods can provide an effective reference for assessment. Experiments show that our method provides good score feedback mechanisms and high consistency with professional assessments compared to end-to-end evaluations.
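The abstract's two stages (extracting rotation-based pose features, then smoothing and scoring them against a reference) can be illustrated with a minimal sketch. This is not the paper's implementation: the joint-angle features, moving-average smoother, and linear angular-deviation-to-score mapping below are all simplifying assumptions chosen for illustration.

```python
import math

def joint_angles(frame):
    """Stage 1 (assumed feature): angle at each interior joint of a
    list of (x, y) keypoints, a simple rotation-style representation."""
    angles = []
    for i in range(1, len(frame) - 1):
        (ax, ay), (bx, by), (cx, cy) = frame[i - 1], frame[i], frame[i + 1]
        v1, v2 = (ax - bx, ay - by), (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        # clamp to guard against floating-point drift outside [-1, 1]
        angles.append(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angles

def smooth(seq, window=3):
    """Moving average over per-frame angle vectors (assumed smoother)."""
    out = []
    for i in range(len(seq)):
        lo, hi = max(0, i - window // 2), min(len(seq), i + window // 2 + 1)
        chunk = seq[lo:hi]
        out.append([sum(vals) / len(chunk) for vals in zip(*chunk)])
    return out

def score(student, reference):
    """Stage 2 (assumed scoring rule): mean absolute angular deviation
    between smoothed sequences, mapped linearly to a 0-100 score."""
    err, n = 0.0, 0
    for s, r in zip(student, reference):
        for a, b in zip(s, r):
            err += abs(a - b)
            n += 1
    return max(0.0, 100.0 * (1.0 - (err / n) / math.pi))
```

Under this sketch, a learner sequence identical to the smoothed reference scores 100, and the score decays as per-joint angles diverge; the actual paper's features and reference construction are more sophisticated.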
Related papers
- A Review of Deep Learning Approaches for Non-Invasive Cognitive Impairment Detection [35.31259047578382]
This review paper explores recent advances in deep learning approaches for non-invasive cognitive impairment detection.
We examine various non-invasive indicators of cognitive decline, including speech and language, facial, and motoric mobility.
Despite significant progress, several challenges remain, including data standardization and accessibility, model explainability, longitudinal analysis limitations, and clinical adaptation.
arXiv Detail & Related papers (2024-10-25T17:44:59Z) - Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition [70.60872754129832]
The first NeurIPS competition on unlearning sought to stimulate the development of novel algorithms.
Nearly 1,200 teams from across the world participated.
We analyze top solutions and delve into discussions on benchmarking unlearning.
arXiv Detail & Related papers (2024-06-13T12:58:00Z) - Evaluating the IWSLT2023 Speech Translation Tasks: Human Annotations, Automatic Metrics, and Segmentation [50.60733773088296]
We conduct a comprehensive human evaluation of the results of several shared tasks from the last International Workshop on Spoken Language Translation (IWSLT 2023)
We propose an effective evaluation strategy based on automatic resegmentation and direct assessment with segment context.
Our analysis revealed that: 1) the proposed evaluation strategy is robust and its scores are well-correlated with other types of human judgements; 2) automatic metrics are usually, but not always, well-correlated with direct assessment scores; and 3) COMET is a slightly stronger automatic metric than chrF.
arXiv Detail & Related papers (2024-06-06T09:18:42Z) - Evaluation Framework for Feedback Generation Methods in Skeletal Movement Assessment [0.65268245109828]
We propose terminology and criteria for the classification, evaluation, and comparison of feedback generation solutions.
To our knowledge, this is the first work that formulates feedback generation in skeletal movement assessment.
arXiv Detail & Related papers (2024-04-14T21:14:47Z) - A Matter of Annotation: An Empirical Study on In Situ and Self-Recall Activity Annotations from Wearable Sensors [56.554277096170246]
We present an empirical study that evaluates and contrasts four commonly employed annotation methods in user studies focused on in-the-wild data collection.
For both the user-driven, in situ annotations, where participants annotate their activities during the actual recording process, and the recall methods, where participants retrospectively annotate their data at the end of each day, the participants had the flexibility to select their own set of activity classes and corresponding labels.
arXiv Detail & Related papers (2023-05-15T16:02:56Z) - Impact of combining human and analytics feedback on students' engagement with, and performance in, reflective writing tasks [3.4843936798388015]
This study proposes a personalised behavioural feedback intervention based on students' writing engagement analytics.
In a semester-long experimental study involving 81 postgraduate students, its impact on learning engagement and performance was studied.
arXiv Detail & Related papers (2022-11-15T15:41:26Z) - Estimating Presentation Competence using Multimodal Nonverbal Behavioral Cues [7.340483819263093]
Public speaking and presentation competence play an essential role in many areas of social interaction.
One approach that can promote efficient development of presentation competence is the automated analysis of human behavior during a speech.
In this work, we investigate the contribution of different nonverbal behavioral cues, namely, facial, body pose-based, and audio-related features, to estimate presentation competence.
arXiv Detail & Related papers (2021-05-06T13:09:41Z) - Towards Automatic Evaluation of Dialog Systems: A Model-Free Off-Policy Evaluation Approach [84.02388020258141]
We propose a new framework named ENIGMA for estimating human evaluation scores based on off-policy evaluation in reinforcement learning.
ENIGMA only requires a handful of pre-collected experience data, and therefore does not involve human interaction with the target policy during the evaluation.
Our experiments show that ENIGMA significantly outperforms existing methods in terms of correlation with human evaluation scores.
arXiv Detail & Related papers (2021-02-20T03:29:20Z) - A Review of Computational Approaches for Evaluation of Rehabilitation Exercises [58.720142291102135]
This paper reviews computational approaches for evaluating patient performance in rehabilitation programs using motion capture systems.
The reviewed computational methods for exercise evaluation are grouped into three main categories: discrete movement score, rule-based, and template-based approaches.
arXiv Detail & Related papers (2020-02-29T22:18:56Z) - Facial Feedback for Reinforcement Learning: A Case Study and Offline Analysis Using the TAMER Framework [51.237191651923666]
We investigate the potential of agent learning from trainers' facial expressions via interpreting them as evaluative feedback.
With a designed CNN-RNN model, our analysis shows that instructing trainers to use facial expressions, together with competition, can improve the accuracy of estimating positive and negative feedback.
Our results with a simulation experiment show that learning solely from predicted feedback based on facial expressions is possible.
arXiv Detail & Related papers (2020-01-23T17:50:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.