Enabling hand gesture customization on wrist-worn devices
- URL: http://arxiv.org/abs/2203.15239v1
- Date: Tue, 29 Mar 2022 05:12:32 GMT
- Title: Enabling hand gesture customization on wrist-worn devices
- Authors: Xuhai Xu, Jun Gong, Carolina Brum, Lilian Liang, Bongsoo Suh, Kumar
Gupta, Yash Agarwal, Laurence Lindsey, Runchang Kang, Behrooz Shahsavari, Tu
Nguyen, Heriberto Nieto, Scott E. Hudson, Charlie Maalouf, Seyed Mousavi,
Gierad Laput
- Abstract summary: We present a framework for gesture customization requiring minimal examples from users, all without degrading the performance of existing gesture sets.
Our approach paves the way for a future where users are no longer bound to pre-existing gestures, freeing them to creatively introduce new gestures tailored to their preferences and abilities.
- Score: 28.583516259577486
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a framework for gesture customization requiring minimal examples
from users, all without degrading the performance of existing gesture sets. To
achieve this, we first deployed a large-scale study (N=500+) to collect data
and train an accelerometer-gyroscope recognition model with a cross-user
accuracy of 95.7% and a false-positive rate of 0.6 per hour when tested on
everyday non-gesture data. Next, we designed a few-shot learning framework that
derives a lightweight model from our pre-trained model, enabling knowledge
transfer without performance degradation. We validate our approach through a
user study (N=20) examining on-device customization with 12 new gestures,
resulting in average accuracies of 55.3%, 83.1%, and 87.2% when using one,
three, or five shots to add a new gesture, while maintaining the original
recognition accuracy and false-positive rate of the pre-existing gesture set.
We further evaluate the usability of our real-time implementation with a user
experience study (N=20). Our results highlight the effectiveness, learnability,
and usability of our customization framework. Our approach paves the way for a
future where users are no longer bound to pre-existing gestures, freeing them
to creatively introduce new gestures tailored to their preferences and
abilities.
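The listing contains no code, but the core pattern the abstract describes (a frozen pre-trained IMU encoder plus a small per-user head trained on a few shots alongside replayed negatives) can be sketched as follows. All class names, tensor shapes, and hyperparameters here are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of few-shot gesture customization: a frozen, pre-trained
# accelerometer/gyroscope encoder plus a small trainable head for the new
# gesture. Names and shapes are assumptions, not the paper's actual code.
import torch
import torch.nn as nn

class IMUEncoder(nn.Module):
    """Stand-in for the pre-trained accel+gyro recognition backbone."""
    def __init__(self, in_channels=6, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
    def forward(self, x):          # x: (batch, 6, time)
        return self.net(x)

encoder = IMUEncoder()             # in practice: load pre-trained weights
for p in encoder.parameters():     # freeze, so existing gestures keep working
    p.requires_grad = False

head = nn.Linear(64, 1)            # lightweight binary head for the new gesture
opt = torch.optim.Adam(head.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

# k-shot adaptation: e.g. 3 positive user recordings + stored negative data
shots = torch.randn(3, 6, 128)                 # user's new-gesture examples
negatives = torch.randn(16, 6, 128)            # replayed non-gesture data
x = torch.cat([shots, negatives])
y = torch.cat([torch.ones(3), torch.zeros(16)]).unsqueeze(1)

for _ in range(100):
    opt.zero_grad()
    logits = head(encoder(x))
    loss_fn(logits, y).backward()
    opt.step()
```

Because only the small head is trained and the shared encoder stays frozen, the pre-existing gesture classifiers are untouched, which is consistent with the reported lack of performance degradation.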
Related papers
- Wearable Sensor-Based Few-Shot Continual Learning on Hand Gestures for Motor-Impaired Individuals via Latent Embedding Exploitation [6.782362178252351]
We introduce the Latent Embedding Exploitation (LEE) mechanism in our replay-based Few-Shot Continual Learning framework.
Our method produces a diversified latent feature space by leveraging a preserved latent embedding known as gesture prior knowledge.
Our method helps motor-impaired persons use wearable devices by learning and applying their unique styles of movement.
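As a rough illustration of the replay idea (not the paper's actual LEE code), one can preserve latent embeddings of previously learned gestures and replay them while fitting a classifier on the few new shots:

```python
# Hypothetical sketch of replay with preserved latent embeddings: old-class
# embeddings ("gesture prior knowledge") are replayed alongside the few new
# shots so the classifier retains the old gestures. All values are invented.
import torch
import torch.nn as nn

embed_dim, n_old, n_new = 64, 5, 1
# Preserved latent embeddings for old gestures (would come from the encoder)
replay_z = torch.randn(50, embed_dim)
replay_y = torch.randint(0, n_old, (50,))
# Few-shot embeddings of the new user-defined gesture
new_z = torch.randn(5, embed_dim)
new_y = torch.full((5,), n_old, dtype=torch.long)   # new class index

clf = nn.Linear(embed_dim, n_old + n_new)
opt = torch.optim.Adam(clf.parameters(), lr=1e-2)

z = torch.cat([replay_z, new_z])
y = torch.cat([replay_y, new_y])
for _ in range(200):
    opt.zero_grad()
    nn.functional.cross_entropy(clf(z), y).backward()
    opt.step()
```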
arXiv Detail & Related papers (2024-05-14T21:20:27Z)
- Boosting Gesture Recognition with an Automatic Gesture Annotation Framework [10.158684480548242]
We propose a framework that can automatically annotate gesture classes and identify their temporal ranges.
Our framework consists of two key components: (1) a novel annotation model that leverages the Connectionist Temporal Classification (CTC) loss, and (2) a semi-supervised learning pipeline.
The pipeline's high-quality pseudo labels can also be used to enhance the accuracy of other downstream gesture recognition models.
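A minimal sketch of the CTC ingredient, assuming a per-frame classifier with a blank class; the shapes and class counts below are invented for illustration:

```python
# Hypothetical sketch of the CTC idea for gesture annotation: the model emits
# per-frame class probabilities (including a blank), and the CTC loss aligns
# them to an unsegmented label sequence, yielding temporal ranges without
# frame-level labels. Not the paper's actual model.
import torch
import torch.nn as nn

T, N, C = 120, 4, 6                 # frames, batch, classes (class 0 = blank)
frame_logits = torch.randn(T, N, C, requires_grad=True)
log_probs = frame_logits.log_softmax(dim=-1)

targets = torch.randint(1, C, (N, 3))            # 3 gestures per sequence
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 3, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()

# Greedy decoding recovers a frame-wise class sequence (blanks included),
# from which gesture classes and rough temporal ranges can be read off.
pred = log_probs.argmax(dim=-1)     # shape (T, N)
```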
arXiv Detail & Related papers (2024-01-20T07:11:03Z)
- Towards Open-World Gesture Recognition [19.019579924491847]
In real-world applications involving gesture recognition, such as gesture recognition based on wrist-worn devices, the data distribution may change over time.
We propose using continual learning to enable machine learning models to adapt to new tasks.
We provide design guidelines to enhance the development of an open-world wrist-worn gesture recognition process.
arXiv Detail & Related papers (2024-01-20T06:45:16Z)
- Boosting Visual-Language Models by Exploiting Hard Samples [126.35125029639168]
HELIP is a cost-effective strategy tailored to enhance the performance of existing CLIP models.
Our method allows for effortless integration with existing models' training pipelines.
On comprehensive benchmarks, HELIP consistently boosts existing models to achieve leading performance.
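A hedged sketch of one plausible hard-pair mining step, in the spirit of the title; this is an illustration, not HELIP's published procedure:

```python
# Hypothetical hard-pair mining: for each image, find the non-matching texts
# whose embeddings are most similar (hardest to distinguish) so that training
# can emphasize them. Embeddings are random stand-ins for CLIP outputs.
import torch
import torch.nn.functional as F

img_emb = F.normalize(torch.randn(64, 128), dim=1)   # pretend image embeddings
txt_emb = F.normalize(torch.randn(64, 128), dim=1)   # pretend text embeddings

sim = img_emb @ txt_emb.t()                # image-to-text similarity matrix
sim.fill_diagonal_(float('-inf'))          # ignore each pair's own caption
hard_idx = sim.topk(k=5, dim=1).indices    # 5 hardest negative texts per image
# A training pipeline could append these hard negatives to each batch, or
# up-weight their contribution in the contrastive loss.
```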
arXiv Detail & Related papers (2023-05-09T07:00:17Z)
- Fast Learning of Dynamic Hand Gesture Recognition with Few-Shot Learning Models [0.0]
We develop Few-Shot Learning models trained to recognize five or ten different dynamic hand gestures.
The models are arbitrarily interchangeable, requiring only one, two, or five examples per hand gesture.
Results show accuracies of up to 88.8% for recognizing five and up to 81.2% for recognizing ten dynamic hand gestures.
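One common way to build such interchangeable few-shot recognizers is a prototypical-network scheme; the sketch below assumes a toy encoder and invented tensor shapes, and is not necessarily the architecture used in the paper:

```python
# Hypothetical prototypical-network sketch for few-shot gesture recognition:
# class prototypes are mean embeddings of the k support examples, and a query
# is assigned to the nearest prototype.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(6 * 60, 32))  # toy embedder
n_way, k_shot = 5, 2                       # 5 gestures, 2 examples each
support = torch.randn(n_way, k_shot, 6, 60)
query = torch.randn(1, 6, 60)

with torch.no_grad():
    protos = encoder(support.view(n_way * k_shot, 6, 60))
    protos = protos.view(n_way, k_shot, -1).mean(dim=1)       # class prototypes
    q = encoder(query)
    pred = torch.cdist(q, protos).argmin(dim=1)               # nearest prototype
print(f"predicted gesture class: {pred.item()}")
```

Swapping in a different gesture vocabulary only requires new support examples; no retraining of the encoder is needed.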
arXiv Detail & Related papers (2022-12-16T09:31:15Z)
- 3D Pose Based Feedback for Physical Exercises [87.35086507661227]
We introduce a learning-based framework that identifies the mistakes made by a user.
Our framework does not rely on hard-coded rules; instead, it learns them from data.
Our approach yields 90.9% mistake identification accuracy and successfully corrects 94.2% of the mistakes.
arXiv Detail & Related papers (2022-08-05T16:15:02Z)
- A high performance fingerprint liveness detection method based on quality related features [66.41574316136379]
The system is tested on a highly challenging database comprising over 10,500 real and fake images.
The proposed solution proves robust on the multi-scenario dataset, correctly classifying 90% of samples overall.
arXiv Detail & Related papers (2021-11-02T21:09:39Z)
- To be Critical: Self-Calibrated Weakly Supervised Learning for Salient Object Detection [95.21700830273221]
Weakly-supervised salient object detection (WSOD) aims to develop saliency models using image-level annotations.
We propose a self-calibrated training strategy by explicitly establishing a mutual calibration loop between pseudo labels and network predictions.
We show that even a much smaller dataset with well-matched annotations can help models achieve better performance and generalizability.
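A heavily simplified sketch of such a mutual calibration loop, with a toy network and random tensors standing in for real images and saliency pseudo masks:

```python
# Hypothetical mutual-calibration loop between pseudo labels and predictions:
# the network trains on pseudo masks, then its thresholded predictions refresh
# those masks for the next round. Greatly simplified relative to the paper.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

images = torch.randn(4, 3, 64, 64)
pseudo = (torch.rand(4, 1, 64, 64) > 0.5).float()   # coarse initial pseudo masks

for _round in range(3):                # calibration rounds
    for _ in range(50):                # train on the current pseudo labels
        opt.zero_grad()
        bce(net(images), pseudo).backward()
        opt.step()
    with torch.no_grad():              # predictions refresh the pseudo labels
        pseudo = (net(images).sigmoid() > 0.5).float()
```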
arXiv Detail & Related papers (2021-09-04T02:45:22Z)
- Transfer Learning for Human Activity Recognition using Representational Analysis of Neural Networks [0.5898893619901381]
We propose a transfer learning framework for human activity recognition.
We show up to a 43% accuracy improvement and a 66% reduction in training time compared to a baseline without transfer learning.
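A generic freeze-and-retrain transfer-learning sketch; the paper's actual layer-transfer choices come from its representational analysis, so everything below is an assumption:

```python
# Hypothetical transfer-learning sketch for activity recognition: reuse the
# feature layers of a model trained on a source dataset, retrain only the
# classifier head on the target data. Architecture and shapes are invented.
import torch
import torch.nn as nn

source_model = nn.Sequential(
    nn.Conv1d(3, 16, 5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 6),                      # 6 source activities
)
# ...train source_model on the source domain, then:
features = source_model[:-1]               # keep the feature extractor
for p in features.parameters():
    p.requires_grad = False                # freeze the transferred layers
target_model = nn.Sequential(features, nn.Linear(16, 4))  # 4 target activities

opt = torch.optim.Adam(target_model[-1].parameters(), lr=1e-3)
x, y = torch.randn(32, 3, 100), torch.randint(0, 4, (32,))
for _ in range(100):                       # fine-tune only the head
    opt.zero_grad()
    nn.functional.cross_entropy(target_model(x), y).backward()
    opt.step()
```

Training only the head is what yields the reported training-time reduction relative to learning the whole network from scratch.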
arXiv Detail & Related papers (2020-12-05T01:35:11Z)
- Uncertainty-aware Self-training for Text Classification with Few Labels [54.13279574908808]
We study self-training as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck.
We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network.
We show that our methods, leveraging only 20-30 labeled samples per class per task for training and validation, can perform within 3% of fully supervised pre-trained language models.
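As one concrete way to obtain such uncertainty estimates, the sketch below uses Monte Carlo dropout and keeps only the most certain pseudo-labels; this is a simplified stand-in for the paper's acquisition strategy:

```python
# Hypothetical uncertainty-aware self-training step: pseudo-label unlabeled
# text with MC dropout, keep only low-variance predictions for the next round.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                      nn.Dropout(0.5), nn.Linear(64, 2))
unlabeled = torch.randn(256, 128)          # e.g. sentence embeddings

model.train()                              # keep dropout active for MC sampling
with torch.no_grad():
    probs = torch.stack([model(unlabeled).softmax(-1) for _ in range(10)])
mean = probs.mean(0)                       # predictive mean over 10 passes
var = probs.var(0).sum(-1)                 # predictive variance = uncertainty

keep = var < torch.quantile(var, 0.25)     # keep the 25% most certain samples
pseudo_x, pseudo_y = unlabeled[keep], mean[keep].argmax(-1)
# ...retrain the model on (pseudo_x, pseudo_y) plus the few labeled examples
```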
arXiv Detail & Related papers (2020-06-27T08:13:58Z)
- A Simple Framework for Contrastive Learning of Visual Representations [116.37752766922407]
This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
We show that composition of data augmentations plays a critical role in defining effective predictive tasks.
We are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet.
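The heart of SimCLR is the NT-Xent contrastive loss over two augmented views of each sample; a minimal sketch follows, with additive noise standing in for real image augmentations:

```python
# Minimal NT-Xent (SimCLR) sketch: two augmented views per sample, and each
# view must pick out its partner among all other samples in the batch.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    z = F.normalize(torch.cat([z1, z2]), dim=1)      # (2N, d)
    sim = z @ z.t() / tau                            # scaled cosine similarities
    n = z1.size(0)
    sim.fill_diagonal_(float('-inf'))                # exclude self-pairs
    # the positive for row i is i+n (and vice versa)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

x = torch.randn(8, 32)                               # pretend embeddings
view1 = x + 0.1 * torch.randn_like(x)                # fake "augmentation" 1
view2 = x + 0.1 * torch.randn_like(x)                # fake "augmentation" 2
loss = nt_xent(view1, view2)
```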
arXiv Detail & Related papers (2020-02-13T18:50:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.