Towards Open-World Gesture Recognition
- URL: http://arxiv.org/abs/2401.11144v1
- Date: Sat, 20 Jan 2024 06:45:16 GMT
- Title: Towards Open-World Gesture Recognition
- Authors: Junxiao Shen, Matthias De Lange, Xuhai "Orson" Xu, Enmin Zhou, Ran
Tan, Naveen Suda, Maciej Lazarewicz, Per Ola Kristensson, Amy Karlson, Evan
Strasnick
- Abstract summary: We formulate this problem of adapting recognition models to new tasks, where new data patterns emerge, as open-world gesture recognition (OWGR).
We propose a design engineering approach that enables offline analysis on a collected large-scale dataset with various parameters.
Design guidelines are provided to enhance the development of an open-world wrist-worn gesture recognition process.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Static machine learning methods in gesture recognition assume that training
and test data come from the same underlying distribution. However, in
real-world applications involving gesture recognition on wrist-worn devices,
data distribution may change over time. We formulate this problem of adapting
recognition models to new tasks, where new data patterns emerge, as open-world
gesture recognition (OWGR). We propose leveraging continual learning to make
machine learning models adaptive to new tasks without degrading performance on
previously learned tasks. However, the exploration of parameters for questions
around when and how to train and deploy recognition models requires
time-consuming user studies and is sometimes impractical. To address this
challenge, we propose a design engineering approach that enables offline
analysis on a collected large-scale dataset with various parameters and
compares different continual learning methods. Finally, design guidelines are
provided to enhance the development of an open-world wrist-worn gesture
recognition process.
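The abstract's core idea is continual learning: updating a model on new gesture data without degrading performance on earlier tasks. One common family of methods for this is rehearsal (experience replay), in which a small memory of past-task samples is mixed into each new-task batch. The sketch below is a minimal, generic illustration of that idea, not the paper's specific method; the `ReplayBuffer` class and `replay_batch` helper are hypothetical names introduced here for illustration.

```python
import random

class ReplayBuffer:
    """Fixed-size memory of (features, label) pairs from past tasks."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0  # total samples ever offered to the buffer

    def add(self, sample):
        # Reservoir sampling keeps an approximately uniform subset
        # of everything seen so far, across all tasks.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = sample

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def replay_batch(new_batch, buffer, replay_ratio=0.5):
    """Mix replayed past-task samples into a new-task batch,
    then store the new samples in the buffer for future replay."""
    k = int(len(new_batch) * replay_ratio)
    mixed = list(new_batch) + buffer.sample(k)
    for sample in new_batch:
        buffer.add(sample)
    random.shuffle(mixed)
    return mixed
```

Training on such mixed batches lets the model revisit old gestures while learning new ones, which is one simple way to limit catastrophic forgetting.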
Related papers
- Deep self-supervised learning with visualisation for automatic gesture recognition [1.6647755388646919]
Gesture is an important means of non-verbal communication whose visual modality allows humans to convey information during interaction, facilitating both interpersonal and human-machine interactions.
In this work, we explore three different means to recognise hand signs using deep learning: supervised learning based methods, self-supervised methods and visualisation based techniques applied to 3D moving skeleton data.
arXiv Detail & Related papers (2024-06-18T09:44:55Z) - Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC)
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z) - Combating Missing Modalities in Egocentric Videos at Test Time [92.38662956154256]
Real-world applications often face challenges with incomplete modalities due to privacy concerns, efficiency needs, or hardware issues.
We propose a novel approach to address this issue at test time without requiring retraining.
MiDl represents the first self-supervised, online solution for handling missing modalities exclusively at test time.
arXiv Detail & Related papers (2024-04-23T16:01:33Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence demonstrating that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
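The core operation behind projected-gradient unlearning is to remove, from each update direction, the components that lie in the subspace spanned by gradients of the data to be retained, so the update (approximately) does not disturb retained knowledge. The snippet below is a minimal linear-algebra sketch of that projection, not the paper's exact algorithm; the function name and the use of a QR factorization are assumptions made here for illustration.

```python
import numpy as np

def project_orthogonal(update, retain_grads):
    """Remove from `update` its components along the span of the
    retained-data gradients, so applying the projected update
    (to first order) leaves the retained-data loss unchanged."""
    # Build an orthonormal basis Q for the retained-gradient subspace.
    A = np.stack(retain_grads, axis=1)  # shape (d, k)
    Q, _ = np.linalg.qr(A)              # reduced QR: Q is (d, k)
    # Subtract the projection of `update` onto that subspace.
    return update - Q @ (Q.T @ update)
```

For example, if the retained gradients all point along the first coordinate axis, the projection zeroes the first component of the update and leaves the rest untouched.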
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z) - Continuous Learning Based Novelty Aware Emotion Recognition System [0.0]
Current work in human emotion recognition follows the traditional closed-world learning approach, governed by rigid rules with no consideration of novelty.
In this work, we propose a continuous learning based approach to deal with novelty in the automatic emotion recognition task.
arXiv Detail & Related papers (2023-06-14T20:34:07Z) - Continual Learning from Demonstration of Robotics Skills [5.573543601558405]
Methods for teaching motion skills to robots focus on training for a single skill at a time.
We propose an approach for continual learning from demonstration using hypernetworks and neural ordinary differential equation solvers.
arXiv Detail & Related papers (2022-02-14T16:26:52Z) - What Matters in Learning from Offline Human Demonstrations for Robot
Manipulation [64.43440450794495]
We conduct an extensive study of six offline learning algorithms for robot manipulation.
Our study analyzes the most critical challenges when learning from offline human data.
We highlight opportunities for learning from human datasets.
arXiv Detail & Related papers (2021-08-06T20:48:30Z) - A case for new neural network smoothness constraints [34.373610792075205]
We show that model smoothness is a useful inductive bias which aids generalization, adversarial robustness, generative modeling and reinforcement learning.
We conclude that new advances in the field are hinging on finding ways to incorporate data, tasks and learning into our definitions of smoothness.
arXiv Detail & Related papers (2020-12-14T22:07:32Z) - A System for Real-Time Interactive Analysis of Deep Learning Training [66.06880335222529]
Currently available systems are limited to monitoring only the logged data that must be specified before the training process starts.
We present a new system that enables users to perform interactive queries on live processes generating real-time information.
arXiv Detail & Related papers (2020-01-05T11:33:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.