It's all about you: Personalized in-Vehicle Gesture Recognition with a
Time-of-Flight Camera
- URL: http://arxiv.org/abs/2310.01659v1
- Date: Mon, 2 Oct 2023 21:48:19 GMT
- Title: It's all about you: Personalized in-Vehicle Gesture Recognition with a
Time-of-Flight Camera
- Authors: Amr Gomaa, Guillermo Reyes, Michael Feld
- Abstract summary: We propose a model-adaptation approach to personalize the training of a CNN-LSTM model.
Our approach contributes to the field of dynamic hand gesture recognition while driving.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite significant advances in gesture recognition technology, recognizing
gestures in a driving environment remains challenging due to limited and costly
data and the dynamic, ever-changing nature of that environment. In this work, we propose a
model-adaptation approach to personalize the training of a CNN-LSTM model and
improve recognition accuracy while reducing data requirements. Our approach
contributes to the field of dynamic hand gesture recognition while driving by
providing a more efficient and accurate method that can be customized for
individual users, ultimately enhancing the safety and convenience of in-vehicle
interactions, as well as the driver's experience and trust in the system. We incorporate
hardware enhancement using a time-of-flight camera and algorithmic enhancement
through data augmentation, personalized adaptation, and incremental learning
techniques. We evaluate the performance of our approach in terms of recognition
accuracy, achieving up to 90%, and show the effectiveness of personalized
adaptation and incremental learning for a user-centered design.
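To make the personalization idea concrete, the following is a minimal sketch of how model adaptation with a small amount of user-specific data could look in PyTorch. The network layout, tensor shapes, hyperparameters, and the choice to freeze the convolutional encoder while fine-tuning only the LSTM and classifier head are illustrative assumptions, not the authors' implementation.

# Minimal sketch of personalized adaptation for a CNN-LSTM gesture classifier.
# Shapes, layer sizes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class CNNLSTMGestureNet(nn.Module):
    """Per-frame CNN features fed into an LSTM, followed by a gesture classifier."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Frame encoder for single-channel time-of-flight depth frames.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),            # -> (B*T, 32)
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)  # (B, T, 32)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                             # (B, num_classes)

def personalize(model, user_clips, user_labels, epochs=5, lr=1e-4):
    """Adapt a pretrained model to one driver using a handful of samples.

    The CNN backbone stays frozen; only the temporal model and classifier head
    are fine-tuned, which keeps data requirements small and allows incremental
    updates as more user samples arrive.
    """
    for p in model.cnn.parameters():
        p.requires_grad = False
    params = list(model.lstm.parameters()) + list(model.head.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(user_clips), user_labels)
        loss.backward()
        opt.step()

if __name__ == "__main__":
    # Tiny smoke test with random stand-in depth clips: 8 clips of 16 frames, 64x64.
    model = CNNLSTMGestureNet(num_classes=10)
    clips = torch.randn(8, 16, 1, 64, 64)
    labels = torch.randint(0, 10, (8,))
    personalize(model, clips, labels)

Incremental learning then amounts to re-running the same adaptation routine from the previously adapted weights whenever a new batch of user-specific gestures becomes available.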
Related papers
- Benchmarking Adaptive Intelligence and Computer Vision on Human-Robot Collaboration [0.0]
Human-Robot Collaboration (HRC) is vital in Industry 4.0, using sensors, digital twins, collaborative robots (cobots) and intention-recognition models to enable efficient manufacturing processes.
We address concept drift by integrating Adaptive Intelligence and self-labeling to improve the resilience of intention-recognition in an HRC system.
arXiv Detail & Related papers (2024-09-30T01:25:48Z)
- UniLearn: Enhancing Dynamic Facial Expression Recognition through Unified Pre-Training and Fine-Tuning on Images and Videos [83.48170683672427]
UniLearn is a unified learning paradigm that integrates static facial expression recognition data to enhance the dynamic facial expression recognition (DFER) task.
UniLearn consistently achieves state-of-the-art performance on the FERV39K, MAFW, and DFEW benchmarks, with weighted average recall (WAR) of 53.65%, 58.44%, and 76.68%, respectively.
arXiv Detail & Related papers (2024-09-10T01:57:57Z)
- Decoupled Prompt-Adapter Tuning for Continual Activity Recognition [6.224769485481242]
Action recognition technology plays a vital role in enhancing security through surveillance systems, enabling better patient monitoring in healthcare, and facilitating seamless human-AI collaboration in domains such as manufacturing and assistive technologies.
We propose Decoupled Prompt-Adapter Tuning (DPAT), a novel framework that integrates adapters for capturing spatial-temporal information and learnable prompts for mitigating catastrophic forgetting through a decoupled training strategy.
DPAT consistently achieves state-of-the-art performance across several challenging action recognition benchmarks, thus demonstrating the effectiveness of our model in the domain of continual action recognition.
arXiv Detail & Related papers (2024-07-20T08:56:04Z)
- FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning [57.38427653043984]
Federated learning (FL) has emerged as a prominent approach for collaborative training of machine learning models across distributed clients.
We introduce FedCAda, an innovative federated client adaptive algorithm designed to tackle this challenge.
We demonstrate that FedCAda outperforms the state-of-the-art methods in terms of adaptability, convergence, stability, and overall performance.
arXiv Detail & Related papers (2024-05-20T06:12:33Z)
- Agile gesture recognition for low-power applications: customisation for generalisation [41.728933551492275]
Automated hand gesture recognition has long been a focal point in the AI community.
There is an increasing demand for gesture recognition technologies that operate on low-power sensor devices.
In this study, we unveil a novel methodology for pattern recognition systems using adaptive and agile error correction.
arXiv Detail & Related papers (2024-03-12T19:34:18Z)
- Towards Open-World Gesture Recognition [19.019579924491847]
In real-world applications involving gesture recognition, such as gesture recognition based on wrist-worn devices, the data distribution may change over time.
We propose the use of continual learning to enable machine learning models to be adaptive to new tasks.
We provide design guidelines to enhance the development of an open-world wrist-worn gesture recognition process.
arXiv Detail & Related papers (2024-01-20T06:45:16Z)
- Dynamic Hand Gesture-Featured Human Motor Adaptation in Tool Delivery using Voice Recognition [5.13619372598999]
This paper introduces an innovative human-robot collaborative framework.
It seamlessly integrates hand gesture and dynamic movement recognition, voice recognition, and a switchable control adaptation strategy.
Experimental results demonstrate superior performance in hand gesture recognition.
arXiv Detail & Related papers (2023-09-20T14:51:09Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
- EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration [49.90228618894857]
We introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration.
Our evaluation demonstrates superior performance in synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-02T03:49:54Z)
- An automatic differentiation system for the age of differential privacy [65.35244647521989]
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
arXiv Detail & Related papers (2021-09-22T08:07:42Z)
- Improving Robustness of Learning-based Autonomous Steering Using Adversarial Images [58.287120077778205]
We introduce a framework for analyzing the robustness of the learning algorithm with respect to varying image-input quality for autonomous driving.
Using the results of sensitivity analysis, we propose an algorithm to improve the overall performance of the task of "learning to steer" (a minimal sensitivity-check sketch follows this list).
arXiv Detail & Related papers (2021-02-26T02:08:07Z)
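The last entry above evaluates how a learned steering model reacts to degraded image input. Below is a minimal sketch of such a sensitivity analysis, using assumed degradations (noise, brightness, blur), assumed severity levels, and a stand-in steering regressor; none of these specifics come from the paper.

# Minimal sketch of an input-sensitivity check for a learning-based steering model.
# The degradations, severity levels, and stand-in model are illustrative assumptions.
import torch
import torch.nn as nn

def degrade(images: torch.Tensor, kind: str, severity: float) -> torch.Tensor:
    """Apply a simple image-quality degradation to a batch (B, 3, H, W) in [0, 1]."""
    if kind == "noise":
        return (images + severity * torch.randn_like(images)).clamp(0, 1)
    if kind == "brightness":
        return (images * (1.0 - severity)).clamp(0, 1)
    if kind == "blur":  # crude box blur via average pooling with an odd kernel
        k = 1 + 2 * round(3 * severity)
        return nn.functional.avg_pool2d(images, k, stride=1, padding=k // 2)
    raise ValueError(kind)

@torch.no_grad()
def sensitivity_report(model: nn.Module, images: torch.Tensor) -> dict:
    """Mean absolute change in the predicted steering angle per degradation level."""
    model.eval()
    clean = model(images)
    report = {}
    for kind in ("noise", "brightness", "blur"):
        for severity in (0.1, 0.3, 0.5):
            shift = (model(degrade(images, kind, severity)) - clean).abs().mean()
            report[(kind, severity)] = shift.item()
    return report

if __name__ == "__main__":
    # Stand-in steering regressor: one conv layer + global pooling -> a single angle.
    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
    )
    batch = torch.rand(4, 3, 66, 200)  # four dashboard-camera frames in [0, 1]
    for (kind, severity), shift in sensitivity_report(model, batch).items():
        print(f"{kind:<10} severity={severity:.1f}  mean |delta angle| = {shift:.4f}")

Degradation levels that produce large prediction shifts indicate where the steering model is most fragile and where hardening (for example, training on perturbed images) would help most.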