GesSure -- A Robust Face-Authentication enabled Dynamic Gesture
Recognition GUI Application
- URL: http://arxiv.org/abs/2207.11033v1
- Date: Fri, 22 Jul 2022 12:14:35 GMT
- Title: GesSure -- A Robust Face-Authentication enabled Dynamic Gesture
Recognition GUI Application
- Authors: Ankit Jha, Ishita Pratham G. Shenwai, Ayush Batra, Siddharth Kotian,
Piyush Modi
- Abstract summary: This paper aims to design a robust, face-verification-enabled gesture recognition system.
We use meaningful and relevant gestures for task operation, resulting in a better user experience.
Our prototype has successfully executed context-dependent tasks like save, print, control video-player operations and exit, and context-free operating system tasks like sleep, shut-down, and unlock intuitively.
- Score: 1.3649494534428745
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Using physical interactive devices like mice and keyboards hinders
naturalistic human-machine interaction and increases the probability of surface
contact during a pandemic. Existing gesture-recognition systems do not possess
user authentication, making them unreliable. Static gestures in current
gesture-recognition technology introduce long adaptation periods and reduce
user compatibility. Our technology places a strong emphasis on user recognition
and safety. We use meaningful and relevant gestures for task operation,
resulting in a better user experience. This paper aims to design a robust,
face-verification-enabled gesture recognition system that utilizes a graphical
user interface and primarily focuses on security through user recognition and
authorization. The face model uses MTCNN and FaceNet to verify the user, while
our LSTM-CNN architecture performs gesture recognition, achieving 95% accuracy
across five gesture classes. The prototype developed through our research has
successfully executed context-dependent tasks like save, print, control
video-player operations and exit, and context-free operating system tasks like
sleep, shut-down, and unlock intuitively. Our application and dataset are
available as open source.
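The verify-then-gesture flow described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the embedding dimension, the distance threshold, and the `verify_user` helper are assumptions; in the real system MTCNN would crop the face from the frame and FaceNet would produce the embeddings compared here.

```python
import numpy as np

# Hypothetical sketch of the face-verification gate. In the paper's
# pipeline, 128-D embeddings would come from MTCNN-cropped faces passed
# through FaceNet; here they are plain NumPy vectors so that only the
# decision logic is shown.

EMBEDDING_DIM = 128   # FaceNet's usual embedding size (assumption)
THRESHOLD = 1.0       # illustrative L2 cutoff, not the paper's value

def l2_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two face embeddings."""
    return float(np.linalg.norm(a - b))

def verify_user(live: np.ndarray, enrolled: np.ndarray,
                threshold: float = THRESHOLD) -> bool:
    """Accept the user only if the live embedding is close to the
    enrolled one; gesture commands would be ignored until this
    returns True."""
    return l2_distance(live, enrolled) < threshold

# Toy usage: a near-identical embedding verifies, a distant one is rejected.
enrolled = np.zeros(EMBEDDING_DIM)
same_user = np.full(EMBEDDING_DIM, 0.01)
impostor = np.full(EMBEDDING_DIM, 0.5)

print(verify_user(same_user, enrolled))  # True
print(verify_user(impostor, enrolled))   # False
```

Gating gesture recognition behind this check is what distinguishes the system from the unauthenticated gesture interfaces the abstract criticizes: only after `verify_user` succeeds would the LSTM-CNN classifier map a frame sequence to one of the five gesture classes.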
Related papers
- Dynamic Hand Gesture-Featured Human Motor Adaptation in Tool Delivery
using Voice Recognition [5.13619372598999]
This paper introduces an innovative human-robot collaborative framework.
It seamlessly integrates hand gesture and dynamic movement recognition, voice recognition, and a switchable control adaptation strategy.
Experiment results have demonstrated superior performance in hand gesture recognition.
arXiv Detail & Related papers (2023-09-20T14:51:09Z)
- Agile gesture recognition for capacitive sensing devices: adapting
on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- Real-Time Gesture Recognition with Virtual Glove Markers [1.8352113484137629]
A real-time computer vision-based human-computer interaction tool for gesture recognition applications is proposed.
The system would be effective in real-time applications including social interaction through telepresence and rehabilitation.
arXiv Detail & Related papers (2022-07-06T14:56:08Z)
- The Gesture Authoring Space: Authoring Customised Hand Gestures for
Grasping Virtual Objects in Immersive Virtual Environments [81.5101473684021]
This work proposes a hand gesture authoring tool for object specific grab gestures allowing virtual objects to be grabbed as in the real world.
The presented solution uses template matching for gesture recognition and requires no technical knowledge to design and create custom tailored hand gestures.
The study showed that gestures created with the proposed approach are perceived by users as a more natural input modality than the others.
arXiv Detail & Related papers (2022-07-03T18:33:33Z)
- Snapture -- A Novel Neural Architecture for Combined Static and Dynamic
Hand Gesture Recognition [19.320551882950706]
We propose a novel hybrid hand gesture recognition system.
Our architecture enables learning both static and dynamic gestures.
Our work contributes both to gesture recognition research and machine learning applications for non-verbal communication with robots.
arXiv Detail & Related papers (2022-05-28T11:12:38Z)
- First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual
Information Maximization [112.40598205054994]
We formalize this idea as a completely unsupervised objective for optimizing interfaces.
We conduct an observational study on 540K examples of users operating various keyboard and eye gaze interfaces for typing, controlling simulated robots, and playing video games.
The results show that our mutual information scores are predictive of the ground-truth task completion metrics in a variety of domains.
arXiv Detail & Related papers (2022-05-24T21:57:18Z)
- ASHA: Assistive Teleoperation via Human-in-the-Loop Reinforcement
Learning [91.58711082348293]
Reinforcement learning from online user feedback on the system's performance presents a natural solution to this problem.
This approach tends to require a large amount of human-in-the-loop training data, especially when feedback is sparse.
We propose a hierarchical solution that learns efficiently from sparse user feedback.
arXiv Detail & Related papers (2022-02-05T02:01:19Z)
- Gestop: Customizable Gesture Control of Computer Systems [0.3553493344868413]
Gestop is a framework that learns to detect gestures from demonstrations and is customizable by end-users.
It enables users to interact in real-time with computers having only RGB cameras, using gestures.
arXiv Detail & Related papers (2020-10-25T19:13:01Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
- An adversarial learning framework for preserving users' anonymity in
face-based emotion recognition [6.9581841997309475]
This paper proposes an adversarial learning framework which relies on a convolutional neural network (CNN) architecture trained through an iterative procedure.
Results indicate that the proposed approach can learn a convolutional transformation for preserving emotion recognition accuracy and degrading face identity recognition.
arXiv Detail & Related papers (2020-01-16T22:45:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.