Computer Vision-Driven Gesture Recognition: Toward Natural and Intuitive Human-Computer Interaction
- URL: http://arxiv.org/abs/2412.18321v1
- Date: Tue, 24 Dec 2024 10:13:20 GMT
- Title: Computer Vision-Driven Gesture Recognition: Toward Natural and Intuitive Human-Computer Interaction
- Authors: Fenghua Shao, Tong Zhang, Shang Gao, Qi Sun, Liuqingqing Yang
- Abstract summary: This study explores the application of natural gesture recognition based on computer vision in human-computer interaction.
Connecting the palm to each finger joint forms models of both static and dynamic hand gestures.
Experimental results show that this method can effectively recognize various gestures and maintain high recognition accuracy and real-time response capabilities.
- Abstract: This study explores the application of natural, computer-vision-based gesture recognition in human-computer interaction, aiming to improve the fluency and naturalness of interaction through gesture recognition technology. In fields such as virtual reality, augmented reality, and the smart home, traditional input methods increasingly fail to meet users' expectations for the interactive experience, and gestures have attracted growing attention as an intuitive and convenient alternative. This paper proposes a gesture recognition method based on a three-dimensional hand skeleton model. By simulating the three-dimensional spatial distribution of hand joints, a simplified hand skeleton structure is constructed; connecting the palm to each finger joint then yields models of both static and dynamic gestures, which further improves the accuracy and efficiency of recognition. Experimental results show that the method can effectively recognize various gestures while maintaining high recognition accuracy and real-time response across different environments. In addition, combining it with multimodal technologies such as eye tracking can further raise the intelligence level of the gesture recognition system and bring a richer, more intuitive user experience. In the future, with the continued development of computer vision, deep learning, and multimodal interaction technology, gesture-based natural interaction will play an important role in a wider range of application scenarios and drive revolutionary progress in human-computer interaction.
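To make the skeleton-based method concrete, here is a minimal sketch of such a model. It assumes a MediaPipe-style 21-landmark hand (wrist plus four joints per finger); the layout, edge list, and angle threshold are illustrative assumptions, not details taken from the paper. The skeleton connects the palm to each finger chain, and a coarse static classifier reads finger-flexion angles off the joints.

```python
import numpy as np

# MediaPipe-style 21-landmark layout (an assumption of this sketch):
# index 0 is the wrist/palm base, then four joints per finger.
FINGERS = {
    "thumb":  [1, 2, 3, 4],
    "index":  [5, 6, 7, 8],
    "middle": [9, 10, 11, 12],
    "ring":   [13, 14, 15, 16],
    "pinky":  [17, 18, 19, 20],
}

# Simplified skeleton: palm-to-finger edges plus joint-to-joint links
# along each finger, mirroring the "connect palm and finger joints" idea.
SKELETON_EDGES = [(0, chain[0]) for chain in FINGERS.values()] + [
    (a, b) for chain in FINGERS.values() for a, b in zip(chain, chain[1:])
]

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by the segments b->a and b->c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def classify_static_gesture(joints):
    """Coarse static classifier over a (21, 3) array of joint positions."""
    extended = []
    for base, mid, next_joint, _tip in FINGERS.values():
        # A finger counts as extended if its middle joint is nearly straight.
        extended.append(
            joint_angle(joints[base], joints[mid], joints[next_joint]) > 160.0)
    if all(extended):
        return "open_palm"
    if not any(extended):
        return "fist"
    return "other"
```

A dynamic gesture would then be a sequence of such joint arrays, with recognition driven by how the angles and positions evolve over time.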
Related papers
- Interpreting Hand gestures using Object Detection and Digits Classification (arXiv, 2024-07-15)
This research aims to develop a robust system that can accurately recognize and classify hand gestures representing numbers.
The proposed approach involves collecting a dataset of hand gesture images, preprocessing and enhancing the images, extracting relevant features, and training a machine learning model.
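The summary describes a standard supervised pipeline: collect gesture images, preprocess them, extract features, and train a classifier. A minimal scikit-learn sketch of that pipeline, with random arrays standing in for the real dataset, might look like this (the model choice and sizes are assumptions, not the paper's):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in dataset: flattened, preprocessed 32x32 gesture crops with
# digit labels 0-9. Replace with features extracted from real images.
rng = np.random.default_rng(0)
X = rng.random((500, 32 * 32))
y = rng.integers(0, 10, 500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Feature scaling followed by an SVM; any classifier could stand in here.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```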
- Advancements in Tactile Hand Gesture Recognition for Enhanced Human-Machine Interaction (arXiv, 2024-05-27)
Motivated by the growing interest in enhancing intuitive physical Human-Machine Interaction (HMI), this study proposes a robust tactile hand gesture recognition system.
We performed a comprehensive evaluation of different hand gesture recognition approaches for a large area tactile sensing interface (touch interface) constructed from conductive textiles.
- Human-oriented Representation Learning for Robotic Manipulation (arXiv, 2023-10-04)
Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with the environments in manipulation tasks.
We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy-learning.
- See, Hear, and Feel: Smart Sensory Fusion for Robotic Manipulation (arXiv, 2022-12-07)
We study how visual, auditory, and tactile perception can jointly help robots to solve complex manipulation tasks.
We build a robot system that can see with a camera, hear with a contact microphone, and feel with a vision-based tactile sensor.
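As a loose illustration of that see/hear/feel design, each sensor stream could be encoded separately and the embeddings concatenated for a downstream head; every dimension below is a made-up placeholder, not the paper's architecture:

```python
import torch
from torch import nn

class SensoryFusion(nn.Module):
    """Encode vision, audio, and touch separately, then fuse by concatenation."""
    def __init__(self, emb=64, n_actions=10):
        super().__init__()
        self.vision = nn.Sequential(nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                    nn.Linear(16, emb))
        self.audio = nn.Sequential(nn.Conv1d(1, 16, 9, stride=4), nn.ReLU(),
                                   nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                   nn.Linear(16, emb))
        self.touch = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, emb), nn.ReLU())
        self.head = nn.Linear(3 * emb, n_actions)

    def forward(self, img, wav, tac):
        z = torch.cat([self.vision(img), self.audio(wav), self.touch(tac)], dim=-1)
        return self.head(z)

model = SensoryFusion()
out = model(torch.rand(2, 3, 96, 96),   # camera frames
            torch.rand(2, 1, 4000),     # contact-microphone snippets
            torch.rand(2, 1, 32, 32))   # vision-based tactile images
print(out.shape)  # torch.Size([2, 10])
```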
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction (arXiv, 2022-10-03)
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
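As a rough sketch of what decoding finger-wise forces from EMG could look like, the following maps a window of multi-channel EMG samples to five per-finger force estimates; the channel count, window length, and network are assumptions for illustration, not the interface from the paper:

```python
import torch
from torch import nn

# Hypothetical shapes: 8 EMG channels over a 64-sample window -> 5 finger forces.
N_CHANNELS, WINDOW, N_FINGERS = 8, 64, 5

class ForceDecoder(nn.Module):
    """Tiny 1-D CNN mapping a window of EMG samples to per-finger forces."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, N_FINGERS),
        )

    def forward(self, x):
        return self.net(x)  # (batch, N_FINGERS) force estimates

# Smoke test on random data standing in for calibrated EMG windows.
model = ForceDecoder()
emg = torch.randn(4, N_CHANNELS, WINDOW)
print(model(emg).shape)  # torch.Size([4, 5])
```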
- Real-Time Gesture Recognition with Virtual Glove Markers (arXiv, 2022-07-06)
A real-time computer vision-based human-computer interaction tool for gesture recognition applications is proposed.
The system would be effective in real-time applications including social interaction through telepresence and rehabilitation.
- The Gesture Authoring Space: Authoring Customised Hand Gestures for Grasping Virtual Objects in Immersive Virtual Environments (arXiv, 2022-07-03)
This work proposes a hand gesture authoring tool for object-specific grab gestures, allowing virtual objects to be grabbed as in the real world.
The presented solution uses template matching for gesture recognition and requires no technical knowledge to design and create custom-tailored hand gestures.
The study showed that gestures created with the proposed approach are perceived by users as a more natural input modality than the others.
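Since the tool is said to rely on template matching, per-frame recognition can be sketched as a nearest-template search over stored joint configurations; the pose representation and threshold below are assumptions of this illustration:

```python
import numpy as np

def match_gesture(pose, templates, threshold=0.15):
    """Return the name of the closest stored template, or None if nothing is
    within the distance threshold. `pose` and each template are (n_joints, 3)
    arrays of normalized joint positions (an assumed representation)."""
    best_name, best_dist = None, np.inf
    for name, template in templates.items():
        dist = np.linalg.norm(pose - template) / pose.size
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# Templates would be authored by demonstration, then matched every frame.
templates = {"pinch": np.zeros((21, 3)), "power_grab": np.full((21, 3), 0.1)}
print(match_gesture(np.zeros((21, 3)), templates))  # -> "pinch"
```

Authoring a new grab gesture then amounts to recording one more template, which is why no technical knowledge is required of the designer.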
- Snapture -- A Novel Neural Architecture for Combined Static and Dynamic Hand Gesture Recognition (arXiv, 2022-05-28)
We propose a novel hybrid hand gesture recognition system.
Our architecture enables learning both static and dynamic gestures.
Our work contributes both to gesture recognition research and machine learning applications for non-verbal communication with robots.
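A combined static-and-dynamic recognizer in this spirit (though not necessarily Snapture's exact architecture) could fuse a pose encoder for a hand-shape snapshot with a recurrent encoder for the motion sequence; all dimensions are illustrative:

```python
import torch
from torch import nn

class HybridGestureNet(nn.Module):
    """Fuse a static hand-shape snapshot with the dynamic motion sequence."""
    def __init__(self, n_joints=21, n_classes=8):
        super().__init__()
        self.static_enc = nn.Sequential(nn.Linear(n_joints * 3, 64), nn.ReLU())
        self.dynamic_enc = nn.LSTM(input_size=n_joints * 3, hidden_size=64,
                                   batch_first=True)
        self.head = nn.Linear(64 + 64, n_classes)

    def forward(self, snapshot, sequence):
        s = self.static_enc(snapshot)            # (batch, 64) hand shape
        _, (h, _) = self.dynamic_enc(sequence)   # final LSTM hidden state
        return self.head(torch.cat([s, h[-1]], dim=-1))

net = HybridGestureNet()
snap = torch.rand(4, 63)       # one flattened 21-joint pose
seq = torch.rand(4, 30, 63)    # a 30-frame trajectory of the same joints
print(net(snap, seq).shape)    # torch.Size([4, 8])
```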
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition (arXiv, 2021-03-16)
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for addressing this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
- 3D dynamic hand gestures recognition using the Leap Motion sensor and convolutional neural networks (arXiv, 2020-03-03)
We present a method for the recognition of a set of non-static gestures acquired through the Leap Motion sensor.
The acquired gesture information is converted into color images, in which the variation of hand joint positions during the gesture is projected onto a plane.
The gestures are then classified using a deep Convolutional Neural Network (CNN).
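That encoding, projecting the variation of joint positions onto a plane as a color image and classifying the image with a CNN, can be sketched as follows; the image size, coloring scheme, and network are assumptions of this illustration:

```python
import numpy as np
import torch
from torch import nn

def gesture_to_image(traj, size=64):
    """Render a (T, n_joints, 3) joint trajectory as an RGB image: x/y pick
    the pixel and the time index drives the color, so motion becomes a
    colored trace on the plane (one possible reading of the encoding)."""
    img = np.zeros((size, size, 3), dtype=np.float32)
    t_steps = len(traj)
    xy = traj[..., :2]
    xy = (xy - xy.min()) / (xy.max() - xy.min() + 1e-9)  # fit the image plane
    for t, frame in enumerate(xy):
        f = t / max(t_steps - 1, 1)
        color = np.array([f, 0.5, 1.0 - f])  # early frames blue, late red
        for x, y in frame:
            img[int(y * (size - 1)), int(x * (size - 1))] = color
    return img

# A small CNN over such images (illustrative, not the paper's network).
cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 5),  # e.g. five gesture classes
)

traj = np.random.rand(30, 21, 3)   # stand-in for a Leap Motion capture
img = gesture_to_image(traj)
logits = cnn(torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0))
print(logits.shape)  # torch.Size([1, 5])
```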
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor (arXiv, 2020-01-31)
It is crucial that the machine be able to recognize the user's emotional state with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
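The title's pipeline, a deep convolutional autoencoder for facial features feeding a support vector regressor for the continuous emotion value, can be sketched as below; the input size, latent width, and training details are placeholders, not the paper's configuration:

```python
import numpy as np
import torch
from torch import nn
from sklearn.svm import SVR

class ConvAutoencoder(nn.Module):
    """Compress a 64x64 grayscale face to a latent code and reconstruct it.
    After unsupervised training, the bottleneck serves as the feature vector."""
    def __init__(self, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(16 * 16 * 16, latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 16 * 16 * 16), nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stand-in data: face crops and continuous valence labels in [-1, 1].
faces = torch.rand(20, 1, 64, 64)
valence = np.random.uniform(-1, 1, 20)

ae = ConvAutoencoder()              # (reconstruction training loop omitted)
with torch.no_grad():
    feats = ae.encoder(faces).numpy()
svr = SVR(kernel="rbf").fit(feats, valence)
print(svr.predict(feats[:3]))       # continuous emotion estimates
```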
This list is automatically generated from the titles and abstracts of the papers on this site.