Gestop : Customizable Gesture Control of Computer Systems
- URL: http://arxiv.org/abs/2010.13197v1
- Date: Sun, 25 Oct 2020 19:13:01 GMT
- Title: Gestop : Customizable Gesture Control of Computer Systems
- Authors: Sriram Krishna, Nishant Sinha
- Abstract summary: Gestop is a framework that learns to detect gestures from demonstrations and is customizable by end-users.
It enables users to interact with computers in real time using gestures, requiring only an RGB camera.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The established way of interfacing with most computer systems is a mouse and keyboard. Hand gestures are an intuitive and effective touchless way to interact with computer systems. However, hand-gesture-based systems have seen low adoption among end-users, primarily due to the numerous technical hurdles in detecting in-air gestures accurately. This paper presents Gestop, a framework developed to bridge this gap. The framework learns to detect gestures from demonstrations, is customizable by end-users, and enables real-time, gesture-based interaction with computers equipped with only RGB cameras.
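To make the described pipeline concrete, below is a minimal sketch of the kind of demonstration-based, RGB-only recognizer the abstract outlines: per-frame hand landmarks are matched against templates averaged from user demonstrations. The landmark extraction via MediaPipe Hands, the template files, gesture names, and the acceptance threshold are illustrative assumptions, not Gestop's actual implementation.

```python
# Minimal sketch of a demonstration-based, RGB-only gesture recognizer.
# MediaPipe Hands, the template files, gesture names, and the 0.5
# threshold are all assumptions, not Gestop's actual code.
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)

def landmark_vector(hand_landmarks):
    """Flatten 21 (x, y, z) hand landmarks into a normalized descriptor."""
    pts = np.array([[p.x, p.y, p.z] for p in hand_landmarks.landmark])
    pts -= pts[0]                          # wrist-relative coordinates
    scale = np.linalg.norm(pts, axis=1).max() or 1.0
    return (pts / scale).ravel()           # scale-invariant, 63-dim

# One averaged vector per user-demonstrated gesture, recorded earlier
# (hypothetical files and names).
templates = {"swipe_left": np.load("swipe_left.npy"),
             "thumbs_up": np.load("thumbs_up.npy")}

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        vec = landmark_vector(result.multi_hand_landmarks[0])
        name, dist = min(((g, np.linalg.norm(vec - t))
                          for g, t in templates.items()),
                         key=lambda x: x[1])
        if dist < 0.5:                     # assumed acceptance threshold
            print("detected:", name)       # a real system fires an action here
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```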
Related papers
- ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis [50.69464138626748]
We present ConvoFusion, a diffusion-based approach for multi-modal gesture synthesis.
Our method proposes two guidance objectives that allow the users to modulate the impact of different conditioning modalities.
Our method is versatile: it can be trained to generate either monologue gestures or conversational gestures.
arXiv Detail & Related papers (2024-03-26T17:59:52Z)
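The modality-modulating guidance in the ConvoFusion summary above can be pictured as classifier-free-guidance-style mixing of conditional denoiser outputs. The sketch below is a hedged reading under that assumption; the denoiser signature, the two modalities, and the weights are hypothetical, not ConvoFusion's actual objectives.

```python
# Hypothetical two-modality guidance mix; `eps` is a diffusion denoiser
# eps(x_t, t, speech, text) where passing None drops a conditioning signal.
def guided_eps(eps, x_t, t, speech, text, w_speech=2.0, w_text=1.0):
    e_uncond = eps(x_t, t, None, None)     # unconditional prediction
    e_speech = eps(x_t, t, speech, None)   # speech-conditioned prediction
    e_text = eps(x_t, t, None, text)       # text-conditioned prediction
    # Each weight scales its modality's push away from the unconditional
    # prediction, letting a user modulate that modality's impact.
    return (e_uncond
            + w_speech * (e_speech - e_uncond)
            + w_text * (e_text - e_uncond))
```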
- GestureGPT: Toward Zero-Shot Free-Form Hand Gesture Understanding with Large Language Model Agents [35.48323584634582]
We introduce GestureGPT, a free-form hand gesture understanding framework that mimics human gesture understanding procedures.
Our framework leverages multiple Large Language Model agents to manage and synthesize gesture and context information.
We validated our framework offline under two real-world scenarios: smart home control and online video streaming.
arXiv Detail & Related papers (2023-10-19T15:17:34Z)
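As a rough illustration of the GestureGPT entry's "LLM agents that manage and synthesize gesture and context information," the sketch below folds a natural-language gesture description and the interaction context into one prompt and asks a text-completion backend to ground the gesture to an interface function. The prompt wording and the `complete` callable are assumptions; the actual multi-agent design is more elaborate.

```python
# Hypothetical grounding step: gesture description + context -> one of the
# exposed interface functions. `complete` is any text-completion callable.
def ground_gesture(complete, gesture_desc, context, functions):
    prompt = (
        f"A user made this hand gesture: {gesture_desc}\n"
        f"Current context: {context}\n"
        f"Available interface functions: {', '.join(functions)}\n"
        "Answer with the single function name the user most likely intends."
    )
    return complete(prompt).strip()

# e.g., with any LLM bound to `complete`:
# ground_gesture(complete, "index finger rotating clockwise",
#                "video is playing", ["seek_forward", "pause", "volume_up"])
```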
- On-device Real-time Custom Hand Gesture Recognition [5.3581349005036465]
We present a user-friendly framework that lets users easily customize and deploy their own gesture recognition pipeline.
Our framework provides a pre-trained single-hand embedding model that can be fine-tuned for custom gesture recognition.
We also offer a low-code solution to train and deploy the custom gesture recognition model.
arXiv Detail & Related papers (2023-09-19T18:05:14Z)
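The "fine-tune a pre-trained embedding" recipe in the entry above typically reduces to training a small classification head on frozen features. Below is a hedged PyTorch sketch under that assumption, with a stand-in embedder and made-up sizes rather than the paper's actual model.

```python
# Hedged sketch: train only a small head on top of a frozen, pre-trained
# hand-embedding model. The stand-in embedder, sizes, and data are assumed.
import torch
import torch.nn as nn

EMB_DIM, NUM_GESTURES = 128, 5                   # assumed sizes

embedder = nn.Linear(63, EMB_DIM)                # stand-in for the real model
for p in embedder.parameters():
    p.requires_grad = False                      # keep the embedding frozen

head = nn.Linear(EMB_DIM, NUM_GESTURES)          # only the head is trained
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A handful of user-recorded landmark examples per custom gesture
# (random tensors here stand in for real recordings).
x = torch.randn(40, 63)
y = torch.randint(0, NUM_GESTURES, (40,))

for _ in range(100):                             # few-shot fine-tuning loop
    opt.zero_grad()
    loss = loss_fn(head(embedder(x)), y)
    loss.backward()
    opt.step()
```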
- From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces [66.85108822706489]
This paper focuses on creating agents that interact with the digital world using the same conceptual interface that humans commonly use.
It is possible for such agents to outperform human crowdworkers on the MiniWob++ benchmark of GUI-based instruction following tasks.
arXiv Detail & Related papers (2023-05-31T23:39:18Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
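One way to picture the 500 ms, five-finger feature step in the capacitive-sensing entry above: window the five channels and reduce each to a few summary statistics for a downstream classifier. The sampling rate and the specific three features below are assumptions; the summary does not spell them out.

```python
# Assumed windowing and feature extraction for 5-channel capacitive signals.
import numpy as np

FS = 100                                   # assumed sample rate (Hz)
WIN = FS // 2                              # 500 ms window

def window_features(signals):
    """signals: (WIN, 5) array, one column per finger.

    Returns per-finger mean, range, and slope -- three features per
    finger that a classifier could map to a gesture (the paper's exact
    features are not specified in this summary).
    """
    mean = signals.mean(axis=0)
    rng = signals.max(axis=0) - signals.min(axis=0)
    slope = (signals[-1] - signals[0]) / WIN
    return np.concatenate([mean, rng, slope])   # 15-dim feature vector

feats = window_features(np.random.rand(WIN, 5))  # e.g. one live window
```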
- InternGPT: Solving Vision-Centric Tasks by Interacting with ChatGPT Beyond Language [82.92236977726655]
InternGPT stands for interaction, nonverbal, and chatbots.
We present an interactive visual framework named InternGPT, or iGPT for short.
arXiv Detail & Related papers (2023-05-09T17:58:34Z)
- GesSure -- A Robust Face-Authentication enabled Dynamic Gesture Recognition GUI Application [1.3649494534428745]
This paper aims to design a robust, face-verification-enabled gesture recognition system.
We use meaningful and relevant gestures for task operation, resulting in a better user experience.
Our prototype successfully executes context-dependent tasks such as save, print, video-player control, and exit, as well as context-free operating-system tasks such as sleep, shut down, and unlock, all intuitively.
arXiv Detail & Related papers (2022-07-22T12:14:35Z)
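The flow described in the GesSure entry above (face verification gating context-dependent and context-free tasks) can be sketched as a guarded dispatch table. The gesture names, contexts, and task strings below are illustrative assumptions, not GesSure's actual bindings.

```python
# Hypothetical guarded dispatch: face verification gates every action,
# and the gesture-to-task mapping depends on the active context.
CONTEXT_TASKS = {
    "video_player": {"palm": "pause", "fist": "save", "swipe_down": "exit"},
    "desktop": {"palm": "sleep", "fist": "shut_down", "swipe_up": "unlock"},
}

def dispatch(gesture, context, user_verified):
    if not user_verified:          # no authentication, no action
        return None
    return CONTEXT_TASKS.get(context, {}).get(gesture)

print(dispatch("palm", "video_player", user_verified=True))  # -> pause
```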
- Real-Time Gesture Recognition with Virtual Glove Markers [1.8352113484137629]
A real-time computer vision-based human-computer interaction tool for gesture recognition applications is proposed.
The system would be effective in real-time applications including social interaction through telepresence and rehabilitation.
arXiv Detail & Related papers (2022-07-06T14:56:08Z)
- The Gesture Authoring Space: Authoring Customised Hand Gestures for Grasping Virtual Objects in Immersive Virtual Environments [81.5101473684021]
This work proposes a hand-gesture authoring tool for object-specific grab gestures, allowing virtual objects to be grabbed as in the real world.
The presented solution uses template matching for gesture recognition and requires no technical knowledge to design and create custom-tailored hand gestures.
The study showed that gestures created with the proposed approach are perceived by users as a more natural input modality than the alternatives.
arXiv Detail & Related papers (2022-07-03T18:33:33Z)
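Since the entry above says the solution uses template matching and requires no technical knowledge, the authoring step plausibly amounts to recording a short demonstration and averaging it into a template. The sketch below assumes a 63-dimensional landmark representation, frame count, and threshold that are mine, not the paper's.

```python
# Hypothetical authoring step: average a short recording of hand poses
# into a single template; later frames are matched against it by distance.
import numpy as np

def author_gesture(recorded_poses):
    """Collapse e.g. ~30 frames of 63-dim landmark vectors into a template."""
    return np.mean(np.stack(recorded_poses), axis=0)

def matches(pose, template, threshold=0.4):
    """Assumed distance test used at recognition time."""
    return np.linalg.norm(pose - template) < threshold

template = author_gesture([np.random.rand(63) for _ in range(30)])
print(matches(np.random.rand(63), template))
```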
- MotionInput v2.0 supporting DirectX: A modular library of open-source gesture-based machine learning and computer vision methods for interacting and controlling existing software with a webcam [11.120698968989108]
MotionInput v2.0 maps human motion gestures to input operations for existing applications and games.
Three use case areas assisted the development of the modules: creativity software, office and clinical software, and gaming software.
arXiv Detail & Related papers (2021-08-10T08:23:21Z)
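Mapping "human motion gestures to input operations for existing applications," as the MotionInput entry above describes, suggests emitting synthetic OS input events once a gesture is recognized. A minimal sketch using pynput follows; the gesture names and key bindings are assumptions, not MotionInput's actual module mappings.

```python
# Hypothetical gesture -> synthetic keypress mapping, so unmodified
# applications respond as if a key were pressed.
from pynput.keyboard import Controller, Key

keyboard = Controller()

GESTURE_KEYS = {                 # assumed gesture -> key bindings
    "open_palm": Key.space,      # e.g. pause/play in a media app
    "swipe_left": Key.left,
    "swipe_right": Key.right,
}

def emit(gesture):
    key = GESTURE_KEYS.get(gesture)
    if key is not None:
        keyboard.press(key)      # existing software sees a normal keypress
        keyboard.release(key)

emit("swipe_right")
```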
- SHREC 2021: Track on Skeleton-based Hand Gesture Recognition in the Wild [62.450907796261646]
Recognition of hand gestures can be performed directly from the stream of hand skeletons estimated by software.
Despite the recent advancements in gesture and action recognition from skeletons, it is unclear how well the current state-of-the-art techniques can perform in a real-world scenario.
This paper presents the results of the SHREC 2021: Track on Skeleton-based Hand Gesture Recognition in the Wild contest.
arXiv Detail & Related papers (2021-06-21T10:57:49Z)