Effective Gesture Based Framework for Capturing User Input
- URL: http://arxiv.org/abs/2208.00913v1
- Date: Mon, 1 Aug 2022 14:58:17 GMT
- Title: Effective Gesture Based Framework for Capturing User Input
- Authors: Pabbathi Sri Charan, Saksham Gupta, Satvik Agrawal, Gadupudi Sahithi
Sindhu
- Abstract summary: Users of virtual keyboards can type on any surface as if it were a keyboard thanks to sensor technology and artificial intelligence.
A camera captures keyboard images and finger movements, which together act as a virtual keyboard.
A visible virtual mouse that accepts finger coordinates as input is also described in this study.
- Score: 0.4588028371034407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computers today aren't confined just to desktops; laptops and mobile
phones offer the same computing capabilities. However, one input device
that hasn't changed in the last 50 years is the QWERTY keyboard. Users of
virtual keyboards can type on any surface as if it were a keyboard thanks to
sensor technology and artificial intelligence. In this research, we apply image
processing to create an application that renders a computer keyboard virtually,
using a novel framework that detects hand gestures with high accuracy while
remaining sustainable and financially viable. A camera captures images of the
keyboard and of finger movements, which together act as a virtual keyboard. In
addition, a visible virtual mouse that accepts finger coordinates as input is
also described in this study. This system has the direct benefits of reducing
peripheral cost, reducing the electronic waste generated by external devices,
and providing accessibility to people who cannot use a traditional keyboard and
mouse.
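The core camera-to-keystroke step described above can be illustrated as a coordinate-to-key lookup: once a fingertip position has been detected in the camera frame, it is mapped onto a QWERTY grid overlaid on the typing surface. The grid geometry, offsets, and function below are illustrative assumptions for a sketch, not the authors' implementation.

```python
# Map a detected fingertip coordinate (in pixels) onto a virtual QWERTY grid.
# Grid origin and key size are hypothetical calibration values.

QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def fingertip_to_key(x, y, origin=(100, 200), key_w=60, key_h=60):
    """Return the key under pixel (x, y), or None if outside the grid."""
    row = (y - origin[1]) // key_h
    if 0 <= row < len(QWERTY_ROWS):
        # Middle and bottom rows are offset by half a key, as on a real keyboard.
        col = (x - origin[0] - row * key_w // 2) // key_w
        if 0 <= col < len(QWERTY_ROWS[row]):
            return QWERTY_ROWS[row][col]
    return None
```

In a full pipeline, a hand-tracking model would supply the fingertip pixel each frame, and a tap event (e.g., a dip in fingertip height) would trigger the lookup.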
Related papers
- Digitizing Touch with an Artificial Multimodal Fingertip [51.7029315337739]
Humans and robots both benefit from using touch to perceive and interact with the surrounding environment.
Here, we describe several conceptual and technological innovations to improve the digitization of touch.
These advances are embodied in an artificial finger-shaped sensor with advanced sensing capabilities.
arXiv Detail & Related papers (2024-11-04T18:38:50Z)
- TapType: Ten-finger text entry on everyday surfaces via Bayesian inference [32.33746932895968]
TapType is a mobile text entry system for full-size typing on passive surfaces.
From the inertial sensors inside a band on either wrist, TapType decodes and relates surface taps to a traditional QWERTY keyboard layout.
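The Bayesian decoding idea behind TapType can be illustrated with a toy posterior update: a noisy tap-location likelihood is combined with a character-frequency prior to rank candidate keys. The key centers, frequencies, and noise width below are made-up illustrative values, not the paper's model.

```python
import math

def decode_tap(tap_x, key_centers, prior, sigma=0.8):
    """Rank candidate keys for one tap via Bayes' rule.

    Likelihood: Gaussian in the tap's horizontal offset from each key center.
    Prior: unigram character frequencies. Returns (key, posterior) pairs,
    sorted from most to least probable.
    """
    posterior = {}
    for key, cx in key_centers.items():
        likelihood = math.exp(-((tap_x - cx) ** 2) / (2 * sigma ** 2))
        posterior[key] = likelihood * prior.get(key, 1e-6)
    z = sum(posterior.values())  # normalize so posteriors sum to 1
    return sorted(((k, p / z) for k, p in posterior.items()),
                  key=lambda kp: -kp[1])

# Toy example: a tap lands between "e" and "r", but "e" is far more frequent,
# so the prior pulls the decision toward "e".
centers = {"e": 2.0, "r": 3.0, "t": 4.0}
prior = {"e": 0.13, "r": 0.06, "t": 0.09}
ranking = decode_tap(2.6, centers, prior)
```

A real decoder would also condition on preceding characters (a language model) rather than a unigram prior alone.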
arXiv Detail & Related papers (2024-10-08T12:58:31Z)
- Improve accessibility for Low Vision and Blind people using Machine Learning and Computer Vision [0.0]
This project explores how machine learning and computer vision could be utilized to improve accessibility for people with visual impairments.
This project will concentrate on building a mobile application that helps blind people to orient in space by receiving audio and haptic feedback.
arXiv Detail & Related papers (2024-03-24T21:19:17Z)
- Typing on Any Surface: A Deep Learning-based Method for Real-Time Keystroke Detection in Augmented Reality [4.857109990499532]
Mid-air keyboard interfaces, wireless keyboards, and voice input either suffer from poor ergonomic design and limited accuracy, or are simply embarrassing to use in public.
This paper proposes and validates a deep-learning based approach, that enables AR applications to accurately predict keystrokes from the user perspective RGB video stream.
A two-stage model, combining an off-the-shelf hand-landmark extractor with a novel adaptive Convolutional Recurrent Neural Network (C-RNN), was trained.
arXiv Detail & Related papers (2023-08-31T23:58:25Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
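Extracting a compact feature set from a short window of per-finger capacitive samples might look like the sketch below. The three features chosen (mean level, peak-to-peak range, overall slope) are an illustrative assumption, not the features identified in the paper.

```python
def window_features(samples):
    """Summarize one finger's signal window with three simple features:
    mean level, peak-to-peak range, and overall slope (last minus first)."""
    mean = sum(samples) / len(samples)
    rng = max(samples) - min(samples)
    slope = samples[-1] - samples[0]
    return (mean, rng, slope)

def featurize_hand(channels):
    """channels: dict mapping finger name -> samples from one 500 ms window.
    Returns a per-finger feature tuple ready for a downstream classifier."""
    return {finger: window_features(sig) for finger, sig in channels.items()}
```

A gesture classifier would then consume the per-finger feature tuples rather than the raw time series.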
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
- Rotating without Seeing: Towards In-hand Dexterity through Touch [43.87509744768282]
We present Touch Dexterity, a new system that performs in-hand object rotation using touch alone, without seeing the object.
Instead of relying on precise tactile sensing in a small region, we introduce a new system design using dense binary force sensors (touch or no touch) overlaying one side of the whole robot hand.
We train an in-hand rotation policy with reinforcement learning on diverse objects in simulation. Relying on touch-only sensing, we can deploy the policy directly on a real robot hand and rotate novel objects that were not present in training.
arXiv Detail & Related papers (2023-03-20T05:38:30Z)
- PRNU Based Source Camera Identification for Webcam and Smartphone Videos [137.6408511310322]
This communication describes an application of image forensics in which camera sensor fingerprints are used to identify the source camera (SCI: Source Camera Identification) of webcam and smartphone videos.
arXiv Detail & Related papers (2022-01-27T18:57:14Z)
- MotionInput v2.0 supporting DirectX: A modular library of open-source gesture-based machine learning and computer vision methods for interacting and controlling existing software with a webcam [11.120698968989108]
MotionInput v2.0 maps human motion gestures to input operations for existing applications and games.
Three use case areas assisted the development of the modules: creativity software, office and clinical software, and gaming software.
arXiv Detail & Related papers (2021-08-10T08:23:21Z)
- TypeNet: Deep Learning Keystroke Biometrics [77.80092630558305]
We introduce TypeNet, a Recurrent Neural Network trained with a moderate number of keystrokes per identity.
With 5 gallery sequences and test sequences of length 50, TypeNet achieves state-of-the-art keystroke biometric authentication performance.
Our experiments demonstrate a moderate increase in error with up to 100,000 subjects, demonstrating the potential of TypeNet to operate at an Internet scale.
arXiv Detail & Related papers (2021-01-14T12:49:09Z)
- Gestop: Customizable Gesture Control of Computer Systems [0.3553493344868413]
Gestop is a framework that learns to detect gestures from demonstrations and is customizable by end-users.
It enables users to interact in real-time with computers having only RGB cameras, using gestures.
arXiv Detail & Related papers (2020-10-25T19:13:01Z)
- OmniTact: A Multi-Directional High Resolution Touch Sensor [109.28703530853542]
Existing tactile sensors are either flat, have small sensitive fields or only provide low-resolution signals.
We introduce OmniTact, a multi-directional high-resolution tactile sensor.
We evaluate the capabilities of OmniTact on a challenging robotic control task.
arXiv Detail & Related papers (2020-03-16T01:31:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.