TapType: Ten-finger text entry on everyday surfaces via Bayesian inference
- URL: http://arxiv.org/abs/2410.06001v1
- Date: Tue, 8 Oct 2024 12:58:31 GMT
- Title: TapType: Ten-finger text entry on everyday surfaces via Bayesian inference
- Authors: Paul Streli, Jiaxi Jiang, Andreas Fender, Manuel Meier, Hugo Romat, Christian Holz
- Abstract summary: TapType is a mobile text entry system for full-size typing on passive surfaces.
From the inertial sensors inside a band on either wrist, TapType decodes and relates surface taps to a traditional QWERTY keyboard layout.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite the advent of touchscreens, typing on physical keyboards remains most efficient for entering text, because users can leverage all fingers across a full-size keyboard for convenient typing. As users increasingly type on the go, text input on mobile and wearable devices has had to compromise on full-size typing. In this paper, we present TapType, a mobile text entry system for full-size typing on passive surfaces--without an actual keyboard. From the inertial sensors inside a band on either wrist, TapType decodes and relates surface taps to a traditional QWERTY keyboard layout. The key novelty of our method is to predict the most likely character sequences by fusing the finger probabilities from our Bayesian neural network classifier with the characters' prior probabilities from an n-gram language model. In our online evaluation, participants on average typed 19 words per minute with a character error rate of 0.6% after 30 minutes of training. Expert typists thereby consistently achieved more than 25 WPM at a similar error rate. We demonstrate applications of TapType in mobile use around smartphones and tablets, as a complement to interaction in situated Mixed Reality outside visual control, and as an eyes-free mobile text input method using an audio feedback-only interface.
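The core decoding idea, fusing per-tap character likelihoods from a classifier with prior probabilities from a language model, can be sketched as a small beam search. Everything below is an illustrative stand-in: `KEYS`, `tap_likelihoods`, and `BIGRAM` are tiny hand-made tables, whereas the actual system uses a Bayesian neural network over wrist IMU features and a full n-gram language model.

```python
import math

KEYS = ["c", "a", "t"]

def tap_likelihoods(tap_index):
    """Stub for the tap classifier: P(key | tap observation)."""
    table = [
        {"c": 0.6, "a": 0.3, "t": 0.1},  # first tap: probably 'c'
        {"c": 0.2, "a": 0.5, "t": 0.3},  # second tap: probably 'a'
        {"c": 0.1, "a": 0.2, "t": 0.7},  # third tap: probably 't'
    ]
    return table[tap_index]

# Made-up bigram prior P(char | previous char); "^" marks the word start.
BIGRAM = {
    ("^", "c"): 0.5, ("^", "a"): 0.3, ("^", "t"): 0.2,
    ("c", "a"): 0.6, ("c", "c"): 0.1, ("c", "t"): 0.3,
    ("a", "t"): 0.5, ("a", "a"): 0.1, ("a", "c"): 0.4,
    ("t", "a"): 0.4, ("t", "c"): 0.3, ("t", "t"): 0.3,
}

def decode(num_taps, beam_width=3):
    """Beam search over character sequences, fusing classifier
    likelihoods with language-model priors in the log domain."""
    beams = [("^", 0.0)]  # (sequence, log probability)
    for i in range(num_taps):
        lik = tap_likelihoods(i)
        candidates = []
        for seq, logp in beams:
            prev = seq[-1]
            for key in KEYS:
                score = logp + math.log(lik[key]) + math.log(BIGRAM[(prev, key)])
                candidates.append((seq + key, score))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0][1:]  # best sequence, start marker stripped

print(decode(3))  # -> cat
```

Note how the second tap alone favors 'a' only weakly (0.5 vs. 0.3 for 't'); the bigram prior P(a|c) = 0.6 is what pushes the decoder firmly toward "cat", which is the point of the fusion.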
Related papers
- Acoustic Side Channel Attack on Keyboards Based on Typing Patterns
Side-channel attacks on keyboards can bypass security measures in many systems that use keyboards as one of the input devices.
This paper proposes a practical method that accounts for the user's typing pattern in a realistic environment.
Our method achieved an average success rate of 43% across all our case studies when considering real-world scenarios.
arXiv Detail & Related papers (2024-03-13T17:44:15Z)
- OverHear: Headphone based Multi-sensor Keystroke Inference
We develop a keystroke inference framework that leverages both acoustic and accelerometer data from headphones.
We achieve top-5 key prediction accuracy of around 80% for mechanical keyboards and around 60% for membrane keyboards.
Results highlight the effectiveness and limitations of our approach in the context of real-world scenarios.
arXiv Detail & Related papers (2023-11-04T00:48:20Z)
- Typing on Any Surface: A Deep Learning-based Method for Real-Time Keystroke Detection in Augmented Reality
Mid-air keyboard interfaces, wireless keyboards, and voice input either suffer from poor ergonomic design or limited accuracy, or are simply embarrassing to use in public.
This paper proposes and validates a deep-learning based approach, that enables AR applications to accurately predict keystrokes from the user perspective RGB video stream.
A two-stage model was trained, combining an off-the-shelf hand landmark extractor with a novel adaptive Convolutional Recurrent Neural Network (C-RNN).
arXiv Detail & Related papers (2023-08-31T23:58:25Z)
- Effective Gesture Based Framework for Capturing User Input
Thanks to sensor technology and artificial intelligence, virtual keyboards let users type on any surface as if it were a keyboard.
A camera is used to capture keyboard images and finger movements which subsequently acts as a virtual keyboard.
A visible virtual mouse that accepts finger coordinates as input is also described in this study.
arXiv Detail & Related papers (2022-08-01T14:58:17Z)
- Mobile Behavioral Biometrics for Passive Authentication
This work carries out a comparative analysis of unimodal and multimodal behavioral biometric traits.
Experiments are performed over HuMIdb, one of the largest and most comprehensive freely available mobile user interaction databases.
In our experiments, the most discriminative background sensor is the magnetometer, whereas among touch tasks the best results are achieved with keystroke.
arXiv Detail & Related papers (2022-03-14T17:05:59Z)
- X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback
We focus on assistive typing applications in which a user cannot operate a keyboard, but can supply other inputs.
Standard methods train a model on a fixed dataset of user inputs, then deploy a static interface that does not learn from its mistakes.
We investigate a simple idea that would enable such interfaces to improve over time, with minimal additional effort from the user.
arXiv Detail & Related papers (2022-03-04T00:07:20Z)
- TapNet: The Design, Training, Implementation, and Applications of a Multi-Task Learning CNN for Off-Screen Mobile Input
We present the design, training, implementation and applications of TapNet, a multi-task network that detects tapping on the smartphone.
TapNet can jointly learn from data across devices and simultaneously recognize multiple tap properties, including tap direction and tap location.
arXiv Detail & Related papers (2021-02-18T00:45:41Z)
- TypeNet: Deep Learning Keystroke Biometrics
We introduce TypeNet, a Recurrent Neural Network trained with a moderate number of keystrokes per identity.
With 5 gallery sequences and test sequences of length 50, TypeNet achieves state-of-the-art keystroke biometric authentication performance.
Our experiments show only a moderate increase in error with up to 100,000 subjects, demonstrating the potential of TypeNet to operate at Internet scale.
arXiv Detail & Related papers (2021-01-14T12:49:09Z)
- TypeNet: Scaling up Keystroke Biometrics
We first analyze to what extent our method based on a Recurrent Neural Network (RNN) is able to authenticate users when the amount of data per user is scarce.
With 1K users for testing the network, a population size comparable to previous works, TypeNet obtains an equal error rate of 4.8%.
Using the same amount of data per user, as the number of test users is scaled up to 100K, performance degrades by less than 5% relative to the 1K case.
arXiv Detail & Related papers (2020-04-07T18:05:33Z)
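The equal error rate (EER) reported by the TypeNet papers is the operating point where the false rejection rate equals the false acceptance rate. A minimal sketch of how it can be computed by sweeping a decision threshold; the verification scores below are made up purely for illustration.

```python
def equal_error_rate(genuine, impostor):
    """Sweep a threshold over all observed scores and return the operating
    point where the false rejection rate (genuine pairs rejected) is
    closest to the false acceptance rate (impostor pairs accepted)."""
    best_gap, eer = float("inf"), None
    for t in sorted(genuine + impostor):
        frr = sum(1 for s in genuine if s < t) / len(genuine)
        far = sum(1 for s in impostor if s >= t) / len(impostor)
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2
    return eer

genuine = [0.9, 0.8, 0.85, 0.7, 0.6]    # same-user comparison scores
impostor = [0.3, 0.4, 0.55, 0.2, 0.65]  # different-user comparison scores
print(equal_error_rate(genuine, impostor))  # -> 0.2
```

On real score distributions the two error rates rarely cross exactly at a sampled threshold, so the EER is reported as the midpoint at the closest crossing, as done here.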
This list is automatically generated from the titles and abstracts of the papers on this site.