Motion-Based Handwriting Recognition
- URL: http://arxiv.org/abs/2101.06022v1
- Date: Fri, 15 Jan 2021 09:14:10 GMT
- Title: Motion-Based Handwriting Recognition
- Authors: Junshen Kevin Chen, Wanze Xie, Yutong He
- Abstract summary: We design a stylus equipped with a motion sensor and utilize gyroscopic and acceleration sensor readings to perform written-letter classification.
We also explore various data augmentation techniques and their effects, reaching up to 86% accuracy.
- Score: 1.0742675209112622
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We attempt to overcome the restriction of requiring a writing surface for
handwriting recognition. In this study, we design a prototype of a stylus
equipped with a motion sensor and utilize gyroscopic and acceleration sensor
readings to perform written-letter classification using various deep learning
techniques such as CNNs and RNNs. We also explore various data augmentation
techniques and their effects, reaching up to 86% accuracy.
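As a rough illustration of the kind of model the abstract describes, the sketch below shows a small 1D-CNN-plus-RNN classifier over six-channel stylus motion data (three gyroscope plus three accelerometer axes). The 128-sample window, layer sizes, and 26-class output are illustrative assumptions, not the paper's reported architecture.

```python
# Minimal sketch, assuming 6 IMU channels (3-axis gyro + 3-axis accel),
# fixed-length windows of 128 samples, and 26 letter classes; the paper's
# exact architecture and hyperparameters are not reproduced here.
import torch
import torch.nn as nn

class StylusLetterNet(nn.Module):
    def __init__(self, in_channels=6, num_classes=26):
        super().__init__()
        # 1D convolutions extract local motion patterns from the sensor streams.
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # A bidirectional GRU summarizes the temporal evolution of the features.
        self.rnn = nn.GRU(64, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):            # x: (batch, 6, time)
        h = self.conv(x)             # (batch, 64, time/4)
        h = h.transpose(1, 2)        # (batch, time/4, 64)
        _, hidden = self.rnn(h)      # hidden: (2, batch, 64)
        h = torch.cat([hidden[0], hidden[1]], dim=1)  # (batch, 128)
        return self.fc(h)

# Smoke test on a random batch of 8 windows.
logits = StylusLetterNet()(torch.randn(8, 6, 128))
print(logits.shape)  # torch.Size([8, 26])
```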
Related papers
- Air Signing and Privacy-Preserving Signature Verification for Digital Documents [0.0]
The proposed solution, referred to as "Air Signature," involves writing the signature in front of the camera.
The goal is to develop a state-of-the-art method for detecting and tracking gestures and objects in real-time.
arXiv Detail & Related papers (2024-05-17T16:00:10Z) - Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time series signals and identify three features that can represent the five fingers within 500 ms.
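A hedged sketch of what per-finger feature extraction over a 500 ms decision window could look like follows; the 100 Hz sampling rate and the choice of mean, standard deviation, and slope as features are assumptions, since the paper's actual features are not described here.

```python
# Illustrative sketch, assuming 5 capacitive channels sampled at 100 Hz,
# so a 500 ms window is 50 samples; the features below are stand-ins.
import numpy as np

def window_features(signals, fs=100, window_ms=500):
    """signals: (channels, samples) array of capacitive readings."""
    n = int(fs * window_ms / 1000)
    w = signals[:, -n:]                        # most recent 500 ms
    t = np.arange(n) / fs
    mean = w.mean(axis=1)
    std = w.std(axis=1)
    slope = np.polyfit(t, w.T, 1)[0]           # per-channel linear trend
    return np.concatenate([mean, std, slope])  # 3 features per finger

x = np.random.randn(5, 200)      # five fingers, 2 s of fake data
print(window_features(x).shape)  # (15,)
```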
arXiv Detail & Related papers (2023-05-12T17:24:02Z) - Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z) - Automated dysgraphia detection by deep learning with SensoGrip [0.0]
Early detection of dysgraphia allows for an early start of a targeted intervention.
In this work, we investigated fine-grained grading of handwriting capabilities by predicting the SEMS score (between 0 and 12) with deep learning.
Our approach provides accuracy above 99% and a root mean square error below one, with automatic rather than manual feature extraction and selection.
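To make the reported metric concrete: the model outputs a continuous SEMS score, and an RMSE below one means predictions are typically within one scale point of the ground truth. A minimal sketch of the metric (the example values are made up):

```python
# RMSE over predicted vs. true SEMS scores; values here are illustrative only.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

print(rmse([3, 7, 10], [3.4, 6.8, 9.5]))  # ~0.39
```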
arXiv Detail & Related papers (2022-10-14T09:21:27Z) - Letter-level Online Writer Identification [86.13203975836556]
We focus on a novel problem, letter-level online writer-id, which requires only a few trajectories of written letters as identification cues.
A main challenge is that a person often writes a letter in different styles from time to time.
We refer to this problem as the variance of online writing styles (Var-O-Styles).
arXiv Detail & Related papers (2021-12-06T07:21:53Z) - Digitizing Handwriting with a Sensor Pen: A Writer-Independent Recognizer [0.2580765958706854]
This paper presents a writer-independent system that recognizes characters written on plain paper with the use of a sensor-equipped pen.
The pen provides linear acceleration, angular velocity, magnetic field, and the force applied by the user, and acts as a digitizer that transforms the analogue sensor signals into time-series data while writing on regular paper.
We present the results of a convolutional neural network model for letter classification and show that this approach is practical and achieves promising results for writer-independent character recognition.
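One common preprocessing step for such variable-length, multichannel pen recordings is resampling to a fixed length and per-channel normalization before feeding a CNN. The sketch below assumes 10 channels (3 accelerometer, 3 gyroscope, 3 magnetometer, 1 force) and a 256-sample target length; the paper's actual pipeline is not shown here.

```python
# Sketch of a plausible preprocessing step, not the paper's exact pipeline.
import numpy as np

def resample_and_normalize(rec, target_len=256):
    """rec: (channels, samples); returns (channels, target_len), zero-mean/unit-var."""
    c, n = rec.shape
    src = np.linspace(0.0, 1.0, n)
    dst = np.linspace(0.0, 1.0, target_len)
    # Linear interpolation maps each channel onto a common time grid.
    out = np.stack([np.interp(dst, src, rec[i]) for i in range(c)])
    mu = out.mean(axis=1, keepdims=True)
    sd = out.std(axis=1, keepdims=True) + 1e-8
    return (out - mu) / sd

print(resample_and_normalize(np.random.randn(10, 413)).shape)  # (10, 256)
```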
arXiv Detail & Related papers (2021-07-08T09:25:59Z) - Handwritten Digit Recognition using Machine and Deep Learning Algorithms [0.0]
We have performed handwritten digit recognition on the MNIST dataset using Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and Convolutional Neural Network (CNN) models.
Our main objective is to compare the accuracy of the models stated above along with their execution time to get the best possible model for digit recognition.
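A hedged sketch of this accuracy-versus-runtime comparison follows, using scikit-learn's small 8x8 digits dataset as a stand-in for MNIST; the CNN baseline is omitted here since it requires a deep learning framework.

```python
# Comparing classifiers on accuracy and training time; digits dataset is a
# stand-in for MNIST, and hyperparameters are library defaults, not the paper's.
import time
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC()), ("MLP", MLPClassifier(max_iter=500))]:
    t0 = time.perf_counter()
    clf.fit(Xtr, ytr)
    acc = clf.score(Xte, yte)
    print(f"{name}: accuracy={acc:.3f}, train time={time.perf_counter() - t0:.2f}s")
```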
arXiv Detail & Related papers (2021-06-23T18:23:01Z) - Towards an IMU-based Pen Online Handwriting Recognizer [2.6707647984082357]
We present an online handwriting recognition system for word recognition based on inertial measurement units (IMUs).
This is obtained by means of a sensor-equipped pen that provides acceleration, angular velocity, and magnetic forces streamed via Bluetooth.
Our model combines convolutional and bidirectional LSTM networks, and is trained with the Connectionist Temporal Classification loss.
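The summary names the ingredients explicitly (convolutions, bidirectional LSTM, CTC loss), so a minimal sketch of that combination is given below. The 13 input channels, layer sizes, and 26-letter alphabet are assumptions for illustration, not the paper's configuration.

```python
# Minimal conv + BiLSTM model trained with CTC; sizes are illustrative.
import torch
import torch.nn as nn

class IMUWordRecognizer(nn.Module):
    def __init__(self, in_channels=13, num_classes=27):  # 26 letters + CTC blank
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)   # (batch, time, 64)
        h, _ = self.lstm(h)                # (batch, time, 128)
        return self.fc(h).log_softmax(-1)  # per-frame log-probs for CTC

model = IMUWordRecognizer()
x = torch.randn(4, 13, 100)
log_probs = model(x).transpose(0, 1)       # CTC expects (time, batch, classes)
targets = torch.randint(1, 27, (4, 8))     # fake letter indices (0 = blank)
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           torch.full((4,), 100, dtype=torch.long),
                           torch.full((4,), 8, dtype=torch.long))
print(loss.item())
```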
arXiv Detail & Related papers (2021-05-26T09:47:19Z) - SmartPatch: Improving Handwritten Word Imitation with Patch Discriminators [67.54204685189255]
We propose SmartPatch, a new technique that increases the performance of current state-of-the-art methods.
We combine the well-known patch loss with information gathered from the parallel trained handwritten text recognition system.
This leads to a more enhanced local discriminator and results in more realistic and higher-quality generated handwritten words.
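For orientation, a patch-level discriminator of the general kind this loss relies on scores each local image region separately rather than the whole image at once. The sketch below assumes 64x64 grayscale word crops; the actual SmartPatch design also incorporates recognizer features, which are omitted here.

```python
# Rough sketch of a patch discriminator: one real/fake logit per spatial patch.
import torch
import torch.nn as nn

patch_disc = nn.Sequential(
    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 3, padding=1),  # logit map over local patches
)
scores = patch_disc(torch.randn(2, 1, 64, 64))
print(scores.shape)  # torch.Size([2, 1, 16, 16]) -- a grid of patch judgments
```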
arXiv Detail & Related papers (2021-05-21T18:34:21Z) - Skeleton Based Sign Language Recognition Using Whole-body Keypoints [71.97020373520922]
Sign language is used by deaf or speech-impaired people to communicate.
Skeleton-based recognition is becoming popular because it can be further ensembled with RGB-D based methods to achieve state-of-the-art performance.
Inspired by the recent development of whole-body pose estimation (Jin et al., 2020), we propose recognizing sign language based on whole-body keypoints and features.
arXiv Detail & Related papers (2021-03-16T03:38:17Z) - Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
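As a generic illustration of the teacher-student setup, the sketch below uses a plain temperature-scaled distillation loss; SAKDN's actual semantics-aware losses are more elaborate than this.

```python
# Generic knowledge distillation loss: soft targets from the sensor teacher
# combined with hard labels for the video student. T and alpha are assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s, t = torch.randn(8, 10), torch.randn(8, 10)
print(distillation_loss(s, t, torch.randint(0, 10, (8,))).item())
```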
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.