Handwritten Character Recognition from Wearable Passive RFID
- URL: http://arxiv.org/abs/2008.02543v1
- Date: Thu, 6 Aug 2020 09:45:29 GMT
- Title: Handwritten Character Recognition from Wearable Passive RFID
- Authors: Leevi Raivio, Han He, Johanna Virkki, Heikki Huttunen
- Abstract summary: We propose a preprocessing pipeline that fuses the sequence and bitmap representations together.
The data, collected from ten subjects, comprises altogether 7,500 characters.
The proposed model reaches 72% accuracy in experimental tests, which can be considered good accuracy for this challenging dataset.
- Score: 1.3190581566723918
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we study the recognition of handwritten characters from data
captured by a novel wearable electro-textile sensor panel. The data is
collected sequentially, such that we record both the stroke order and the
resulting bitmap. We propose a preprocessing pipeline that fuses the sequence
and bitmap representations together. The data, collected from ten subjects,
comprises altogether 7,500 characters. We also propose a convolutional neural
network architecture, whose novel upsampling structure enables successful use
of conventional ImageNet pretrained networks, despite the small input size of
only 10x10 pixels. The proposed model reaches 72% accuracy in experimental
tests, which can be considered good accuracy for this challenging dataset. Both
the data and the model are released to the public.
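The fusion of the stroke sequence and the resulting bitmap can be sketched as a small preprocessing step. The encoding below is an illustrative assumption, not the paper's exact pipeline: channel 0 holds the binary bitmap of activated cells, while channel 1 records the normalized order in which the 10x10 sensor panel reported them.

```python
import numpy as np

def fuse_sequence_and_bitmap(activations, grid=10):
    """Hypothetical sequence+bitmap fusion for a 10x10 sensor panel.

    `activations` is a list of (row, col) cells in the order they
    were touched. Returns a (2, grid, grid) array: channel 0 is the
    binary bitmap, channel 1 encodes stroke order scaled to (0, 1].
    """
    bitmap = np.zeros((grid, grid), dtype=np.float32)
    order = np.zeros((grid, grid), dtype=np.float32)
    n = len(activations)
    for t, (r, c) in enumerate(activations):
        bitmap[r, c] = 1.0
        # Cells touched more than once keep their latest time stamp.
        order[r, c] = (t + 1) / n
    return np.stack([bitmap, order])

# Toy stroke: a short diagonal drawn from top-left toward bottom-right.
x = fuse_sequence_and_bitmap([(0, 0), (1, 1), (2, 2)])
print(x.shape)  # (2, 10, 10)
```

A two-channel tensor like this can then be fed to a convolutional network; the upsampling structure described above would grow the 10x10 input to the resolution an ImageNet-pretrained backbone expects.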
Related papers
- Mero Nagarikta: Advanced Nepali Citizenship Data Extractor with Deep Learning-Powered Text Detection and OCR [0.0]
This work proposes a robust system using YOLOv8 for accurate text object detection and an OCR algorithm based on Optimized PyTesseract.
The system, implemented within the context of a mobile application, allows for the automated extraction of important textual information.
The tested PyTesseract optimized for Nepali characters outperformed the standard OCR regarding flexibility and accuracy.
arXiv Detail & Related papers (2024-10-08T06:29:08Z)
- Flowmind2Digital: The First Comprehensive Flowmind Recognition and Conversion Approach [57.00892368627367]
Flowcharts and mind maps, collectively known as flowmind, are vital in daily activities, with hand-drawn versions facilitating real-time collaboration.
Existing sketch recognition methods face limitations in practical situations, being field-specific and lacking digital conversion steps.
Our paper introduces the Flowmind2digital method and hdFlowmind dataset to address these challenges.
arXiv Detail & Related papers (2024-01-08T09:05:20Z)
- Persis: A Persian Font Recognition Pipeline Using Convolutional Neural Networks [2.239394800147746]
We introduce the first publicly available datasets in the field of Persian font recognition.
We employ Convolutional Neural Networks (CNN) to address this problem.
We conclude that CNN methods can be used to recognize Persian fonts without the need for additional pre-processing steps.
arXiv Detail & Related papers (2023-10-08T18:07:15Z)
- Writer Recognition Using Off-line Handwritten Single Block Characters [59.17685450892182]
We use personal identity numbers consisting of the six digits of the date of birth, DoB.
We evaluate two recognition approaches, one based on handcrafted features that compute directional measurements, and another based on deep features from a ResNet50 model.
Results show the presence of identity-related information in a handwritten sample as small as the six digits of the DoB.
arXiv Detail & Related papers (2022-01-25T23:04:10Z)
- Learning Co-segmentation by Segment Swapping for Retrieval and Discovery [67.6609943904996]
The goal of this work is to efficiently identify visually similar patterns from a pair of images.
We generate synthetic training pairs by selecting object segments in an image and copy-pasting them into another image.
We show our approach provides clear improvements for artwork details retrieval on the Brueghel dataset.
arXiv Detail & Related papers (2021-10-29T16:51:16Z)
- Training dataset generation for bridge game registration [0.0]
The solution makes it possible to skip the time-consuming processes of manual image collection and labelling of recognised objects.
The YOLOv4 network trained on the generated dataset achieved an efficiency of 99.8% in the cards detection task.
arXiv Detail & Related papers (2021-09-24T10:09:36Z)
- Detecting Handwritten Mathematical Terms with Sensor Based Data [71.84852429039881]
We propose a solution to the UbiComp 2021 Challenge by Stabilo in which handwritten mathematical terms are supposed to be automatically classified.
The input data set contains data of different writers, with label strings constructed from a total of 15 different possible characters.
arXiv Detail & Related papers (2021-09-12T19:33:34Z)
- Object Detection Based Handwriting Localization [2.6641834518599308]
We present an object detection based approach to localize handwritten regions from documents.
The proposed approach is also expected to facilitate other tasks such as handwriting recognition and signature verification.
arXiv Detail & Related papers (2021-06-28T21:25:20Z)
- Print Error Detection using Convolutional Neural Networks [0.0]
We propose a way to generate a print error sample artificially.
Our final trained network achieves 99.83% accuracy in testing.
arXiv Detail & Related papers (2021-04-11T16:30:17Z)
- Data Augmentation for Object Detection via Differentiable Neural Rendering [71.00447761415388]
It is challenging to train a robust object detector when annotated data is scarce.
Existing approaches to tackle this problem include semi-supervised learning that interpolates labeled data from unlabeled data.
We introduce an offline data augmentation method for object detection, which semantically interpolates the training data with novel views.
arXiv Detail & Related papers (2021-03-04T06:31:06Z)
- Using Text to Teach Image Retrieval [47.72498265721957]
We build on the concept of image manifold to represent the feature space of images, learned via neural networks, as a graph.
We augment the manifold samples with geometrically aligned text, thereby using a plethora of sentences to teach us about images.
The experimental results show that the joint embedding manifold is a robust representation, allowing it to be a better basis to perform image retrieval.
arXiv Detail & Related papers (2020-11-19T16:09:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.