Flowmind2Digital: The First Comprehensive Flowmind Recognition and
Conversion Approach
- URL: http://arxiv.org/abs/2401.03742v1
- Date: Mon, 8 Jan 2024 09:05:20 GMT
- Title: Flowmind2Digital: The First Comprehensive Flowmind Recognition and
Conversion Approach
- Authors: Huanyu Liu, Jianfeng Cai, Tingjia Zhang, Hongsheng Li, Siyuan Wang,
Guangming Zhu, Syed Afaq Ali Shah, Mohammed Bennamoun and Liang Zhang
- Abstract summary: Flowcharts and mind maps, collectively known as flowmind, are vital in daily activities, with hand-drawn versions facilitating real-time collaboration.
Existing sketch recognition methods face limitations in practical situations, being field-specific and lacking digital conversion steps.
Our paper introduces the Flowmind2digital method and hdFlowmind dataset to address these challenges.
- Score: 57.00892368627367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Flowcharts and mind maps, collectively known as flowmind, are vital in daily
activities, with hand-drawn versions facilitating real-time collaboration.
However, there's a growing need to digitize them for efficient processing.
Automated conversion methods are essential to overcome manual conversion
challenges. Existing sketch recognition methods face limitations in practical
situations, being field-specific and lacking digital conversion steps. Our
paper introduces the Flowmind2digital method and hdFlowmind dataset to address
these challenges. Flowmind2digital, utilizing neural networks and keypoint
detection, achieves a record 87.3% accuracy on our dataset, surpassing previous
methods by 11.9%. The hdFlowmind dataset, comprising 1,776 annotated flowminds
across 22 scenarios, surpasses existing datasets in scale and scenario coverage.
Additionally, our experiments highlight the importance of simple graphics, which
improve accuracy by 9.3%.
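As a rough illustration of the recognition-and-conversion step described above (not the paper's actual implementation), the sketch below assumes a detector that outputs shape classes with bounding boxes plus connector keypoints, and shows how such detections could be assembled into a digital node/edge diagram; the detection format and all names are hypothetical.

```python
# Minimal sketch: detected shapes become diagram nodes, and detected connector
# keypoints (arrow tail/head) are snapped to the nearest shapes to form edges.
# The detection model is assumed to exist; its output format here is hypothetical.
from dataclasses import dataclass


@dataclass
class ShapeDetection:
    label: str    # e.g. "rectangle", "diamond", "ellipse"
    box: tuple    # (x0, y0, x1, y1) in image coordinates


@dataclass
class ConnectorDetection:
    start: tuple  # (x, y) keypoint at the arrow tail
    end: tuple    # (x, y) keypoint at the arrow head


def _center(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)


def _nearest_shape(point, shapes):
    """Index of the shape whose center is closest to a connector keypoint."""
    px, py = point
    dists = [(px - cx) ** 2 + (py - cy) ** 2
             for cx, cy in (_center(s.box) for s in shapes)]
    return dists.index(min(dists))


def to_digital_graph(shapes, connectors):
    """Turn raw detections into a node/edge structure a diagramming tool can render."""
    nodes = [{"id": i, "type": s.label, "box": s.box} for i, s in enumerate(shapes)]
    edges = [{"source": _nearest_shape(c.start, shapes),
              "target": _nearest_shape(c.end, shapes)} for c in connectors]
    return {"nodes": nodes, "edges": edges}


if __name__ == "__main__":
    shapes = [ShapeDetection("rectangle", (10, 10, 60, 40)),
              ShapeDetection("diamond", (10, 100, 60, 140))]
    connectors = [ConnectorDetection(start=(35, 40), end=(35, 100))]
    print(to_digital_graph(shapes, connectors))
```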
Related papers
- Bridging the Gap Between End-to-End and Two-Step Text Spotting [88.14552991115207]
Bridging Text Spotting is a novel approach that resolves the error accumulation and suboptimal performance issues in two-step methods.
We demonstrate the effectiveness of the proposed method through extensive experiments.
arXiv Detail & Related papers (2024-04-06T13:14:04Z)
- Fully automated landmarking and facial segmentation on 3D photographs [0.0]
The aim of this study was to develop and evaluate an automated cephalometric annotation method using a deep learning-based approach.
Ten landmarks were manually annotated on 2897 3D facial photographs by a single observer.
The workflow was successful in 98.6% of all test cases.
arXiv Detail & Related papers (2023-09-19T09:39:55Z)
- Pruning Distorted Images in MNIST Handwritten Digits [0.0]
We propose a two-stage deep learning approach to recognize handwritten digits.
In the first stage, we create a simple neural network to identify distorted digits within the training set.
In the second stage, we exclude these identified images from the training dataset and proceed to retrain the model using the filtered dataset.
Our experimental results demonstrate the effectiveness of the proposed approach, achieving an accuracy rate of over 99.5% on the testing dataset.
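The two-stage idea above lends itself to a short sketch. The version below uses scikit-learn's small 8x8 digits set as a stand-in for MNIST and flags the training samples the first-stage model misclassifies as a proxy for "distorted" images; the paper's actual selection criterion and network may differ.

```python
# Illustrative two-stage "prune and retrain" pipeline on a small stand-in dataset.
# Stage 1 flags training samples a simple classifier gets wrong (assumed proxy
# for "distorted"); stage 2 retrains on the filtered training set only.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1: fit a simple classifier and flag the training samples it misclassifies.
stage1 = LogisticRegression(max_iter=2000).fit(X_train, y_train)
keep = stage1.predict(X_train) == y_train

# Stage 2: retrain on the pruned training set.
stage2 = LogisticRegression(max_iter=2000).fit(X_train[keep], y_train[keep])

print(f"pruned {len(y_train) - keep.sum()} of {len(y_train)} training samples")
print(f"test accuracy after pruning: {stage2.score(X_test, y_test):.3f}")
```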
arXiv Detail & Related papers (2023-05-26T11:44:35Z)
- TempNet: Temporal Attention Towards the Detection of Animal Behaviour in Videos [63.85815474157357]
We propose an efficient computer vision- and deep learning-based method for the detection of biological behaviours in videos.
TempNet uses an encoder bridge and residual blocks to maintain model performance with a two-stage encoder that processes spatial and then temporal information.
We demonstrate its application to the detection of sablefish (Anoplopoma fimbria) startle events.
arXiv Detail & Related papers (2022-11-17T23:55:12Z)
- Real-time Action Recognition for Fine-Grained Actions and The Hand Wash Dataset [0.0]
A three-stream fusion algorithm is proposed that runs accurately and efficiently in real time on low-powered systems such as a Raspberry Pi.
The algorithm is benchmarked on the UCF-101 and HMDB-51 datasets, achieving accuracies of 92.7% and 64.9%, respectively.
arXiv Detail & Related papers (2022-10-13T22:38:11Z)
- What Stops Learning-based 3D Registration from Working in the Real World? [53.68326201131434]
This work identifies the sources of 3D point cloud registration failures, analyzes the reasons behind them, and proposes solutions.
Ultimately, this translates to a best-practice 3D registration network (BPNet), constituting the first learning-based method able to handle previously-unseen objects in real-world data.
Our model generalizes to real data without any fine-tuning, reaching an accuracy of up to 67% on point clouds of unseen objects obtained with a commercial sensor.
arXiv Detail & Related papers (2021-11-19T19:24:27Z)
- Handwritten Character Recognition from Wearable Passive RFID [1.3190581566723918]
We propose a preprocessing pipeline that fuses the sequence and bitmap representations together.
The data is collected from ten subjects and contains 7,500 characters in total.
The proposed model reaches 72% accuracy in experimental tests, which can be considered good accuracy for this challenging dataset.
arXiv Detail & Related papers (2020-08-06T09:45:29Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single image deblurring is indeed feasible.
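A minimal sketch of the self-supervised reblur idea as summarized here: a network predicts a sharp image, a differentiable blur re-applies a linear-motion kernel, and the loss compares the reblurred estimate to the observed blurry input, so no sharp ground truth is required. The tiny network, fixed horizontal kernel, and random data below are placeholders, not the paper's model.

```python
# Self-supervised reblur sketch: predict sharp image, reblur it with a fixed
# horizontal linear-motion kernel, and match the observed blurry input.
import torch
import torch.nn.functional as F

def linear_motion_kernel(length=9):
    """Horizontal linear-motion blur kernel shaped for conv2d: (1, 1, 1, length)."""
    return torch.ones(1, 1, 1, length) / length

def reblur(sharp, kernel):
    """Differentiable blur so gradients flow back into the deblurring network."""
    pad = kernel.shape[-1] // 2
    return F.conv2d(sharp, kernel, padding=(0, pad))

net = torch.nn.Sequential(                 # stand-in deblurring network
    torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, 3, padding=1))
kernel = linear_motion_kernel()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

blurry = torch.rand(4, 1, 32, 32)          # stand-in batch of blurry crops
for _ in range(10):
    sharp_hat = net(blurry)
    loss = F.l1_loss(reblur(sharp_hat, kernel), blurry)  # self-supervised loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```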
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
- Take an Emotion Walk: Perceiving Emotions from Gaits Using Hierarchical Attention Pooling and Affective Mapping [55.72376663488104]
We present an autoencoder-based approach to classify perceived human emotions from walking styles obtained from videos or motion-captured data.
Given the motion of each joint at each time step, extracted from 3D pose sequences, we hierarchically pool these joint motions in the encoder.
We train the decoder to reconstruct the motions per joint per time step in a top-down manner from the latent embeddings.
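A minimal sketch of the hierarchical pooling idea: per-joint motion features are pooled into body-part features and then into a single body-level latent, from which a decoder reconstructs per-joint, per-timestep motion. The joint grouping, layer sizes, and one-layer decoder are illustrative assumptions, not the paper's architecture.

```python
# Hierarchical pooling autoencoder sketch: joints -> body parts -> body latent,
# decoded back to per-joint, per-timestep motion. Grouping and sizes are made up.
import torch
import torch.nn as nn

PARTS = {"left_arm": [0, 1, 2], "right_arm": [3, 4, 5],
         "legs": [6, 7, 8, 9], "torso": [10, 11]}    # hypothetical joint indices
N_JOINTS, T, LATENT = 12, 48, 32                      # joints, timesteps, latent dim

class HierarchicalPoolAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.joint_enc = nn.Linear(3 * T, 64)         # encode each joint's motion
        self.body_enc = nn.Linear(len(PARTS) * 64, LATENT)
        self.dec = nn.Linear(LATENT, N_JOINTS * 3 * T)

    def forward(self, motion):                        # motion: (B, N_JOINTS, T, 3)
        b = motion.shape[0]
        joint_feat = torch.relu(self.joint_enc(motion.reshape(b, N_JOINTS, -1)))
        # Pool joints into parts, then concatenate part features for the body latent.
        part_feat = [joint_feat[:, idx].mean(dim=1) for idx in PARTS.values()]
        z = self.body_enc(torch.cat(part_feat, dim=1))
        recon = self.dec(z).reshape(b, N_JOINTS, T, 3)
        return recon, z

model = HierarchicalPoolAE()
x = torch.randn(8, N_JOINTS, T, 3)                    # stand-in pose sequences
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)               # reconstruction objective
```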
arXiv Detail & Related papers (2019-11-20T05:04:16Z)