DeepScanner: a Robotic System for Automated 2D Object Dataset Collection with Annotations
- URL: http://arxiv.org/abs/2108.02555v1
- Date: Thu, 5 Aug 2021 12:21:18 GMT
- Title: DeepScanner: a Robotic System for Automated 2D Object Dataset Collection with Annotations
- Authors: Valery Ilin, Ivan Kalinov, Pavel Karpyshev, Dzmitry Tsetserukou
- Abstract summary: We describe the possibility of automated dataset collection using an articulated robot.
The proposed technology reduces the number of pixel errors on a polygonal dataset and the time spent on manual labeling of 2D objects.
- Score: 4.0423807111935295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the proposed study, we describe the possibility of automated dataset collection using an articulated robot. The proposed technology reduces the number of pixel errors on a polygonal dataset and the time spent on manual labeling of 2D objects. The paper describes a novel automatic dataset collection and annotation system, and compares the results of automated and manual dataset labeling. Our approach increases the speed of data labeling 240-fold and improves accuracy 13-fold compared to manual labeling. We also present a comparison of metrics for training a neural network on a manually annotated and an automatically collected dataset.
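The listing gives no implementation detail, but the core of robot-driven polygon labeling is a projection step: with the object-to-camera transform known from the arm's forward kinematics and hand-eye calibration, the object's known 3D outline can be projected into every image to produce a pixel-accurate polygon. Below is a minimal illustrative sketch of that step; all geometry, values, and names are assumptions, not the authors' implementation.

```python
# Illustrative sketch: compute a 2D polygon label by projecting a known
# 3D object outline through a calibrated camera. Not the authors' code.
import numpy as np
import cv2

# Known 3D outline of the object in its own frame (meters) - here a
# 10 cm square plate as a stand-in for a scanned object model.
object_points = np.array([[-0.05, -0.05, 0.0],
                          [ 0.05, -0.05, 0.0],
                          [ 0.05,  0.05, 0.0],
                          [-0.05,  0.05, 0.0]], dtype=np.float64)

# Object-to-camera transform, assumed known from the arm's forward
# kinematics and hand-eye calibration.
rvec = np.array([0.0, 0.0, 0.0])   # rotation as a Rodrigues vector
tvec = np.array([0.0, 0.0, 0.4])   # object 40 cm in front of the camera

# Intrinsics from a standard camera calibration (assumed values).
K = np.array([[900.0,   0.0, 640.0],
              [  0.0, 900.0, 360.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)                 # assume undistorted images

pixels, _ = cv2.projectPoints(object_points, rvec, tvec, K, dist)
polygon = pixels.reshape(-1, 2)    # the 2D polygon annotation, in pixels
print(polygon)
```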
Related papers
- TrajSSL: Trajectory-Enhanced Semi-Supervised 3D Object Detection [59.498894868956306] (2024-09-17)
Pseudo-labeling approaches to semi-supervised learning adopt a teacher-student framework.
We leverage pre-trained motion-forecasting models to generate object trajectories on pseudo-labeled data.
Our approach improves pseudo-label quality in two distinct manners.
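A rough sketch of how trajectory information can gate pseudo-labels in a teacher-student loop; this is a simplified 2D analogue in which a crude constant-position forecast stands in for TrajSSL's pre-trained motion-forecasting models, and `teacher` is a hypothetical callable:

```python
# Confidence- and trajectory-filtered pseudo-labeling (illustrative only).
import numpy as np

def pseudo_label(frames, teacher, conf_thresh=0.7, traj_tol=0.5):
    """Keep teacher detections that clear a confidence threshold and stay
    close to the previous frame's box centers (a stand-in forecast)."""
    labeled = []
    prev_centers = None
    for frame in frames:
        boxes, scores = teacher(frame)      # boxes: (N, 4) as cx, cy, w, h
        boxes = boxes[scores >= conf_thresh]
        if prev_centers is not None and len(boxes) and len(prev_centers):
            # keep boxes whose center lies within traj_tol of some
            # previous-frame center (constant-position "forecast")
            d = np.linalg.norm(boxes[:, None, :2] - prev_centers[None], axis=-1)
            boxes = boxes[d.min(axis=1) <= traj_tol]
        labeled.append((frame, boxes))
        prev_centers = boxes[:, :2] if len(boxes) else prev_centers
    return labeled
```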
- Jacquard V2: Refining Datasets using the Human In the Loop Data Correction Method [8.588472253340859] (2024-02-08)
We propose utilizing a Human-In-The-Loop (HIL) method to enhance dataset quality.
This approach relies on backbone deep learning networks to predict object positions and orientations for robotic grasping.
Images lacking labels are augmented with valid grasp bounding box information, whereas images afflicted by catastrophic labeling errors are completely removed.
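A minimal sketch of that triage logic; `predict_grasps` and `review` are hypothetical stand-ins for the backbone network and the human check:

```python
# Human-in-the-loop dataset correction (illustrative sketch).
def hil_correct(samples, predict_grasps, review):
    kept = []
    for image, labels in samples:
        verdict, boxes = review(image, labels, predict_grasps(image))
        if verdict == "accept":        # existing labels are valid
            kept.append((image, labels))
        elif verdict == "augment":     # unlabeled image: adopt predictions
            kept.append((image, boxes))
        # verdict == "discard": catastrophic labeling error, drop the image
    return kept
```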
- Automated Multimodal Data Annotation via Calibration With Indoor Positioning System [0.0] (2023-12-06)
Our method uses an indoor positioning system (IPS) to produce accurate detection labels for both point clouds and images.
In an experiment, the system annotates objects of interest 261.8 times faster than a human baseline.
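A minimal sketch of the point-cloud side of such calibration-based labeling: every LiDAR point that falls inside the object's IPS-derived 3D box inherits the object's class. Box parameters and frames here are illustrative assumptions:

```python
# Label point-cloud points inside an IPS-derived, yaw-aligned 3D box.
import numpy as np

def label_points(points_world, obj_center, obj_yaw, obj_size, class_id):
    """points_world: (N, 3); obj_size: (l, w, h). Returns (N,) labels:
    class_id inside the box, -1 elsewhere."""
    c, s = np.cos(-obj_yaw), np.sin(-obj_yaw)
    p = points_world - obj_center          # shift into the object frame
    x = c * p[:, 0] - s * p[:, 1]          # rotate by -yaw (yaw-only box)
    y = s * p[:, 0] + c * p[:, 1]
    inside = ((np.abs(x) <= obj_size[0] / 2) &
              (np.abs(y) <= obj_size[1] / 2) &
              (np.abs(p[:, 2]) <= obj_size[2] / 2))
    return np.where(inside, class_id, -1)
```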
- LABELMAKER: Automatic Semantic Label Generation from RGB-D Trajectories [59.14011485494713] (2023-11-20)
This work introduces a fully automated 2D/3D labeling framework that can generate labels for RGB-D scans at an equal (or better) level of accuracy than manual annotation.
We demonstrate the effectiveness of our LabelMaker pipeline by generating significantly better labels for the ScanNet datasets and automatically labelling the previously unlabeled ARKitScenes dataset.
- AutoSynth: Learning to Generate 3D Training Data for Object Point Cloud Registration [69.21282992341007] (2023-09-20)
AutoSynth automatically generates 3D training data for point cloud registration.
We replace the point cloud registration network with a much smaller surrogate network, leading to a 4056.43× speedup.
Our results on TUD-L, LINEMOD and Occluded-LINEMOD show that a neural network trained on our searched dataset yields consistently better performance than the same network trained on the widely used ModelNet40 dataset.
- Automatically Prepare Training Data for YOLO Using Robotic In-Hand Observation and Synthesis [14.034128227585143] (2023-01-04)
We propose combining robotic in-hand observation and data synthesis to enlarge the limited data set collected by the robot.
The collected and synthetic images are combined to train a deep detection neural network.
The results showed that combining observation and synthetic images led to performance comparable to that of manual data preparation.
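The synthesis half of such a pipeline is commonly a masked copy-paste: composite an object crop onto a background and emit the corresponding YOLO-format label. A minimal sketch under that assumption (file names are placeholders, and the object crop is assumed smaller than the background):

```python
# Paste a masked object crop onto a background and emit a YOLO label
# line (class cx cy w h, all normalized). Illustrative sketch.
import random
from PIL import Image

def synthesize(background_path, object_path, class_id=0):
    bg = Image.open(background_path).convert("RGB")
    obj = Image.open(object_path).convert("RGBA")  # alpha = object mask
    x = random.randint(0, bg.width - obj.width)
    y = random.randint(0, bg.height - obj.height)
    bg.paste(obj, (x, y), mask=obj)                # alpha-composited paste
    cx = (x + obj.width / 2) / bg.width
    cy = (y + obj.height / 2) / bg.height
    label = (f"{class_id} {cx:.6f} {cy:.6f} "
             f"{obj.width / bg.width:.6f} {obj.height / bg.height:.6f}")
    return bg, label
```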
- A Comparison of Automatic Labelling Approaches for Sentiment Analysis [1.7205106391379026] (2022-11-05)
The accuracy of supervised machine learning models is strongly related to the quality of the labelled data on which they train.
We have compared three automatic sentiment labelling techniques: TextBlob, Vader, and Afinn.
Results show that the Afinn labelling technique obtains the highest accuracy of 80.17% (DS-1) and 80.05% (DS-2) using a BiLSTM deep learning model.
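The three labelling techniques are all available as Python packages (textblob, vaderSentiment, afinn), so a minimal comparison harness looks like the sketch below; thresholding the scores at zero is a common convention and an assumption here, not necessarily the paper's protocol:

```python
# Compare three automatic sentiment labelling techniques on one text.
from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from afinn import Afinn

vader = SentimentIntensityAnalyzer()
afinn = Afinn()

def auto_labels(text):
    return {
        "textblob": "pos" if TextBlob(text).sentiment.polarity >= 0 else "neg",
        "vader": "pos" if vader.polarity_scores(text)["compound"] >= 0 else "neg",
        "afinn": "pos" if afinn.score(text) >= 0 else "neg",
    }

print(auto_labels("The labeling pipeline works remarkably well."))
```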
- AutoGeoLabel: Automated Label Generation for Geospatial Machine Learning [69.47585818994959] (2022-01-31)
We evaluate a big data processing pipeline to auto-generate labels for remote sensing data.
We utilize the big geo-data platform IBM PAIRS to dynamically generate such labels in dense urban areas.
- Data Augmentation for Object Detection via Differentiable Neural Rendering [71.00447761415388] (2021-03-04)
It is challenging to train a robust object detector when annotated data is scarce.
Existing approaches to tackle this problem include semi-supervised learning that interpolates labeled data from unlabeled data.
We introduce an offline data augmentation method for object detection, which semantically interpolates the training data with novel views.
- Self-Supervised Person Detection in 2D Range Data using a Calibrated Camera [83.31666463259849] (2020-12-16)
We propose a method to automatically generate training labels (called pseudo-labels) for 2D LiDAR-based person detectors.
We show that self-supervised detectors, trained or fine-tuned with pseudo-labels, outperform detectors trained using manual annotations.
Our method is an effective way to improve person detectors during deployment without any additional labeling effort.
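A minimal sketch of this style of cross-modal pseudo-labeling: planar scan points that project into an image-space person box are marked positive. The extrinsics, intrinsics, and `person_boxes` (from any off-the-shelf image detector) are assumed inputs:

```python
# Generate pseudo-labels for a 2D LiDAR scan from camera detections.
import numpy as np

def lidar_pseudo_labels(ranges, angles, T_cam_lidar, K, person_boxes):
    """ranges/angles: (N,) planar scan; T_cam_lidar: (4, 4) extrinsics;
    person_boxes: list of (x1, y1, x2, y2). Returns (N,) bool labels."""
    pts = np.stack([ranges * np.cos(angles),
                    ranges * np.sin(angles),
                    np.zeros_like(ranges),
                    np.ones_like(ranges)], axis=1)  # homogeneous, LiDAR frame
    cam = (T_cam_lidar @ pts.T).T[:, :3]            # points in camera frame
    valid = cam[:, 2] > 0                           # keep points in front
    uv = (K @ cam.T).T
    uv = uv[:, :2] / np.maximum(uv[:, 2:3], 1e-9)   # pinhole projection
    labels = np.zeros(len(ranges), dtype=bool)
    for x1, y1, x2, y2 in person_boxes:
        inside = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
                  (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
        labels |= inside & valid
    return labels
```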