AutoSynth: Learning to Generate 3D Training Data for Object Point Cloud
Registration
- URL: http://arxiv.org/abs/2309.11170v1
- Date: Wed, 20 Sep 2023 09:29:44 GMT
- Title: AutoSynth: Learning to Generate 3D Training Data for Object Point Cloud
Registration
- Authors: Zheng Dang, Mathieu Salzmann
- Abstract summary: AutoSynth automatically generates 3D training data for point cloud registration.
We replace the point cloud registration network with a much smaller surrogate network, leading to a $4056.43\times$ speedup.
Our results on TUD-L, LINEMOD and Occluded-LINEMOD evidence that a neural network trained on our searched dataset yields consistently better performance than the same one trained on the widely used ModelNet40 dataset.
- Score: 69.21282992341007
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the current deep learning paradigm, the amount and quality of training
data are as critical as the network architecture and its training details.
However, collecting, processing, and annotating real data at scale is
difficult, expensive, and time-consuming, particularly for tasks such as 3D
object registration. While synthetic datasets can be created, they require
expertise to design and include a limited number of categories. In this paper,
we introduce a new approach called AutoSynth, which automatically generates 3D
training data for point cloud registration. Specifically, AutoSynth
automatically curates an optimal dataset by exploring a search space
encompassing millions of potential datasets with diverse 3D shapes at a low
cost. To achieve this, we generate synthetic 3D datasets by assembling shape
primitives, and develop a meta-learning strategy to search for the best
training data for 3D registration on real point clouds. For this search to
remain tractable, we replace the point cloud registration network with a much
smaller surrogate network, leading to a $4056.43$ times speedup. We demonstrate
the generality of our approach by implementing it with two different point
cloud registration networks, BPNet and IDAM. Our results on TUD-L, LINEMOD and
Occluded-LINEMOD evidence that a neural network trained on our searched dataset
yields consistently better performance than the same one trained on the widely
used ModelNet40 dataset.
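The dataset-generation idea described in the abstract, assembling simple shape primitives into synthetic objects and sampling point clouds from them, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the primitive types (sphere, box), the surface-sampling scheme, and the normalization are all hypothetical choices for the sketch.

```python
import numpy as np

def sample_sphere(n, radius=1.0, rng=None):
    """Sample n points uniformly on a sphere surface."""
    rng = rng or np.random.default_rng(0)
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return radius * v

def sample_box(n, size=(1.0, 1.0, 1.0), rng=None):
    """Sample n points on the surface of an axis-aligned box."""
    rng = rng or np.random.default_rng(0)
    half = np.asarray(size) / 2.0
    pts = rng.uniform(-half, half, size=(n, 3))
    # Project each point onto a randomly chosen face of the box.
    face_axis = rng.integers(0, 3, size=n)
    face_sign = rng.choice([-1.0, 1.0], size=n)
    pts[np.arange(n), face_axis] = face_sign * half[face_axis]
    return pts

def assemble_shape(n_points=1024, n_primitives=3, rng=None):
    """Assemble a synthetic object from randomly placed primitives."""
    rng = rng or np.random.default_rng(0)
    per = n_points // n_primitives
    parts = []
    for _ in range(n_primitives):
        if rng.random() < 0.5:
            p = sample_sphere(per, radius=rng.uniform(0.3, 0.8), rng=rng)
        else:
            p = sample_box(per, size=rng.uniform(0.4, 1.2, size=3), rng=rng)
        p += rng.uniform(-1.0, 1.0, size=3)  # random translation
        parts.append(p)
    cloud = np.concatenate(parts, axis=0)
    cloud -= cloud.mean(axis=0)          # center at origin
    cloud /= np.abs(cloud).max()         # normalize into the unit cube
    return cloud

cloud = assemble_shape()
print(cloud.shape)  # (1023, 3): 1024 // 3 points per primitive, 3 primitives
```

Varying the number, type, and placement of primitives spans a large space of candidate shapes; the paper's search then selects which region of such a space yields the best training data for registration.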
Related papers
- PointRegGPT: Boosting 3D Point Cloud Registration using Generative Point-Cloud Pairs for Training [90.06520673092702]
We present PointRegGPT, boosting 3D point cloud registration using generative point-cloud pairs for training.
To our knowledge, this is the first generative approach that explores realistic data generation for indoor point cloud registration.
arXiv Detail & Related papers (2024-07-19T06:29:57Z)
- MATE: Masked Autoencoders are Online 3D Test-Time Learners [63.3907730920114]
MATE is the first Test-Time-Training (TTT) method designed for 3D data.
It makes deep networks trained for point cloud classification robust to distribution shifts occurring in test data.
arXiv Detail & Related papers (2022-11-21T13:19:08Z)
- Point-Syn2Real: Semi-Supervised Synthetic-to-Real Cross-Domain Learning for Object Classification in 3D Point Clouds [14.056949618464394]
Object classification using LiDAR 3D point cloud data is critical for modern applications such as autonomous driving.
We propose a semi-supervised cross-domain learning approach that does not rely on manual annotations of point clouds.
We introduce Point-Syn2Real, a new benchmark dataset for cross-domain learning on point clouds.
arXiv Detail & Related papers (2022-10-31T01:53:51Z)
- Self-Supervised Learning with Multi-View Rendering for 3D Point Cloud Analysis [33.31864436614945]
We propose a novel pre-training method for 3D point cloud models.
Our pre-training is self-supervised by a local pixel/point level correspondence loss and a global image/point cloud level loss.
These improved models outperform existing state-of-the-art methods on various datasets and downstream tasks.
arXiv Detail & Related papers (2022-10-28T05:23:03Z)
- What Can be Seen is What You Get: Structure Aware Point Cloud Augmentation [0.966840768820136]
We present novel point cloud augmentation methods to artificially diversify a dataset.
Our sensor-centric methods keep the data structure consistent with the lidar sensor capabilities.
We show that our methods enable the use of very small datasets, saving annotation time, training time and the associated costs.
arXiv Detail & Related papers (2022-06-20T09:10:59Z)
- Continual learning on 3D point clouds with random compressed rehearsal [10.667104977730304]
This work proposes a novel neural network architecture capable of continual learning on 3D point cloud data.
We utilize point cloud structure properties for preserving a heavily compressed set of past data.
arXiv Detail & Related papers (2022-05-16T22:59:52Z)
- DeepSatData: Building large scale datasets of satellite images for training machine learning models [77.17638664503215]
This report presents design considerations for automatically generating satellite imagery datasets for training machine learning models.
We discuss issues faced from the point of view of deep neural network training and evaluation.
arXiv Detail & Related papers (2021-04-28T15:13:12Z)
- Learnable Online Graph Representations for 3D Multi-Object Tracking [156.58876381318402]
We propose a unified and learning based approach to the 3D MOT problem.
We employ a Neural Message Passing network for data association that is fully trainable.
We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.
arXiv Detail & Related papers (2021-04-23T17:59:28Z)
- Generating synthetic photogrammetric data for training deep learning based 3D point cloud segmentation models [0.0]
At I/ITSEC 2019, the authors presented a fully-automated workflow to segment 3D photogrammetric point-clouds/meshes and extract object information.
The ultimate goal is to create realistic virtual environments and provide the necessary information for simulation.
arXiv Detail & Related papers (2020-08-21T18:50:42Z)
- Neural Data Server: A Large-Scale Search Engine for Transfer Learning Data [78.74367441804183]
We introduce Neural Data Server (NDS), a large-scale search engine for finding the most useful transfer learning data to the target domain.
NDS consists of a dataserver which indexes several large popular image datasets, and aims to recommend data to a client.
We show the effectiveness of NDS in various transfer learning scenarios, demonstrating state-of-the-art performance on several target datasets.
arXiv Detail & Related papers (2020-01-09T01:21:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.