TANet++: Triple Attention Network with Filtered Pointcloud on 3D
Detection
- URL: http://arxiv.org/abs/2106.15366v1
- Date: Sat, 26 Jun 2021 16:56:35 GMT
- Title: TANet++: Triple Attention Network with Filtered Pointcloud on 3D
Detection
- Authors: Cong Ma
- Abstract summary: TANet is one of the state-of-the-art 3D object detection methods on the KITTI and JRDB benchmarks.
In this paper, we propose TANet++ to improve performance on 3D detection.
To reduce the negative impact of weak samples, the training strategy first filters the training data, and TANet++ is then trained on the remaining data.
- Score: 7.64943832687184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: TANet is one of the state-of-the-art 3D object detection methods on the KITTI
and JRDB benchmarks; the network contains a Triple Attention module and a
Coarse-to-Fine Regression module to improve the robustness and accuracy of 3D
detection. However, the original input data (point clouds) contain a lot of
noise introduced during collection, which further affects the training of the
model. For example, when an object is far from the robot, the sensor can
hardly obtain enough points for it. If an object is covered by only a few
points and such samples are fed into the model together with normal samples
during training, the detector will struggle to distinguish whether an
individual with few points belongs to an object or to the background. In this
paper, we propose TANet++ to improve performance on 3D detection by adopting a
novel training strategy for TANet. To reduce the negative impact of these weak
samples, the training strategy first filters the training data, and TANet++ is
then trained on the remaining data. Experimental results show that the AP
score of TANet++ is 8.98\% higher than that of TANet on the JRDB benchmark.
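Below is a minimal sketch of the kind of pre-training filtering the abstract describes: weak samples, i.e., annotated objects covered by too few LiDAR points, are dropped before training. The `min_points` threshold, the sample layout, and the helper names are illustrative assumptions, not details taken from the paper (which also handles oriented boxes rather than the axis-aligned ones used here for brevity).

```python
import numpy as np

def points_in_box(points, box):
    """Count LiDAR points falling inside an axis-aligned 3D box.

    points: (N, 3) array of x, y, z coordinates.
    box: (cx, cy, cz, l, w, h) center and size. Axis-aligned for
    simplicity; real benchmark boxes also carry a yaw angle.
    """
    cx, cy, cz, l, w, h = box
    half = np.array([l, w, h]) / 2.0
    inside = np.all(np.abs(points - np.array([cx, cy, cz])) <= half, axis=1)
    return int(inside.sum())

def filter_weak_samples(samples, min_points=10):
    """Keep only annotations supported by at least `min_points` points.

    `samples` is assumed to be a list of dicts holding a frame's point
    cloud and its ground-truth boxes; `min_points` is a hypothetical
    threshold, since the paper's exact filtering rule is not given here.
    """
    filtered = []
    for sample in samples:
        kept_boxes = [
            box for box in sample["gt_boxes"]
            if points_in_box(sample["points"], box) >= min_points
        ]
        if kept_boxes:  # drop frames whose annotations are all weak
            filtered.append({"points": sample["points"],
                             "gt_boxes": kept_boxes})
    return filtered
```

Under this reading, TANet++ would simply be trained on `filter_weak_samples(train_set)` instead of the raw training set, leaving the network architecture itself unchanged.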
Related papers
- AutoSynth: Learning to Generate 3D Training Data for Object Point Cloud
Registration [69.21282992341007]
AutoSynth automatically generates 3D training data for point cloud registration.
We replace the point cloud registration network with a much smaller surrogate network, leading to a $4056.43\times$ speedup.
Our results on TUD-L, LINEMOD and Occluded-LINEMOD show that a neural network trained on our searched dataset yields consistently better performance than the same one trained on the widely used ModelNet40 dataset.
arXiv Detail & Related papers (2023-09-20T09:29:44Z) - Weakly Supervised Point Clouds Transformer for 3D Object Detection [4.723682216326063]
We present a framework for weak supervision of a point cloud transformer used for 3D object detection.
The aim is to decrease the amount of supervision required for training, given the high cost of annotating 3D datasets.
On the challenging KITTI dataset, the experimental results achieve the highest level of average precision.
arXiv Detail & Related papers (2023-09-08T03:56:34Z) - Hierarchical Supervision and Shuffle Data Augmentation for 3D
Semi-Supervised Object Detection [90.32180043449263]
State-of-the-art 3D object detectors are usually trained on large-scale datasets with high-quality 3D annotations.
A natural remedy is to adopt semi-supervised learning (SSL) by leveraging a limited amount of labeled samples and abundant unlabeled samples.
This paper introduces a novel approach of Hierarchical Supervision and Shuffle Data Augmentation (HSSDA), which is a simple yet effective teacher-student framework.
arXiv Detail & Related papers (2023-04-04T02:09:32Z) - MATE: Masked Autoencoders are Online 3D Test-Time Learners [63.3907730920114]
MATE is the first Test-Time-Training (TTT) method designed for 3D data.
It makes deep networks trained for point cloud classification robust to distribution shifts occurring in test data.
arXiv Detail & Related papers (2022-11-21T13:19:08Z) - Teacher-Student Network for 3D Point Cloud Anomaly Detection with Few
Normal Samples [21.358496646676087]
We design a teacher-student structured model for 3D anomaly detection.
Specifically, we use feature space alignment, dimension zoom, and max pooling to extract the features of the point cloud.
Our method only requires very few normal samples to train the student network.
arXiv Detail & Related papers (2022-10-31T12:29:55Z) - Point Discriminative Learning for Unsupervised Representation Learning
on 3D Point Clouds [54.31515001741987]
We propose a point discriminative learning method for unsupervised representation learning on 3D point clouds.
We achieve this by imposing a novel point discrimination loss on the middle-level and global-level point features.
Our method learns powerful representations and achieves new state-of-the-art performance.
arXiv Detail & Related papers (2021-08-04T15:11:48Z) - ST3D: Self-training for Unsupervised Domain Adaptation on 3D
Object Detection [78.71826145162092]
We present a new domain adaptive self-training pipeline, named ST3D, for unsupervised domain adaptation on 3D object detection from point clouds.
Our ST3D achieves state-of-the-art performance on all evaluated datasets and even surpasses fully supervised results on KITTI 3D object detection benchmark.
arXiv Detail & Related papers (2021-03-09T10:51:24Z) - Self-Supervised Pretraining of 3D Features on any Point-Cloud [40.26575888582241]
We present a simple self-supervised pretraining method that can work with any 3D data without 3D registration.
We evaluate our models on 9 benchmarks for object detection, semantic segmentation, and object classification, where they achieve state-of-the-art results and can outperform supervised pretraining.
arXiv Detail & Related papers (2021-01-07T18:55:21Z) - D3Feat: Joint Learning of Dense Detection and Description of 3D Local
Features [51.04841465193678]
We leverage a 3D fully convolutional network for 3D point clouds.
We propose a novel and practical learning mechanism that densely predicts both a detection score and a description feature for each 3D point.
Our method achieves state-of-the-art results in both indoor and outdoor scenarios.
arXiv Detail & Related papers (2020-03-06T12:51:09Z)