Weakly Supervised Point Clouds Transformer for 3D Object Detection
- URL: http://arxiv.org/abs/2309.04105v1
- Date: Fri, 8 Sep 2023 03:56:34 GMT
- Title: Weakly Supervised Point Clouds Transformer for 3D Object Detection
- Authors: Zuojin Tang, Bo Sun, Tongwei Ma, Daosheng Li, Zhenhui Xu
- Abstract summary: We present a framework for weak supervision of a point cloud transformer used for 3D object detection.
The aim is to decrease the amount of supervision needed for training, given the high cost of annotating 3D datasets.
On the challenging KITTI dataset, the experimental results achieve the highest average precision.
- Score: 4.723682216326063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Annotated 3D datasets are required for semantic segmentation and object
detection in scene understanding. In this paper we present a framework for weak
supervision of a point cloud transformer used for 3D object detection. The aim is
to decrease the amount of supervision needed for training, given the high cost of
annotating 3D datasets. We propose an Unsupervised Voting Proposal Module, which
learns randomly preset anchor points and uses a voting network to select
high-quality anchor points; it then distills this information into the student and
teacher networks. For the student network, we apply a ResNet to efficiently extract
local features, although a ResNet alone can lose much global information. To give
the student network an input that combines global and local information, we adopt
the self-attention mechanism of the transformer to extract global features and the
ResNet layers to extract region proposals. The teacher network supervises the
classification and regression of the student network using a model pre-trained on
ImageNet. On the challenging KITTI dataset, the experimental results achieve the
highest average precision compared with the most recent weakly supervised 3D object
detectors.
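
The abstract describes a student network that fuses transformer self-attention (global context) with ResNet-style layers (local features), supervised by a teacher built on an ImageNet-pretrained model. Below is a minimal PyTorch sketch of that global-local fusion and teacher-student distillation. All module names, dimensions, and the 7-value box parameterization (GlobalLocalStudent, distillation_loss) are hypothetical illustrations, not the authors' implementation, and the Unsupervised Voting Proposal Module is omitted.

```python
# Minimal sketch of global-local feature fusion and teacher-student distillation.
# Names and shapes are illustrative assumptions, not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalLocalStudent(nn.Module):
    """Student: self-attention for global context + conv (ResNet-style) local branch."""

    def __init__(self, in_dim=3, feat_dim=128, num_classes=3, num_heads=4):
        super().__init__()
        self.embed = nn.Linear(in_dim, feat_dim)              # per-point embedding
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.local = nn.Sequential(                           # stand-in for ResNet-style local branch
            nn.Conv1d(feat_dim, feat_dim, 1), nn.BatchNorm1d(feat_dim), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, 1),
        )
        self.cls_head = nn.Linear(2 * feat_dim, num_classes)  # classification logits
        self.reg_head = nn.Linear(2 * feat_dim, 7)            # box regression (x, y, z, l, w, h, yaw)

    def forward(self, points):                  # points: (B, N, 3)
        x = self.embed(points)                  # (B, N, C)
        global_feat, _ = self.attn(x, x, x)     # self-attention -> global context
        local_feat = self.local(x.transpose(1, 2)).transpose(1, 2)  # pointwise local features
        fused = torch.cat([global_feat, local_feat], dim=-1).mean(dim=1)  # pooled feature
        return self.cls_head(fused), self.reg_head(fused)


def distillation_loss(student_cls, student_reg, teacher_cls, teacher_reg, T=2.0):
    """Teacher supervises the student's classification (soft targets) and regression."""
    cls_loss = F.kl_div(
        F.log_softmax(student_cls / T, dim=-1),
        F.softmax(teacher_cls / T, dim=-1),
        reduction="batchmean",
    ) * T * T
    reg_loss = F.smooth_l1_loss(student_reg, teacher_reg)
    return cls_loss + reg_loss


if __name__ == "__main__":
    student = GlobalLocalStudent()
    pts = torch.randn(2, 1024, 3)               # a toy batch of point clouds
    s_cls, s_reg = student(pts)
    # In the paper the teacher outputs come from an ImageNet-pretrained branch;
    # random tensors stand in here just to exercise the loss.
    t_cls, t_reg = torch.randn_like(s_cls), torch.randn_like(s_reg)
    print(distillation_loss(s_cls, s_reg, t_cls, t_reg).item())
```

Concatenating the attention output with the pointwise convolutional features is one simple way to realize the stated goal of feeding the student both global and local information; the soft-target KL term lets the teacher guide classification while the smooth L1 term supervises box regression.
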
Related papers
- M&M3D: Multi-Dataset Training and Efficient Network for Multi-view 3D Object Detection [2.5158048364984564]
I proposed a network structure for multi-view 3D object detection using camera-only data and a Bird's-Eye-View map.
My work addresses the current key challenges of domain adaptation and visual data transfer.
My study utilizes 3D information as available semantic information and blends 2D multi-view image features into the visual-language transfer design.
arXiv Detail & Related papers (2023-11-02T04:28:51Z)
- Hierarchical Supervision and Shuffle Data Augmentation for 3D Semi-Supervised Object Detection [90.32180043449263]
State-of-the-art 3D object detectors are usually trained on large-scale datasets with high-quality 3D annotations.
A natural remedy is to adopt semi-supervised learning (SSL) by leveraging a limited amount of labeled samples and abundant unlabeled samples.
This paper introduces a novel approach of Hierarchical Supervision and Shuffle Data Augmentation (HSSDA), which is a simple yet effective teacher-student framework.
arXiv Detail & Related papers (2023-04-04T02:09:32Z)
- Structure Aware and Class Balanced 3D Object Detection on nuScenes Dataset [0.0]
NuTonomy's nuScenes dataset greatly extends commonly used datasets such as KITTI.
The localization precision of the CBGS model is affected by the loss of spatial information in the downscaled feature maps.
We propose to enhance the performance of the CBGS model by designing an auxiliary network that makes full use of the structure information of the 3D point cloud.
arXiv Detail & Related papers (2022-05-25T06:18:49Z)
- Point Discriminative Learning for Unsupervised Representation Learning on 3D Point Clouds [54.31515001741987]
We propose a point discriminative learning method for unsupervised representation learning on 3D point clouds.
We achieve this by imposing a novel point discrimination loss on the middle-level and global-level point features.
Our method learns powerful representations and achieves new state-of-the-art performance.
arXiv Detail & Related papers (2021-08-04T15:11:48Z)
- AttDLNet: Attention-based DL Network for 3D LiDAR Place Recognition [0.6352264764099531]
This paper proposes a novel 3D LiDAR-based deep learning network named AttDLNet.
It exploits an attention mechanism to selectively focus on long-range context and inter-feature relationships.
Results show that the encoder network features are already very descriptive, but adding attention to the network further improves performance.
arXiv Detail & Related papers (2021-06-17T16:34:37Z)
- ST3D: Self-training for Unsupervised Domain Adaptation on 3D Object Detection [78.71826145162092]
We present a new domain adaptive self-training pipeline, named ST3D, for unsupervised domain adaptation on 3D object detection from point clouds.
Our ST3D achieves state-of-the-art performance on all evaluated datasets and even surpasses fully supervised results on KITTI 3D object detection benchmark.
arXiv Detail & Related papers (2021-03-09T10:51:24Z)
- PIG-Net: Inception based Deep Learning Architecture for 3D Point Cloud Segmentation [0.9137554315375922]
We propose an inception-based deep network architecture called PIG-Net that effectively characterizes the local and global geometric details of point clouds.
We perform an exhaustive experimental analysis of the PIG-Net architecture on two state-of-the-art datasets.
arXiv Detail & Related papers (2021-01-28T13:27:55Z)
- Improving Point Cloud Semantic Segmentation by Learning 3D Object Detection [102.62963605429508]
Point cloud semantic segmentation plays an essential role in autonomous driving.
Current 3D semantic segmentation networks focus on convolutional architectures that perform well for well-represented classes.
We propose a novel Detection Aware 3D Semantic Segmentation (DASS) framework that explicitly leverages localization features from an auxiliary 3D object detection task.
arXiv Detail & Related papers (2020-09-22T14:17:40Z)
- Weakly Supervised 3D Object Detection from Point Clouds [27.70180601788613]
3D object detection aims to detect and localize the 3D bounding boxes of objects belonging to specific classes.
Existing 3D object detectors rely on annotated 3D bounding boxes during training, while these annotations could be expensive to obtain and only accessible in limited scenarios.
We propose VS3D, a framework for weakly supervised 3D object detection from point clouds without using any ground truth 3D bounding box for training.
arXiv Detail & Related papers (2020-07-28T03:30:11Z)
- PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding [107.02479689909164]
In this work, we aim at facilitating research on 3D representation learning.
We measure the effect of unsupervised pre-training on a large source set of 3D scenes.
arXiv Detail & Related papers (2020-07-21T17:59:22Z)
- SESS: Self-Ensembling Semi-Supervised 3D Object Detection [138.80825169240302]
We propose SESS, a self-ensembling semi-supervised 3D object detection framework. Specifically, we design a thorough perturbation scheme to enhance generalization of the network on unlabeled and new unseen data.
Our SESS achieves competitive performance compared to the state-of-the-art fully-supervised method by using only 50% labeled data.
arXiv Detail & Related papers (2019-12-26T08:48:04Z)