PU-Transformer: Point Cloud Upsampling Transformer
- URL: http://arxiv.org/abs/2111.12242v1
- Date: Wed, 24 Nov 2021 03:25:35 GMT
- Title: PU-Transformer: Point Cloud Upsampling Transformer
- Authors: Shi Qiu, Saeed Anwar, Nick Barnes
- Abstract summary: We focus on the point cloud upsampling task, which aims to generate dense, high-fidelity point clouds from sparse input data.
Specifically, to exploit the transformer's strong capability of representing features, we develop a new variant of the multi-head self-attention structure.
We demonstrate the outstanding performance of our approach by comparing it with state-of-the-art CNN-based methods on different benchmarks.
- Score: 38.05362492645094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the rapid development of 3D scanners, point clouds are becoming popular
in AI-driven machines. However, point cloud data is inherently sparse and
irregular, causing major difficulties for machine perception. In this work, we
focus on the point cloud upsampling task, which aims to generate dense,
high-fidelity point clouds from sparse input data. Specifically, to exploit the
transformer's strong capability of representing features, we develop a new
variant of the multi-head self-attention structure that enhances both
point-wise and channel-wise relations of the feature map. In addition, we
leverage a positional fusion block to comprehensively capture the local context
of point cloud data, providing more position-related information about the
scattered points. As the first transformer model introduced for point cloud
upsampling, we demonstrate the outstanding performance of our approach by
comparing it with state-of-the-art CNN-based methods on different benchmarks,
both quantitatively and qualitatively.
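The sketch below is only an illustration of the general idea described in the abstract: a multi-head self-attention layer that models relations along both the point dimension (an N x N attention map) and the channel dimension (a C x C attention map) of a point-cloud feature map. It is not the paper's exact attention block or its positional fusion block; the tensor shapes, module name, and the simple additive fusion of the two branches are assumptions made for this sketch.

```python
# Illustrative sketch only: combined point-wise and channel-wise
# multi-head self-attention over a point-cloud feature map.
# Shapes and fusion strategy are assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class PointChannelSelfAttention(nn.Module):
    """Self-attention over N points, with heads split along channels."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        assert dim % num_heads == 0, "dim must be divisible by num_heads"
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) -- B patches of N points with C feature channels.
        B, N, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        # Split channels into heads: (B, heads, N, head_dim).
        def split(t):
            return t.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)

        # Point-wise attention: which points attend to which (N x N map).
        point_attn = torch.softmax(
            q @ k.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        point_out = point_attn @ v  # (B, heads, N, head_dim)

        # Channel-wise attention: how channels relate (head_dim x head_dim map).
        chan_attn = torch.softmax(
            q.transpose(-2, -1) @ k / N ** 0.5, dim=-1)
        chan_out = v @ chan_attn  # (B, heads, N, head_dim)

        # Assumed fusion: simply add the two branches, then project.
        out = (point_out + chan_out).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


if __name__ == "__main__":
    feats = torch.randn(2, 256, 64)  # 2 patches, 256 points, 64 channels
    print(PointChannelSelfAttention(64)(feats).shape)  # torch.Size([2, 256, 64])
```

In the actual model, position-related local context is also injected through a positional fusion block before the attention layers; that component is not covered by this sketch.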
Related papers
- Rendering-Oriented 3D Point Cloud Attribute Compression using Sparse Tensor-based Transformer [52.40992954884257]
3D visualization techniques have fundamentally transformed how we interact with digital content.
The massive size of point cloud data presents significant challenges for data compression.
We propose an end-to-end deep learning framework that seamlessly integrates PCAC with differentiable rendering.
arXiv Detail & Related papers (2024-11-12T16:12:51Z)
- PIVOT-Net: Heterogeneous Point-Voxel-Tree-based Framework for Point Cloud Compression [8.778300313732027]
We propose a heterogeneous point cloud compression (PCC) framework.
We unify typical point cloud representations -- point-based, voxel-based, and tree-based representations -- and their associated backbones.
We augment the framework with a proposed context-aware upsampling for decoding and an enhanced voxel transformer for feature aggregation.
arXiv Detail & Related papers (2024-02-11T16:57:08Z)
- Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif).
PointDif achieves substantial improvements across various real-world datasets on diverse downstream tasks such as classification, segmentation, and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z)
- Patch-Wise Point Cloud Generation: A Divide-and-Conquer Approach [83.05340155068721]
We devise a new 3D point cloud generation framework using a divide-and-conquer approach.
All patch generators are based on learnable priors, which aim to capture the information of geometric primitives.
Experimental results on a variety of object categories from ShapeNet, the most popular point cloud dataset, show the effectiveness of the proposed patch-wise point cloud generation.
arXiv Detail & Related papers (2023-07-22T11:10:39Z)
- NoiseTrans: Point Cloud Denoising with Transformers [4.143032261649984]
We design a novel model, NoiseTrans, which uses transformer encoder architecture for point cloud denoising.
We capture the structural similarity of point clouds with the assistance of the transformer's core self-attention mechanism.
Experiments show that our model outperforms state-of-the-art methods across various datasets and noise settings.
arXiv Detail & Related papers (2023-04-24T04:01:23Z)
- Self-positioning Point-based Transformer for Point Cloud Understanding [18.394318824968263]
Self-Positioning point-based Transformer (SPoTr) is designed to capture both local and global shape contexts with reduced complexity.
SPoTr achieves an accuracy gain of 2.6% over the previous best models on shape classification with ScanObjectNN.
arXiv Detail & Related papers (2023-03-29T04:27:11Z)
- PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers [81.71904691925428]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We also design a new model, called PoinTr, that adopts a transformer encoder-decoder architecture for point cloud completion.
Our method outperforms state-of-the-art methods by a large margin on both the new benchmarks and the existing ones.
arXiv Detail & Related papers (2021-08-19T17:58:56Z)
- SSPU-Net: Self-Supervised Point Cloud Upsampling via Differentiable Rendering [21.563862632172363]
We propose a self-supervised point cloud upsampling network (SSPU-Net) to generate dense point clouds without using ground truth.
To achieve this, we exploit the consistency between the input sparse point cloud and the generated dense point cloud in terms of both their shapes and rendered images.
arXiv Detail & Related papers (2021-08-01T13:26:01Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)