NoiseTrans: Point Cloud Denoising with Transformers
- URL: http://arxiv.org/abs/2304.11812v1
- Date: Mon, 24 Apr 2023 04:01:23 GMT
- Title: NoiseTrans: Point Cloud Denoising with Transformers
- Authors: Guangzhe Hou, Guihe Qin, Minghui Sun, Yanhua Liang, Jie Yan, Zhonghan
Zhang
- Abstract summary: We design a novel model, NoiseTrans, which uses transformer encoder architecture for point cloud denoising.
We capture the structural similarity within point clouds with the aid of the transformer's core self-attention mechanism.
Experiments show that our model outperforms state-of-the-art methods across various datasets and noise settings.
- Score: 4.143032261649984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point clouds obtained from capture devices or 3D reconstruction techniques
are often noisy, which interferes with downstream tasks. The paper aims to recover
the underlying surface of noisy point clouds. We design a novel model, NoiseTrans,
which uses a transformer encoder architecture for point cloud denoising.
Specifically, we capture the structural similarity within point clouds with the
aid of the transformer's core self-attention mechanism. By expressing the noisy
point cloud as a set of unordered vectors, we convert point clouds into point
embeddings and employ a Transformer to generate clean point clouds. To make the
Transformer preserve details when sensing the point cloud, we design Local Point
Attention to prevent the point cloud from being over-smoothed. In addition, we
propose sparse encoding, which enables the Transformer to better perceive the
structural relationships within the point cloud and improves denoising
performance. Experiments show that our model outperforms state-of-the-art
methods across various datasets and noise settings.
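The abstract does not give implementation details, but the pipeline it describes (per-point embeddings, a Transformer encoder over the unordered set, and a locality constraint on attention) can be sketched as follows. This is a minimal PyTorch-style illustration, not the authors' code: the layer sizes, the k-nearest-neighbour attention mask standing in for Local Point Attention, and the displacement-regression head are all assumptions, and the paper's sparse encoding is not modeled.

```python
import torch
import torch.nn as nn


class PointDenoiser(nn.Module):
    """Minimal sketch of a Transformer-encoder point cloud denoiser.

    Layer sizes, the k-NN attention mask (a stand-in for the paper's
    Local Point Attention), and the displacement head are assumptions,
    not the authors' implementation; sparse encoding is not modeled.
    """

    def __init__(self, d_model=128, n_heads=4, n_layers=4, k=16):
        super().__init__()
        self.k = k
        self.n_heads = n_heads
        # Per-point embedding: lift raw xyz coordinates to d_model features.
        self.embed = nn.Sequential(
            nn.Linear(3, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Regress a per-point displacement back towards the clean surface.
        self.head = nn.Linear(d_model, 3)

    def knn_mask(self, xyz):
        # Boolean mask that blocks attention outside each point's k nearest
        # neighbours (True = masked out). Each point always keeps itself.
        dist = torch.cdist(xyz, xyz)                        # (B, N, N)
        idx = dist.topk(self.k, dim=-1, largest=False).indices
        mask = torch.ones_like(dist, dtype=torch.bool)
        mask.scatter_(-1, idx, False)
        return mask

    def forward(self, xyz):
        # xyz: (B, N, 3) noisy points, treated as an unordered set of vectors.
        tokens = self.embed(xyz)
        mask = self.knn_mask(xyz)
        # nn.MultiheadAttention accepts a (B * n_heads, N, N) attention mask.
        mask = mask.repeat_interleave(self.n_heads, dim=0)
        feats = self.encoder(tokens, mask=mask)
        return xyz + self.head(feats)                       # denoised points


if __name__ == "__main__":
    noisy = torch.randn(2, 1024, 3)
    print(PointDenoiser()(noisy).shape)  # torch.Size([2, 1024, 3])
```

Restricting attention to each point's k nearest neighbours is one plausible way to keep the model sensitive to local detail, which is the stated motivation for Local Point Attention; training such a model would typically minimise a distance between its output and the clean point cloud (e.g. Chamfer Distance, sketched after the related-papers list below).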
Related papers
- Rendering-Oriented 3D Point Cloud Attribute Compression using Sparse Tensor-based Transformer [52.40992954884257]
3D visualization techniques have fundamentally transformed how we interact with digital content.
The massive data size of point clouds presents significant challenges for data compression.
We propose an end-to-end deep learning framework that seamlessly integrates PCAC with differentiable rendering.
arXiv Detail & Related papers (2024-11-12T16:12:51Z)
- Applying Plain Transformers to Real-World Point Clouds [0.0]
This work revisits plain transformers for real-world point cloud understanding.
To close the performance gap due to the lack of inductive bias, we investigate self-supervised pre-training with a masked autoencoder (MAE).
Our models achieve SOTA results in semantic segmentation on the S3DIS dataset and object detection on the ScanNet dataset with lower computational costs.
arXiv Detail & Related papers (2023-02-28T21:06:36Z)
- AdaPoinTr: Diverse Point Cloud Completion with Adaptive Geometry-Aware Transformers [94.11915008006483]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We design a new model, called PoinTr, which adopts a Transformer encoder-decoder architecture for point cloud completion.
Our method attains 6.53 CD on PCN, 0.81 CD on ShapeNet-55, and 0.392 MMD on real-world KITTI (CD denotes Chamfer Distance; a reference sketch is given after this list).
arXiv Detail & Related papers (2023-01-11T16:14:12Z)
- Pix4Point: Image Pretrained Standard Transformers for 3D Point Cloud Understanding [62.502694656615496]
We present Progressive Point Patch Embedding and a new point cloud Transformer model, PViT.
PViT shares the same backbone as the Transformer but is shown to be less data-hungry, enabling it to achieve performance comparable to the state of the art.
We formulate a simple yet effective pipeline dubbed "Pix4Point" that allows harnessing Transformers pretrained in the image domain to enhance downstream point cloud understanding.
arXiv Detail & Related papers (2022-08-25T17:59:29Z)
- Point-BERT: Pre-training 3D Point Cloud Transformers with Masked Point Modeling [104.82953953453503]
We present Point-BERT, a new paradigm for learning Transformers that generalizes the concept of BERT to 3D point clouds.
Experiments demonstrate that the proposed BERT-style pre-training strategy significantly improves the performance of standard point cloud Transformers.
arXiv Detail & Related papers (2021-11-29T18:59:03Z)
- PU-Transformer: Point Cloud Upsampling Transformer [38.05362492645094]
We focus on the point cloud upsampling task that intends to generate dense high-fidelity point clouds from sparse input data.
Specifically, to activate the transformer's strong capability in representing features, we develop a new variant of a multi-head self-attention structure.
We demonstrate the outstanding performance of our approach by comparing it with state-of-the-art CNN-based methods on different benchmarks.
arXiv Detail & Related papers (2021-11-24T03:25:35Z)
- Deep Point Cloud Reconstruction [74.694733918351]
Point clouds obtained from 3D scanning are often sparse, noisy, and irregular.
To cope with these issues, recent studies have separately addressed densifying, denoising, and completing inaccurate point clouds.
We propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) refinement via transformers that convert the discrete voxels into 3D points.
arXiv Detail & Related papers (2021-11-23T07:53:28Z)
- PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers [81.71904691925428]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We also design a new model, called PoinTr, that adopts a transformer encoder-decoder architecture for point cloud completion.
Our method outperforms state-of-the-art methods by a large margin on both the new benchmarks and the existing ones.
arXiv Detail & Related papers (2021-08-19T17:58:56Z)
- Differentiable Manifold Reconstruction for Point Cloud Denoising [23.33652755967715]
3D point clouds are often perturbed by noise due to the inherent limitations of acquisition equipment.
We propose to learn the underlying manifold of a noisy point cloud from differentiably subsampled points.
We show that our method significantly outperforms state-of-the-art denoising methods under both synthetic noise and real-world noise.
arXiv Detail & Related papers (2020-07-27T13:31:41Z)
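Several of the results above are reported in Chamfer Distance (CD), and point cloud denoising models are commonly trained and evaluated with it as well. The following is a minimal reference sketch of a symmetric, squared Chamfer Distance in PyTorch; benchmarks differ in their squaring and normalisation conventions, so this is illustrative rather than the evaluation code used by any of these papers.

```python
import torch


def chamfer_distance(pred, gt):
    """Symmetric Chamfer Distance between two batched point sets.

    pred: (B, N, 3) predicted / denoised points
    gt:   (B, M, 3) reference points
    Note: this version averages squared nearest-neighbour distances;
    individual benchmarks may use unsquared distances or other scaling.
    """
    dist = torch.cdist(pred, gt) ** 2                       # (B, N, M) squared pairwise distances
    pred_to_gt = dist.min(dim=2).values.mean(dim=1)          # nearest gt point for each pred point
    gt_to_pred = dist.min(dim=1).values.mean(dim=1)          # nearest pred point for each gt point
    return (pred_to_gt + gt_to_pred).mean()


# Example: identical point sets give zero distance.
pts = torch.rand(1, 512, 3)
print(chamfer_distance(pts, pts).item())  # 0.0
```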
This list is automatically generated from the titles and abstracts of the papers on this site.