DMF-Net: Image-Guided Point Cloud Completion with Dual-Channel Modality Fusion and Shape-Aware Upsampling Transformer
- URL: http://arxiv.org/abs/2406.17319v1
- Date: Tue, 25 Jun 2024 07:08:19 GMT
- Title: DMF-Net: Image-Guided Point Cloud Completion with Dual-Channel Modality Fusion and Shape-Aware Upsampling Transformer
- Authors: Aihua Mao, Yuxuan Tang, Jiangtao Huang, Ying He
- Abstract summary: We propose a novel dual-channel modality fusion network for image-guided point cloud completion.
In the first stage, DMF-Net takes a partial point cloud and the corresponding image as input to recover a coarse point cloud.
In the second stage, the coarse point cloud is upsampled twice with a shape-aware upsampling transformer to obtain a dense and complete point cloud.
- Score: 7.210417508954435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the task of single-view image-guided point cloud completion. Existing methods have achieved promising results by fusing image information into the point cloud either explicitly or implicitly. However, given that the image carries global shape information while the partial point cloud carries rich local details, we believe both modalities deserve equal attention during modality fusion. To this end, we propose a novel dual-channel modality fusion network for image-guided point cloud completion (named DMF-Net), which works in a coarse-to-fine manner. In the first stage, DMF-Net takes a partial point cloud and the corresponding image as input to recover a coarse point cloud. In the second stage, the coarse point cloud is upsampled twice with a shape-aware upsampling transformer to obtain a dense and complete point cloud. Extensive quantitative and qualitative experimental results show that DMF-Net outperforms state-of-the-art unimodal and multimodal point cloud completion works on the ShapeNet-ViPC dataset.
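The two-stage, coarse-to-fine pipeline described in the abstract can be sketched as follows. This is a toy illustration only: the offsets here are random placeholders standing in for the learned outputs of DMF-Net's shape-aware upsampling transformer, and the coarse stage is faked with uniform samples.

```python
import numpy as np

def upsample_once(points, rng):
    """Toy stand-in for one upsampling step: each point spawns two
    children displaced by small offsets (random here; learned by the
    shape-aware upsampling transformer in DMF-Net), doubling the count."""
    offsets = rng.normal(scale=0.01, size=(points.shape[0], 2, 3))
    children = points[:, None, :] + offsets      # (N, 2, 3)
    return children.reshape(-1, 3)               # (2N, 3)

rng = np.random.default_rng(0)
coarse = rng.uniform(-1, 1, size=(256, 3))       # placeholder stage-1 output
dense = upsample_once(upsample_once(coarse, rng), rng)  # upsampled twice
print(coarse.shape, dense.shape)                 # (256, 3) (1024, 3)
```

Upsampling twice with a factor of 2 per step turns a 256-point coarse cloud into a 1024-point dense one; the real network predicts the child offsets from shape features rather than sampling them.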
Related papers
- DC-PCN: Point Cloud Completion Network with Dual-Codebook Guided Quantization [16.57261414187004]
Point cloud completion aims to reconstruct complete 3D shapes from partial 3D point clouds.
We introduce a novel point cloud completion network, namely the Dual-Codebook Point Completion Network (DC-PCN).
DC-PCN consists of an encoder-codebook and a decoder-codebook, designed to capture distinct point cloud patterns at shallow and deep levels.
arXiv Detail & Related papers (2025-01-19T06:57:45Z)
- DuInNet: Dual-Modality Feature Interaction for Point Cloud Completion [33.023714580710504]
We contribute a large-scale multimodal point cloud completion benchmark ModelNet-MPC with richer shape categories and more diverse test data.
Besides the fully supervised point cloud completion task, two additional tasks including denoising completion and zero-shot learning completion are proposed.
Experiments on the ShapeNet-ViPC and ModelNet-MPC benchmarks demonstrate that DuInNet exhibits superiority, robustness and transfer ability in all completion tasks over state-of-the-art methods.
arXiv Detail & Related papers (2024-07-10T05:19:40Z)
- ComPC: Completing a 3D Point Cloud with 2D Diffusion Priors [52.72867922938023]
3D point clouds directly collected from objects through sensors are often incomplete due to self-occlusion.
We propose a test-time framework for completing partial point clouds across unseen categories without any requirement for training.
arXiv Detail & Related papers (2024-04-10T08:02:17Z)
- Rotation-Invariant Completion Network [8.023732679237021]
Real-world point clouds usually suffer from incompleteness and display different poses.
Current point cloud completion methods excel in reproducing complete point clouds with consistent poses as seen in the training set.
We propose a network named Rotation-Invariant Completion Network (RICNet), which consists of two parts: a Dual Pipeline Completion Network (DPCNet) and an enhancing module.
arXiv Detail & Related papers (2023-08-23T07:58:20Z)
- Variational Relational Point Completion Network for Robust 3D Classification [59.80993960827833]
Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details.
This paper proposes a variational framework, the Variational Relational Point Completion Network (VRCNet), with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
arXiv Detail & Related papers (2023-04-18T17:03:20Z)
- Point2Vec for Self-Supervised Representation Learning on Point Clouds [66.53955515020053]
We extend data2vec to the point cloud domain and report encouraging results on several downstream tasks.
We propose point2vec, which unleashes the full potential of data2vec-like pre-training on point clouds.
arXiv Detail & Related papers (2023-03-29T10:08:29Z)
- CSDN: Cross-modal Shape-transfer Dual-refinement Network for Point Cloud Completion [28.012936521291834]
We propose a cross-modal shape-transfer dual-refinement network (termed CSDN) for point cloud completion.
The first module transfers the intrinsic shape characteristics from single images to guide the geometry generation of missing regions.
The second module refines the coarse output by adjusting the positions of the generated points.
arXiv Detail & Related papers (2022-08-01T11:20:56Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
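The Chamfer Distance (CD) loss mentioned above measures how well two point sets match: each point is paired with its nearest neighbor in the other set, and the mean squared distances are summed over both directions. A minimal NumPy sketch (brute-force pairwise distances, not the KD-tree or CUDA kernels used in practice):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point sets p (N,3) and q (M,3):
    mean squared nearest-neighbor distance in both directions."""
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
print(chamfer_distance(a, b))  # 1.0
```

Because CD only penalizes average nearest-neighbor distances, it can reward blurry, over-smoothed completions, which is part of the motivation for diffusion-based refinement in PDR.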
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers [81.71904691925428]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We also design a new model, called PoinTr, that adopts a transformer encoder-decoder architecture for point cloud completion.
Our method outperforms state-of-the-art methods by a large margin on both the new benchmarks and the existing ones.
arXiv Detail & Related papers (2021-08-19T17:58:56Z)
- SSPU-Net: Self-Supervised Point Cloud Upsampling via Differentiable Rendering [21.563862632172363]
We propose a self-supervised point cloud upsampling network (SSPU-Net) to generate dense point clouds without using ground truth.
To achieve this, we exploit the consistency between the input sparse point cloud and generated dense point cloud for the shapes and rendered images.
arXiv Detail & Related papers (2021-08-01T13:26:01Z)
- GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
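GRNet's "gridding" maps an irregular point cloud onto a regular 3D grid so that 3D convolutions can be applied. A crude approximation of the idea is plain occupancy counting, as sketched below; note that GRNet's actual gridding layer is differentiable, spreading each point's contribution over the eight surrounding grid vertices, which this sketch does not do.

```python
import numpy as np

def voxelize(points, resolution=32, lo=-1.0, hi=1.0):
    """Crude voxel-count approximation of gridding: bin each point of an
    (N, 3) cloud into a resolution^3 occupancy grid over [lo, hi]^3.
    (GRNet instead distributes each point differentiably over the 8
    neighboring grid vertices.)"""
    idx = np.floor((points - lo) / (hi - lo) * resolution).astype(int)
    idx = np.clip(idx, 0, resolution - 1)
    grid = np.zeros((resolution,) * 3)
    np.add.at(grid, tuple(idx.T), 1.0)  # accumulate counts per voxel
    return grid

pts = np.random.default_rng(1).uniform(-1, 1, size=(2048, 3))
grid = voxelize(pts)
print(grid.shape, int(grid.sum()))  # (32, 32, 32) 2048
```

The regular grid output is what lets GRNet run 3D CNN layers and then recover points from the grid via an inverse "gridding reverse" step.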
This list is automatically generated from the titles and abstracts of the papers in this site.