Diffusion-Occ: 3D Point Cloud Completion via Occupancy Diffusion
- URL: http://arxiv.org/abs/2408.14846v2
- Date: Mon, 9 Sep 2024 06:50:18 GMT
- Title: Diffusion-Occ: 3D Point Cloud Completion via Occupancy Diffusion
- Authors: Guoqing Zhang, Jian Liu
- Abstract summary: We introduce **Diffusion-Occ**, a novel framework for Diffusion Point Cloud Completion.
By thresholding the occupancy field, we convert it into a complete point cloud.
Experimental results demonstrate that Diffusion-Occ outperforms existing discriminative and generative methods.
- Score: 5.189790379672664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point clouds are crucial for capturing three-dimensional data but often suffer from incompleteness due to limitations such as resolution and occlusion. Traditional methods typically rely on point-based approaches within discriminative frameworks for point cloud completion. In this paper, we introduce **Diffusion-Occ**, a novel framework for Diffusion Point Cloud Completion. Diffusion-Occ utilizes a two-stage coarse-to-fine approach. In the first stage, the Coarse Density Voxel Prediction Network (CDNet) processes partial points to predict coarse density voxels, streamlining global feature extraction through voxel classification, as opposed to previous regression-based methods. In the second stage, we introduce the Occupancy Generation Network (OccGen), a conditional occupancy diffusion model based on a transformer architecture and enhanced by our Point-Voxel Fuse (PVF) block. This block integrates coarse density voxels with partial points to leverage both global and local features for comprehensive completion. By thresholding the occupancy field, we convert it into a complete point cloud. Additionally, our method employs diverse training mixtures and efficient diffusion parameterization to enable effective one-step sampling during both training and inference. Experimental results demonstrate that Diffusion-Occ outperforms existing discriminative and generative methods.
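The final thresholding step is straightforward to picture. Below is a minimal sketch of converting a dense occupancy field into a point cloud by keeping the centers of voxels whose occupancy exceeds a threshold; the function name, grid resolution, cube extent, and threshold value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def occupancy_to_points(occ: np.ndarray, threshold: float = 0.5,
                        grid_min: float = -1.0, grid_max: float = 1.0) -> np.ndarray:
    """Convert a dense occupancy field of shape (D, H, W) into an (N, 3) point cloud.

    Voxels whose occupancy exceeds `threshold` are kept, and each kept voxel
    index is mapped to its center coordinate in the [grid_min, grid_max]^3 cube.
    """
    sizes = np.array(occ.shape, dtype=np.float32)      # (D, H, W) grid resolution
    idx = np.argwhere(occ > threshold)                 # (N, 3) integer voxel indices
    centers = (idx.astype(np.float32) + 0.5) / sizes   # normalized voxel centers in [0, 1]
    return grid_min + centers * (grid_max - grid_min)

# Example with a random field standing in for OccGen's predicted occupancy:
occ = np.random.rand(64, 64, 64).astype(np.float32)
complete_cloud = occupancy_to_points(occ, threshold=0.9)
print(complete_cloud.shape)  # (N, 3)
```

The threshold trades density for precision: lower values keep more voxels but admit more spurious points.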
Related papers
- Rectified Diffusion Guidance for Conditional Generation [62.00207951161297]
We revisit the theory behind CFG and rigorously confirm that the improper configuration of the combination coefficients (i.e., the widely used summing-to-one version) brings about an expectation shift in the generative distribution; the standard combination is sketched after this entry.
We propose ReCFG, which relaxes the guidance coefficients so that denoising with ReCFG strictly aligns with diffusion theory.
The rectified coefficients can then be readily pre-computed by traversing the observed data, leaving sampling speed barely affected.
arXiv Detail & Related papers (2024-10-24T13:41:32Z)
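For context, the "summing-to-one" combination that the ReCFG entry critiques is the standard classifier-free guidance rule; the notation below is the common convention, not taken from the paper itself.

```latex
\hat{\epsilon}_\theta(x_t, c) = (1 + w)\,\epsilon_\theta(x_t, c) - w\,\epsilon_\theta(x_t, \varnothing),
\qquad (1 + w) + (-w) = 1 .
```

ReCFG relaxes the constraint that the two coefficients sum to one, pre-computing rectified coefficients from the observed data instead.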
- Zero-shot Point Cloud Completion Via 2D Priors [52.72867922938023]
3D point cloud completion is designed to recover complete shapes from partially observed point clouds.
We propose a zero-shot framework aimed at completing partially observed point clouds across any unseen categories.
arXiv Detail & Related papers (2024-04-10T08:02:17Z)
- Enhancing Diffusion-based Point Cloud Generation with Smoothness Constraint [5.140589325829964]
Diffusion models have been popular for point cloud generation tasks.
We propose incorporating the local smoothness constraint into the diffusion framework for point cloud generation.
Experiments demonstrate the proposed model can generate realistic shapes and smoother point clouds, outperforming multiple state-of-the-art methods.
arXiv Detail & Related papers (2024-04-03T01:55:15Z)
- IPoD: Implicit Field Learning with Point Diffusion for Generalizable 3D Object Reconstruction from Single RGB-D Images [50.4538089115248]
Generalizable 3D object reconstruction from single-view RGB-D images remains a challenging task.
We propose a novel approach, IPoD, which harmonizes implicit field learning with point diffusion.
Experiments conducted on the CO3D-v2 dataset affirm the superiority of IPoD, achieving 7.8% improvement in F-score and 28.6% in Chamfer distance over existing methods.
arXiv Detail & Related papers (2024-03-30T07:17:37Z)
- Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif).
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z)
- CCD-3DR: Consistent Conditioning in Diffusion for Single-Image 3D Reconstruction [81.98244738773766]
We present CCD-3DR, which exploits a novel centered diffusion probabilistic model for consistent local feature conditioning.
CCD-3DR outperforms all competitors by a large margin, with over 40% improvement.
arXiv Detail & Related papers (2023-08-15T15:27:42Z)
- HybridFusion: LiDAR and Vision Cross-Source Point Cloud Fusion [15.94976936555104]
We propose a cross-source point cloud fusion algorithm called HybridFusion.
It can register cross-source dense point clouds from different viewing angles in large outdoor scenes.
The proposed approach is evaluated comprehensively through qualitative and quantitative experiments.
arXiv Detail & Related papers (2023-04-10T10:54:54Z)
- Generative Modeling with Flow-Guided Density Ratio Learning [12.192867460641835]
Flow-Guided Density Ratio Learning (FDRL) is a simple and scalable approach to generative modeling.
We show that FDRL can generate images at resolutions as high as $128\times128$, as well as outperform existing gradient flow baselines on quantitative benchmarks.
arXiv Detail & Related papers (2023-03-07T07:55:52Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use the Chamfer Distance (CD) loss for training; a minimal definition of CD follows this entry.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
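Since the PDR entry above references training with Chamfer Distance, here is a minimal NumPy sketch of the standard symmetric (squared) Chamfer Distance between two point sets; this is the textbook definition, not code from any of the listed papers.

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric (squared) Chamfer Distance between point sets p (N, 3) and q (M, 3)."""
    # Pairwise squared distances between all points, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbor distance in both directions.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

p = np.random.rand(1024, 3).astype(np.float32)
q = np.random.rand(2048, 3).astype(np.float32)
print(chamfer_distance(p, q))
```

The double `min` makes the measure symmetric, so neither the predicted nor the ground-truth point set can be ignored during training.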