RealDiff: Real-world 3D Shape Completion using Self-Supervised Diffusion Models
- URL: http://arxiv.org/abs/2409.10180v1
- Date: Mon, 16 Sep 2024 11:18:57 GMT
- Title: RealDiff: Real-world 3D Shape Completion using Self-Supervised Diffusion Models
- Authors: Başak Melis Öcal, Maxim Tatarchenko, Sezer Karaoglu, Theo Gevers
- Abstract summary: We propose a self-supervised framework, namely RealDiff, that formulates point cloud completion as a conditional generation problem directly on real-world measurements.
Specifically, RealDiff simulates a diffusion process at the missing object parts while conditioning the generation on the partial input to address the multimodal nature of the task.
Experimental results show that our method consistently outperforms state-of-the-art methods in real-world point cloud completion.
- Score: 15.209079637302905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud completion aims to recover the complete 3D shape of an object from partial observations. While approaches relying on synthetic shape priors achieved promising results in this domain, their applicability and generalizability to real-world data are still limited. To tackle this problem, we propose a self-supervised framework, namely RealDiff, that formulates point cloud completion as a conditional generation problem directly on real-world measurements. To better deal with noisy observations without resorting to training on synthetic data, we leverage additional geometric cues. Specifically, RealDiff simulates a diffusion process at the missing object parts while conditioning the generation on the partial input to address the multimodal nature of the task. We further regularize the training by matching object silhouettes and depth maps, predicted by our method, with the externally estimated ones. Experimental results show that our method consistently outperforms state-of-the-art methods in real-world point cloud completion.
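The abstract describes two ingredients: a diffusion process simulated only over the missing object parts, conditioned on the partial observation, and a regularizer that compares silhouettes and depth maps rendered from the prediction against externally estimated ones. The following is a minimal, hypothetical PyTorch sketch of one training step in that spirit; the denoiser network, the self-supervised target for the missing part, the hard point projection, the loss weights, and all tensor shapes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a RealDiff-style training step (not the authors' code).
# Assumptions: `partial` is (B, Np, 3) and `target_missing` is (B, Nm, 3) in camera
# coordinates, `denoiser(noisy, partial, t)` predicts the added noise, K is a 3x3 intrinsic
# matrix. `target_missing` stands in for whatever self-supervised target the method builds
# from real measurements; with synthetic supervision it would be the ground-truth missing part.
import torch
import torch.nn.functional as F

def project_points(points, K, hw):
    """Hard z-buffer projection of (B, N, 3) camera-frame points to a silhouette and depth map.
    For illustration only; backpropagating the image-space losses would require a soft,
    differentiable renderer instead of this rounding-based rasterization."""
    B, N, _ = points.shape
    H, W = hw
    sil = torch.zeros(B, H, W, device=points.device)
    depth = torch.full((B, H, W), float("inf"), device=points.device)
    uvw = points @ K.T                                       # pinhole projection
    u = (uvw[..., 0] / uvw[..., 2]).round().long().clamp(0, W - 1)
    v = (uvw[..., 1] / uvw[..., 2]).round().long().clamp(0, H - 1)
    for b in range(B):                                       # naive per-point z-buffer
        for n in range(N):
            z = points[b, n, 2]
            if 0 < z < depth[b, v[b, n], u[b, n]]:
                depth[b, v[b, n], u[b, n]] = z
                sil[b, v[b, n], u[b, n]] = 1.0
    depth[depth == float("inf")] = 0.0
    return sil, depth

def training_step(denoiser, partial, target_missing, alphas_bar, silhouette_est, depth_est, K, hw):
    """Diffuse the missing part, condition the denoiser on the partial input, and
    regularize with externally estimated silhouettes and depth maps."""
    B = target_missing.shape[0]
    t = torch.randint(0, len(alphas_bar), (B,), device=partial.device)   # random timestep per sample
    a_bar = alphas_bar[t].view(B, 1, 1)                                  # cumulative noise schedule
    noise = torch.randn_like(target_missing)
    noisy_missing = a_bar.sqrt() * target_missing + (1 - a_bar).sqrt() * noise

    pred_noise = denoiser(noisy_missing, partial, t)                     # conditioned on the partial observation
    loss_diff = F.mse_loss(pred_noise, noise)

    # one-step estimate of the clean missing part, used only for the image-space terms
    x0_hat = (noisy_missing - (1 - a_bar).sqrt() * pred_noise) / a_bar.sqrt()
    complete = torch.cat([partial, x0_hat], dim=1)

    sil_pred, depth_pred = project_points(complete, K, hw)
    loss_sil = F.l1_loss(sil_pred, silhouette_est)
    loss_depth = F.l1_loss(depth_pred * silhouette_est, depth_est * silhouette_est)

    return loss_diff + 0.1 * loss_sil + 0.1 * loss_depth                 # weights are placeholders
```

In the paper's setting the silhouettes and depth maps used for regularization are externally estimated; in this sketch they simply enter as inputs, and the balance between the diffusion and image-space terms is a tuning choice.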
Related papers
- Self-supervised 3D Point Cloud Completion via Multi-view Adversarial Learning [61.14132533712537]
We propose MAL-SPC, a framework that effectively leverages both object-level and category-specific geometric similarities to complete missing structures.
Our MAL-SPC requires no complete 3D supervision and needs only a single partial point cloud per object.
arXiv Detail & Related papers (2024-07-13T06:53:39Z)
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- IPoD: Implicit Field Learning with Point Diffusion for Generalizable 3D Object Reconstruction from Single RGB-D Images [50.4538089115248]
Generalizable 3D object reconstruction from single-view RGB-D images remains a challenging task.
We propose a novel approach, IPoD, which harmonizes implicit field learning with point diffusion.
Experiments conducted on the CO3D-v2 dataset affirm the superiority of IPoD, achieving a 7.8% improvement in F-score and a 28.6% improvement in Chamfer distance over existing methods.
arXiv Detail & Related papers (2024-03-30T07:17:37Z)
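The IPoD entry above, like several others in this list, reports Chamfer distance and F-score as completion metrics. Below is a small NumPy sketch of the usual definitions; conventions (squared versus unsquared distances, the F-score threshold) vary between papers, so this is illustrative rather than any specific paper's evaluation code.

```python
# Minimal reference implementations of two common point cloud completion metrics.
# Treat these as illustrative; exact conventions differ across papers.
import numpy as np

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between point sets of shape (N, 3) and (M, 3)."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def f_score(pred, gt, threshold=0.01):
    """F-score at a distance threshold: harmonic mean of precision and recall."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    precision = (d.min(axis=1) < threshold).mean()   # predicted points close to the ground truth
    recall = (d.min(axis=0) < threshold).mean()      # ground-truth points covered by the prediction
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```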
- DiffComplete: Diffusion-based Generative 3D Shape Completion [114.43353365917015]
We introduce a new diffusion-based approach for shape completion on 3D range scans.
We strike a balance between realism, multi-modality, and high fidelity.
DiffComplete sets new state-of-the-art performance on two large-scale 3D shape completion benchmarks.
arXiv Detail & Related papers (2023-06-28T16:07:36Z)
- Self-Supervised Learning for Multimodal Non-Rigid 3D Shape Matching [15.050801537501462]
We introduce a self-supervised multimodal learning strategy that combines mesh-based functional map regularisation with a contrastive loss that couples mesh and point cloud data.
Our shape matching approach obtains intramodal correspondences for triangle meshes, complete point clouds, and partially observed point clouds.
We demonstrate that our method achieves state-of-the-art results on several challenging benchmark datasets.
arXiv Detail & Related papers (2023-03-20T09:47:02Z)
- Implicit Shape Completion via Adversarial Shape Priors [46.48590354256945]
We present a novel neural implicit shape method for partial point cloud completion.
We combine a conditional Deep-SDF architecture with learned, adversarial shape priors.
We train a PointNet++ discriminator that impels the generator to produce plausible, globally consistent reconstructions.
arXiv Detail & Related papers (2022-04-21T12:49:59Z)
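The adversarial shape prior entry above pairs a conditional Deep-SDF generator with a PointNet++ discriminator. As a rough, hypothetical sketch of the adversarial part of such a setup (the discriminator network and the shape representation are assumptions here, not taken from the paper), a standard non-saturating GAN objective over completed shapes looks like this:

```python
# Hypothetical sketch of an adversarial shape prior (not the paper's code):
# a discriminator scores completed shapes, and its feedback pushes the generator
# toward plausible, globally consistent reconstructions.
import torch
import torch.nn.functional as F

def adversarial_losses(discriminator, real_complete, generated_complete):
    """Non-saturating GAN losses over point clouds of shape (B, N, 3).
    `discriminator` is assumed to map a point cloud to one logit per shape
    (a PointNet++-style network in the paper; any set encoder works for the sketch)."""
    real_logit = discriminator(real_complete)
    fake_logit = discriminator(generated_complete.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit)) +
              F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit)))
    g_logit = discriminator(generated_complete)
    g_loss = F.binary_cross_entropy_with_logits(g_logit, torch.ones_like(g_logit))
    return d_loss, g_loss
```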
- Quasi-Balanced Self-Training on Noise-Aware Synthesis of Object Point Clouds for Closing Domain Gap [34.590531549797355]
We propose an integrated scheme in which object point clouds are synthesized in a physically realistic way by projecting speckle patterns onto CAD models and rendering stereo images.
Experimental results verify the effectiveness of our method, and of both of its modules, for unsupervised domain adaptation on point cloud classification.
arXiv Detail & Related papers (2022-03-08T03:44:49Z)
- Cascaded Refinement Network for Point Cloud Completion with Self-supervision [74.80746431691938]
We introduce a two-branch network for shape completion.
The first branch is a cascaded shape completion sub-network to synthesize complete objects.
The second branch is an auto-encoder to reconstruct the original partial input.
arXiv Detail & Related papers (2020-10-17T04:56:22Z)
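The cascaded-refinement entry above combines a completion branch with an auto-encoding branch over the partial input. A toy sketch of that two-branch idea is given below; the layer sizes and the simple PointNet-style pooling are placeholders, not the paper's architecture.

```python
# Toy illustration of the two-branch self-supervision idea (not the paper's network).
# Branch 1 maps the partial cloud to a complete one; branch 2 reconstructs the partial
# input itself, tying the shared encoder to the observed geometry.
import torch
import torch.nn as nn

class TwoBranchCompletion(nn.Module):
    def __init__(self, n_complete=2048, n_partial=1024, feat=256):
        super().__init__()
        self.n_complete, self.n_partial = n_complete, n_partial
        self.encoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, feat))
        self.complete_head = nn.Linear(feat, n_complete * 3)    # completion branch
        self.recon_head = nn.Linear(feat, n_partial * 3)        # auto-encoding branch

    def forward(self, partial):                                 # partial: (B, n_partial, 3)
        feat = self.encoder(partial).max(dim=1).values          # PointNet-style max pooling per cloud
        complete = self.complete_head(feat).view(-1, self.n_complete, 3)
        recon = self.recon_head(feat).view(-1, self.n_partial, 3)
        return complete, recon
```

Training would pair a reconstruction loss between `recon` and the observed partial cloud (the self-supervised signal) with whatever completion objective is available for the first branch.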
- Weakly-supervised 3D Shape Completion in the Wild [91.04095516680438]
We address the problem of learning complete 3D shapes from unaligned, real-world partial point clouds.
We propose a weakly-supervised method to estimate both 3D canonical shape and 6-DoF pose for alignment, given multiple partial observations.
Experiments on both synthetic and real data show that learning 3D shape completion from large-scale data without shape or pose supervision is feasible and promising.
arXiv Detail & Related papers (2020-08-20T17:53:42Z)
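The last entry estimates a canonical shape together with a 6-DoF pose for aligning each partial observation. As a small worked example of what applying such a pose means, the snippet below maps canonical-frame points into an observation frame with a rotation and translation; the axis-angle parameterization is chosen only for the example and is not stated in the summary above.

```python
# Minimal illustration of applying an estimated 6-DoF pose to a canonical shape.
import numpy as np

def axis_angle_to_matrix(axis, angle):
    """Rodrigues' formula: rotation matrix for a unit axis and an angle in radians."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def apply_pose(canonical, axis, angle, translation):
    """Map (N, 3) canonical-frame points into the observation frame: x' = R x + t."""
    R = axis_angle_to_matrix(axis, angle)
    return canonical @ R.T + translation

# Example: a 90-degree rotation about the z-axis followed by a small shift.
points = np.random.rand(100, 3)
aligned = apply_pose(points, axis=np.array([0.0, 0.0, 1.0]), angle=np.pi / 2,
                     translation=np.array([0.1, 0.0, 0.0]))
```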
This list is automatically generated from the titles and abstracts of the papers in this site.