Learning A Locally Unified 3D Point Cloud for View Synthesis
- URL: http://arxiv.org/abs/2209.05013v3
- Date: Sat, 30 Sep 2023 13:11:29 GMT
- Title: Learning A Locally Unified 3D Point Cloud for View Synthesis
- Authors: Meng You, Mantang Guo, Xianqiang Lyu, Hui Liu, and Junhui Hou
- Abstract summary: We propose a new deep learning-based view synthesis paradigm that learns a locally unified 3D point cloud from source views.
Experimental results on three benchmark datasets demonstrate that our method can improve the average PSNR by more than 4 dB.
- Score: 45.757280092357355
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this paper, we explore the problem of 3D point cloud representation-based
view synthesis from a set of sparse source views. To tackle this challenging
problem, we propose a new deep learning-based view synthesis paradigm that
learns a locally unified 3D point cloud from source views. Specifically, we
first construct sub-point clouds by projecting source views to 3D space based
on their depth maps. Then, we learn the locally unified 3D point cloud by
adaptively fusing points within a local neighborhood defined on the union of the
sub-point clouds. In addition, we propose a 3D geometry-guided image
restoration module to fill the holes and recover high-frequency details of the
rendered novel views. Experimental results on three benchmark datasets
demonstrate that our method can improve the average PSNR by more than 4 dB
while preserving more accurate visual details, compared with state-of-the-art
view synthesis methods.
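The sub-point-cloud construction step described in the abstract is standard depth-based back-projection. Below is a minimal NumPy sketch of that step, assuming a pinhole camera with known intrinsics K and world-to-camera pose (R, t); the learned adaptive fusion and the 3D geometry-guided restoration module are the paper's contributions and are not reproduced here.

```python
# Minimal sketch: lift each source view to a sub-point cloud via its depth map.
# Assumes a pinhole camera model; K = intrinsics, (R, t) = world-to-camera pose.
import numpy as np

def backproject_view(depth: np.ndarray, K: np.ndarray,
                     R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Lift an HxW depth map to an (H*W, 3) sub-point cloud in world space."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))            # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T                           # camera-space rays
    cam_pts = rays * depth.reshape(-1, 1)                     # scale by depth
    return (cam_pts - t) @ R                                  # camera -> world

def union_of_sub_point_clouds(depths, Ks, Rs, ts):
    """Union over all source views' sub-point clouds."""
    return np.concatenate([backproject_view(d, K, R, t)
                           for d, K, R, t in zip(depths, Ks, Rs, ts)], axis=0)
```

The union produced here corresponds to the structure on which the paper defines local neighborhoods for adaptive point fusion.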
Related papers
- PointRecon: Online Point-based 3D Reconstruction via Ray-based 2D-3D Matching [10.5792547614413]
We propose a novel online, point-based 3D reconstruction method from posed monocular RGB videos.
Our model maintains a global point cloud representation of the scene, continuously updating the features and 3D locations of points as new images are observed.
Experiments on the ScanNet dataset show that our method achieves quality comparable to other online MVS approaches.
arXiv Detail & Related papers (2024-10-30T17:29:25Z)
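As a rough illustration of the online update described above, the hypothetical sketch below maintains a global point cloud whose positions and features are refined as each new frame's points arrive. The nearest-neighbour merge and the 32-D feature size are assumptions made for illustration; they stand in for, and are not, PointRecon's learned ray-based 2D-3D matching.

```python
# Hypothetical sketch of online point cloud maintenance (not the paper's method):
# new observations either refine a nearby existing point or are appended as new.
import numpy as np

class GlobalPointCloud:
    def __init__(self, merge_radius: float = 0.02, feat_dim: int = 32):
        self.xyz = np.empty((0, 3))          # point positions
        self.feat = np.empty((0, feat_dim))  # per-point features (dim assumed)
        self.radius = merge_radius

    def integrate(self, new_xyz: np.ndarray, new_feat: np.ndarray) -> None:
        """Merge one frame's points: average matched points, append the rest."""
        for p, f in zip(new_xyz, new_feat):
            if len(self.xyz):
                d = np.linalg.norm(self.xyz - p, axis=1)
                i = int(d.argmin())
                if d[i] < self.radius:       # match: refine position and feature
                    self.xyz[i] = 0.5 * (self.xyz[i] + p)
                    self.feat[i] = 0.5 * (self.feat[i] + f)
                    continue
            self.xyz = np.vstack([self.xyz, p])   # no match: add a new point
            self.feat = np.vstack([self.feat, f])
```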
- Few-shot point cloud reconstruction and denoising via learned Gaussian splats renderings and fine-tuned diffusion features [52.62053703535824]
We propose a method to reconstruct point clouds from a few images and to denoise point clouds from their renderings.
To improve reconstruction in constrained settings, we regularize the training of a differentiable renderer with a hybrid surface and appearance representation.
We demonstrate how these learned filters can be used to remove point cloud noise without 3D supervision.
arXiv Detail & Related papers (2024-04-01T13:38:16Z)
- Leveraging Monocular Disparity Estimation for Single-View Reconstruction [8.583436410810203]
We leverage advances in monocular depth estimation to obtain disparity maps.
We transform 2D normalized disparity maps into 3D point clouds by solving an optimization on the relevant camera parameters.
arXiv Detail & Related papers (2022-07-01T03:05:40Z)
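The disparity-to-point-cloud transform mentioned above reduces to the standard stereo relation depth = f·b / disparity followed by pinhole unprojection. The sketch below assumes the focal length f, baseline b, and principal point (cx, cy) are given, whereas the paper recovers the relevant camera parameters by optimization.

```python
# Minimal sketch: normalized disparity map -> 3D point cloud, assuming known
# focal length f, stereo baseline b, and principal point (cx, cy).
import numpy as np

def disparity_to_points(disp: np.ndarray, f: float, b: float,
                        cx: float, cy: float) -> np.ndarray:
    """Convert an HxW disparity map to an (H*W, 3) point cloud."""
    h, w = disp.shape
    depth = f * b / np.clip(disp, 1e-6, None)    # stereo relation, guarded
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / f                     # pinhole unprojection
    y = (v - cy) * depth / f
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```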
- Voint Cloud: Multi-View Point Cloud Representation for 3D Understanding [80.04281842702294]
We introduce the concept of the multi-view point cloud (Voint cloud), which represents each 3D point as a set of features extracted from several viewpoints.
This novel 3D Voint cloud representation combines the compactness of 3D point cloud representation with the natural view-awareness of multi-view representation.
We deploy a Voint neural network (VointNet) with a theoretically established functional form to learn representations in the Voint space.
arXiv Detail & Related papers (2021-11-30T13:08:19Z)
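Concretely, a Voint cloud can be pictured as a per-point stack of view-wise features, an (N, V, C) tensor instead of the usual (N, C). The sketch below uses simple max/mean pooling over views as a permutation-invariant aggregator; VointNet's learned aggregation is not reproduced here, and the shapes are illustrative assumptions.

```python
# Illustrative sketch of the Voint-cloud layout: one feature vector per view
# for each 3D point, pooled back to a per-point feature with max or mean.
import numpy as np

def pool_voints(voint_feat: np.ndarray, mode: str = "max") -> np.ndarray:
    """(N, V, C) view-wise features -> (N, C) per-point features."""
    return voint_feat.max(axis=1) if mode == "max" else voint_feat.mean(axis=1)

xyz = np.random.rand(1024, 3)           # N = 1024 points (assumed)
voints = np.random.rand(1024, 6, 64)    # features from V = 6 viewpoints, C = 64
point_feat = pool_voints(voints)        # -> (1024, 64)
```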
- Deep Point Cloud Reconstruction [74.694733918351]
Point clouds obtained from 3D scanning are often sparse, noisy, and irregular.
To cope with these issues, recent studies have separately addressed densifying, denoising, and completing inaccurate point clouds.
We propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) a transformer-based refinement that converts the discrete voxels into 3D points.
arXiv Detail & Related papers (2021-11-23T07:53:28Z)
- PnP-3D: A Plug-and-Play for 3D Point Clouds [38.05362492645094]
We propose a plug-and-play module, PnP-3D, to improve the effectiveness of existing networks in analyzing point cloud data.
To thoroughly evaluate our approach, we conduct experiments on three standard point cloud analysis tasks.
In addition to achieving state-of-the-art results, we present comprehensive studies to demonstrate our approach's advantages.
arXiv Detail & Related papers (2021-08-16T23:59:43Z)
- From Multi-View to Hollow-3D: Hallucinated Hollow-3D R-CNN for 3D Object Detection [101.20784125067559]
We propose a new architecture, namely Hallucinated Hollow-3D R-CNN, to address the problem of 3D object detection.
In our approach, we first extract multi-view features by sequentially projecting the point clouds into the perspective view and the bird's-eye view.
The 3D objects are detected via a box refinement module with a novel Hierarchical Voxel RoI Pooling operation.
arXiv Detail & Related papers (2021-07-30T02:00:06Z)
- DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization [56.15308829924527]
We propose a Siamese network that jointly learns 3D local feature detection and description directly from raw 3D points.
For detecting 3D keypoints, we predict the discriminativeness of the local descriptors in an unsupervised manner.
Experiments on various benchmarks demonstrate that our method achieves competitive results for both global point cloud retrieval and local point cloud registration.
arXiv Detail & Related papers (2020-07-17T20:21:22Z)