Sparse SPN: Depth Completion from Sparse Keypoints
- URL: http://arxiv.org/abs/2212.00987v1
- Date: Fri, 2 Dec 2022 05:45:04 GMT
- Title: Sparse SPN: Depth Completion from Sparse Keypoints
- Authors: Yuqun Wu, Jae Yong Lee, Derek Hoiem
- Abstract summary: Our long-term goal is to use image-based depth completion to create 3D models from sparse point clouds.
We extend CSPN with multiscale prediction and a dilated kernel, leading to better completion of keypoint-sampled depth.
We also show that a model trained on NYUv2 creates surprisingly good point clouds on ETH3D by completing sparse SfM points.
- Score: 17.26885039864854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our long-term goal is to use image-based depth completion to quickly create 3D models from sparse point clouds, e.g. from SfM or SLAM. Much progress has been made in depth completion. However, most current works assume well-distributed samples of known depth, e.g. Lidar or random uniform sampling, and perform poorly on uneven samples, such as those from keypoints, due to the large unsampled regions. To address this problem, we extend CSPN with multiscale prediction and a dilated kernel, leading to much better completion of keypoint-sampled depth. We also show that a model trained on NYUv2 creates surprisingly good point clouds on ETH3D by completing sparse SfM points.
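CSPN refines an initial depth map by repeatedly replacing each pixel with an affinity-weighted combination of itself and its neighbors; dilating the 3x3 kernel widens the receptive field so information can cross the large unsampled gaps between keypoints, and multiscale prediction repeats the propagation at several resolutions. As a rough illustration only (not the authors' code; the function name, tensor layout, affinity normalization, and sparse-depth re-injection are assumptions), one dilated propagation step in PyTorch might look like:

```python
import torch
import torch.nn.functional as F

def cspn_step(depth, affinity, sparse_depth, valid_mask, dilation=2):
    """One CSPN-style propagation step with a dilated 3x3 kernel (sketch).

    depth:        (B, 1, H, W) current depth estimate
    affinity:     (B, 8, H, W) learned affinities for the 8 dilated neighbors
    sparse_depth: (B, 1, H, W) known keypoint depths (0 where unknown)
    valid_mask:   (B, 1, H, W) 1 where sparse_depth is known
    """
    # Normalize affinities so the neighbor weights sum to at most 1 in
    # magnitude; the residual weight goes to the center pixel.
    abs_sum = affinity.abs().sum(dim=1, keepdim=True).clamp(min=1e-6)
    w = affinity / abs_sum
    w_center = 1.0 - w.sum(dim=1, keepdim=True)

    # Gather the 8 neighbors at the given dilation via unfold; padding equal
    # to the dilation keeps the output the same spatial size.
    k = 3
    patches = F.unfold(depth, kernel_size=k, dilation=dilation, padding=dilation)
    B, _, H, W = depth.shape
    patches = patches.view(B, k * k, H, W)
    center = patches[:, 4:5]                                # index 4 = center
    neighbors = torch.cat([patches[:, :4], patches[:, 5:]], dim=1)

    out = w_center * center + (w * neighbors).sum(dim=1, keepdim=True)
    # Re-inject the observed sparse depths so propagation respects them.
    return valid_mask * sparse_depth + (1.0 - valid_mask) * out
```

Stacking a few such steps at each scale, coarse to fine, would approximate the multiscale dilated propagation the abstract describes.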
Related papers
- DepthLab: From Partial to Complete [80.58276388743306]
Missing values remain a common challenge for depth data across its wide range of applications.
This work bridges this gap with DepthLab, a foundation depth inpainting model powered by image diffusion priors.
Our approach proves its worth in various downstream tasks, including 3D scene inpainting, text-to-3D scene generation, sparse-view reconstruction with DUST3R, and LiDAR depth completion.
arXiv Detail & Related papers (2024-12-24T04:16:38Z)
- LoopSparseGS: Loop Based Sparse-View Friendly Gaussian Splatting [18.682864169561498]
LoopSparseGS is a loop-based 3DGS framework for the sparse novel view synthesis task.
We introduce a novel Sparse-friendly Sampling (SFS) strategy to handle the oversized Gaussian ellipsoids that lead to large pixel errors.
Experiments on four datasets demonstrate that LoopSparseGS outperforms existing state-of-the-art methods for sparse-input novel view synthesis.
arXiv Detail & Related papers (2024-08-01T03:26:50Z)
- PointRegGPT: Boosting 3D Point Cloud Registration using Generative Point-Cloud Pairs for Training [90.06520673092702]
We present PointRegGPT, boosting 3D point cloud registration using generative point-cloud pairs for training.
To our knowledge, this is the first generative approach that explores realistic data generation for indoor point cloud registration.
arXiv Detail & Related papers (2024-07-19T06:29:57Z)
- ComPC: Completing a 3D Point Cloud with 2D Diffusion Priors [52.72867922938023]
3D point clouds directly collected from objects through sensors are often incomplete due to self-occlusion.
We propose a test-time framework for completing partial point clouds across unseen categories without any requirement for training.
arXiv Detail & Related papers (2024-04-10T08:02:17Z)
- NeRF-Det++: Incorporating Semantic Cues and Perspective-aware Depth Supervision for Indoor Multi-View 3D Detection [72.0098999512727]
NeRF-Det has achieved impressive performance in indoor multi-view 3D detection by utilizing NeRF to enhance representation learning.
We present three corresponding solutions: semantic enhancement, perspective-aware sampling, and ordinal depth supervision.
The resulting algorithm, NeRF-Det++, has exhibited appealing performance on the ScanNetV2 and ARKitScenes datasets.
arXiv Detail & Related papers (2024-02-22T11:48:06Z)
- Sparse2Dense: Learning to Densify 3D Features for 3D Object Detection [85.08249413137558]
LiDAR-produced point clouds are the major source for most state-of-the-art 3D object detectors.
Small, distant, or incomplete objects, covered by only a few sparse points, are often hard to detect.
We present Sparse2Dense, a new framework to efficiently boost 3D detection performance by learning to densify point clouds in latent space.
arXiv Detail & Related papers (2022-11-23T16:01:06Z)
- SparseFormer: Attention-based Depth Completion Network [2.9434930072968584]
We introduce a transformer block, SparseFormer, that fuses 3D landmarks with deep visual features to produce dense depth.
The SparseFormer has a global receptive field, making the module especially effective for depth completion with low-density and non-uniform landmarks.
To address the issue of depth outliers among the 3D landmarks, we introduce a trainable refinement module that filters outliers through attention between the sparse landmarks.
arXiv Detail & Related papers (2022-06-09T15:08:24Z)
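The fusion SparseFormer describes, where every pixel attends to all sparse landmarks and thereby gains a global receptive field, can be sketched with standard attention layers. A minimal, hypothetical PyTorch version (layer sizes, the 4-D landmark encoding, and all names are assumptions; the paper's outlier-refinement module is omitted):

```python
import torch
import torch.nn as nn

class LandmarkFusion(nn.Module):
    """Illustrative cross-attention between dense pixel features and sparse
    3D landmarks, in the spirit of SparseFormer (not the authors' code)."""

    def __init__(self, feat_dim=128, n_heads=4):
        super().__init__()
        self.landmark_embed = nn.Linear(4, feat_dim)  # (x, y, z, depth) token
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.depth_head = nn.Linear(feat_dim, 1)

    def forward(self, pixel_feats, landmarks):
        # pixel_feats: (B, C, H, W) deep visual features
        # landmarks:   (B, N, 4) sparse 3D landmarks with depth
        B, C, H, W = pixel_feats.shape
        queries = pixel_feats.flatten(2).transpose(1, 2)  # (B, H*W, C)
        tokens = self.landmark_embed(landmarks)           # (B, N, C)
        # Every pixel attends to every landmark: a global receptive field,
        # so pixels far from any sample still receive depth evidence.
        fused, _ = self.attn(queries, tokens, tokens)
        return self.depth_head(fused).transpose(1, 2).view(B, 1, H, W)
```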
- Learning Semantic Segmentation of Large-Scale Point Clouds with Random Sampling [52.464516118826765]
We introduce RandLA-Net, an efficient and lightweight neural architecture to infer per-point semantics for large-scale point clouds.
The key to our approach is to use random point sampling instead of more complex point selection approaches.
Our RandLA-Net can process 1 million points in a single pass up to 200x faster than existing approaches.
arXiv Detail & Related papers (2021-07-06T05:08:34Z)
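RandLA-Net's central design choice is plain random sampling, which costs O(N) versus the O(N^2) of farthest-point sampling and is what makes single-pass processing of million-point clouds feasible. A minimal sketch of such a downsampling step (the function name and fixed ratio are illustrative):

```python
import torch

def random_downsample(points, feats, ratio=4):
    """Randomly keep N/ratio points, as in RandLA-Net's sampling stage.

    points: (B, N, 3) xyz coordinates
    feats:  (B, N, C) per-point features
    """
    B, N, _ = points.shape
    # Uniform random selection: O(N) per cloud, no pairwise distances needed.
    idx = torch.stack([torch.randperm(N, device=points.device)[: N // ratio]
                       for _ in range(B)])                 # (B, N // ratio)
    gather = lambda t: torch.gather(
        t, 1, idx.unsqueeze(-1).expand(-1, -1, t.shape[-1]))
    return gather(points), gather(feats)
```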
- DELTAS: Depth Estimation by Learning Triangulation And densification of Sparse points [14.254472131009653]
Multi-view stereo (MVS) is the golden mean between the accuracy of active depth sensing and the practicality of monocular depth estimation.
Cost volume based approaches employing 3D convolutional neural networks (CNNs) have considerably improved the accuracy of MVS systems.
We propose an efficient depth estimation approach by first (a) detecting and evaluating descriptors for interest points, then (b) learning to match and triangulate a small set of interest points, and finally (c) densifying this sparse set of 3D points using CNNs.
arXiv Detail & Related papers (2020-03-19T17:56:41Z)
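Step (b) of DELTAS, triangulating a small set of matched interest points, classically reduces to the direct linear transform: each view contributes two linear constraints on the homogeneous 3D point, and the solution is the smallest singular vector. A minimal NumPy sketch of that classical core (DELTAS itself learns this step end-to-end, so this is illustrative only):

```python
import numpy as np

def triangulate_point(projs, pts2d):
    """DLT triangulation of one interest point observed in several views.

    projs: list of 3x4 camera projection matrices (NumPy arrays)
    pts2d: list of (x, y) pixel observations, one per view
    """
    A = []
    for P, (x, y) in zip(projs, pts2d):
        A.append(x * P[2] - P[0])   # each view contributes two
        A.append(y * P[2] - P[1])   # linear constraints on X
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]                      # null-space direction of A
    return X[:3] / X[3]             # homogeneous -> Euclidean
```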