Spotlights: Probing Shapes from Spherical Viewpoints
- URL: http://arxiv.org/abs/2205.12564v1
- Date: Wed, 25 May 2022 08:23:18 GMT
- Title: Spotlights: Probing Shapes from Spherical Viewpoints
- Authors: Jiaxin Wei, Lige Liu, Ran Cheng, Wenqing Jiang, Minghao Xu, Xinyu
Jiang, Tao Sun, Soren Schwertfeger, Laurent Kneip
- Abstract summary: We propose a novel sampling model called Spotlights to represent a 3D shape as a compact 1D array of depth values.
It simulates the configuration of cameras evenly distributed on a sphere, where each virtual camera casts light rays from its principal point through sample points on a small concentric spherical cap to probe for the possible intersections with the object surrounded by the sphere.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent years have witnessed the surge of learned representations that
directly build upon point clouds. Though becoming increasingly expressive, most
existing representations still struggle to generate ordered point sets.
Inspired by spherical multi-view scanners, we propose a novel sampling model
called Spotlights to represent a 3D shape as a compact 1D array of depth
values. It simulates the configuration of cameras evenly distributed on a
sphere, where each virtual camera casts light rays from its principal point
through sample points on a small concentric spherical cap to probe for the
possible intersections with the object surrounded by the sphere. The structured
point cloud is hence given implicitly as a function of depths. We provide a
detailed geometric analysis of this new sampling scheme and prove its
effectiveness in the context of the point cloud completion task. Experimental
results on both synthetic and real data demonstrate that our method achieves
competitive accuracy and consistency while having a significantly reduced
computational cost. Furthermore, we show superior performance on the downstream
point cloud registration task over state-of-the-art completion methods.
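The sampling model described in the abstract lends itself to a short geometric sketch. Below is a minimal NumPy illustration under stated assumptions: cameras are placed by a golden-spiral (Fibonacci) scheme, ray directions are drawn uniformly from a small spherical cap around the inward axis, and a centered sphere stands in for the probed object. The function names (`fibonacci_sphere`, `cap_directions`, `ray_sphere_depth`), the cap-sampling strategy, and all parameters are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def fibonacci_sphere(n):
    # Distribute n virtual camera positions roughly evenly on the unit
    # sphere via the golden-spiral placement (an assumed scheme; the
    # paper's exact distribution may differ).
    i = np.arange(n)
    golden = np.pi * (3.0 - np.sqrt(5.0))      # golden angle
    y = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - y * y)
    return np.stack([np.cos(golden * i) * r, y, np.sin(golden * i) * r], axis=1)

def cap_directions(cam, half_angle, k, seed=0):
    # Sample k unit ray directions inside a small spherical cap centered
    # on the inward axis from camera position `cam` toward the origin.
    axis = -cam / np.linalg.norm(cam)
    # Build an orthonormal basis (u, v) perpendicular to the axis.
    tmp = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, tmp)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    rng = np.random.default_rng(seed)
    cos_t = rng.uniform(np.cos(half_angle), 1.0, k)  # uniform over the cap area
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    psi = rng.uniform(0.0, 2.0 * np.pi, k)
    return (cos_t[:, None] * axis
            + sin_t[:, None] * (np.cos(psi)[:, None] * u + np.sin(psi)[:, None] * v))

def ray_sphere_depth(origin, dirs, radius=0.5):
    # Depth to the first intersection of each unit-direction ray with a
    # sphere of given radius centered at the origin (the stand-in object);
    # np.inf marks a miss. Solves t^2 + 2bt + c = 0 with b = o.d, c = |o|^2 - r^2.
    b = dirs @ origin
    c = origin @ origin - radius ** 2
    disc = b * b - c
    t = -b - np.sqrt(np.maximum(disc, 0.0))
    return np.where((disc >= 0.0) & (t > 0.0), t, np.inf)
```

Concatenating the depth arrays of all cameras yields the compact 1D representation the abstract describes: the structured point cloud is recovered implicitly as `cam + t * dir` for each finite depth `t`.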
Related papers
- PIVOT-Net: Heterogeneous Point-Voxel-Tree-based Framework for Point
Cloud Compression [8.778300313732027]
We propose a heterogeneous point cloud compression (PCC) framework.
We unify typical point cloud representations -- point-based, voxel-based, and tree-based representations -- and their associated backbones.
We augment the framework with a proposed context-aware upsampling for decoding and an enhanced voxel transformer for feature aggregation.
arXiv Detail & Related papers (2024-02-11T16:57:08Z)
- PDF: Point Diffusion Implicit Function for Large-scale Scene Neural
Representation [24.751481680565803]
We propose a Point implicit Function, PDF, for large-scale scene neural representation.
The core of our method is a large-scale point cloud super-resolution diffusion module.
The region sampling based on Mip-NeRF 360 is employed to model the background representation.
arXiv Detail & Related papers (2023-11-03T08:19:47Z)
- Point2Pix: Photo-Realistic Point Cloud Rendering via Neural Radiance
Fields [63.21420081888606]
Recent Radiance Fields and extensions are proposed to synthesize realistic images from 2D input.
We present Point2Pix, a novel point renderer that links 3D sparse point clouds with 2D dense image pixels.
arXiv Detail & Related papers (2023-03-29T06:26:55Z)
- Ponder: Point Cloud Pre-training via Neural Rendering [93.34522605321514]
We propose a novel approach to self-supervised learning of point cloud representations via differentiable neural rendering.
The learned point-cloud representation can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but also low-level tasks like 3D reconstruction and image rendering.
arXiv Detail & Related papers (2022-12-31T08:58:39Z)
- Shape Completion with Points in the Shadow [13.608498759468024]
Single-view point cloud completion aims to recover the full geometry of an object based on only limited observation.
Inspired by the classic shadow volume technique in computer graphics, we propose a new method to reduce the solution space effectively.
arXiv Detail & Related papers (2022-09-17T14:58:56Z)
- Neural Point Light Fields [80.98651520818785]
We introduce Neural Point Light Fields that represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are a function of the ray direction and the local point feature neighborhood, allowing us to interpolate the light field conditioned on the training images without dense object coverage and parallax.
arXiv Detail & Related papers (2021-12-02T18:20:10Z)
- Concentric Spherical GNN for 3D Representation Learning [53.45704095146161]
We propose a novel multi-resolution convolutional architecture for learning over concentric spherical feature maps.
Our hierarchical architecture is based on alternately learning to incorporate both intra-sphere and inter-sphere information.
We demonstrate the effectiveness of our approach in improving state-of-the-art performance on 3D classification tasks with rotated data.
arXiv Detail & Related papers (2021-03-18T19:05:04Z)
- Semantic Segmentation for Real Point Cloud Scenes via Bilateral
Augmentation and Adaptive Fusion [38.05362492645094]
Real point cloud scenes can intuitively capture complex surroundings in the real world, but the raw nature of 3D data makes them very challenging for machine perception.
We concentrate on the essential visual task, semantic segmentation, for large-scale point cloud data collected in reality.
By comparing with state-of-the-art networks on three different benchmarks, we demonstrate the effectiveness of our network.
arXiv Detail & Related papers (2021-03-12T04:13:20Z)
- 3D Object Classification on Partial Point Clouds: A Practical
Perspective [91.81377258830703]
A point cloud is a popular shape representation adopted in 3D object classification.
This paper introduces a practical setting to classify partial point clouds of object instances under any poses.
A novel algorithm in an alignment-classification manner is proposed in this paper.
arXiv Detail & Related papers (2020-12-18T04:00:56Z)
- SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine
Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z)
- RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction [19.535169371240073]
We introduce RfD-Net that jointly detects and reconstructs dense object surfaces directly from point clouds.
We decouple the instance reconstruction into global object localization and local shape prediction.
Our approach consistently outperforms the state of the art and improves mesh IoU by over 11 points in object reconstruction.
arXiv Detail & Related papers (2020-11-30T12:58:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.