Learning Continuous Implicit Field with Local Distance Indicator for
Arbitrary-Scale Point Cloud Upsampling
- URL: http://arxiv.org/abs/2312.15133v1
- Date: Sat, 23 Dec 2023 01:52:14 GMT
- Authors: Shujuan Li, Junsheng Zhou, Baorui Ma, Yu-Shen Liu, Zhizhong Han
- Abstract summary: Point cloud upsampling aims to generate dense and uniformly distributed point sets from a sparse point cloud.
Previous methods typically split a sparse point cloud into several local patches, upsample patch points, and merge all upsampled patches.
We propose a novel approach that learns an unsigned distance field guided by local priors for point cloud upsampling.
- Score: 55.05706827963042
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud upsampling aims to generate dense and uniformly distributed point
sets from a sparse point cloud, which plays a critical role in 3D computer
vision. Previous methods typically split a sparse point cloud into several
local patches, upsample patch points, and merge all upsampled patches. However,
these methods often produce holes, outliers, or nonuniformity, because the
splitting and merging process does not maintain consistency among local
patches. To address these issues, we propose a novel approach that learns an
unsigned distance field guided by local priors for point cloud upsampling.
Specifically, we train a local distance indicator (LDI) that predicts the
unsigned distance from a query point to a local implicit surface. Utilizing the
learned LDI, we learn an unsigned distance field to represent the sparse point
cloud with patch consistency. At inference time, we randomly sample queries
around the sparse point cloud, and project these query points onto the
zero-level set of the learned implicit field to generate a dense point cloud.
Since the implicit field is naturally continuous, it inherently supports
arbitrary-scale upsampling without retraining for each scale. We conduct comprehensive experiments on both
synthetic data and real scans, and report state-of-the-art results under widely
used benchmarks.
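The inference procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: a toy analytic unsigned distance field (distance to a unit sphere) stands in for the learned implicit field, and queries sampled around the surface are projected onto its zero-level set by moving each query along the negative normalized gradient by the predicted distance.

```python
import numpy as np

def udf_sphere(q, radius=1.0):
    # Toy unsigned distance field: distance to a sphere of given radius.
    # Stands in for the learned implicit field (LDI-guided UDF) in the paper.
    return np.abs(np.linalg.norm(q, axis=-1) - radius)

def numerical_grad(f, q, eps=1e-4):
    # Central-difference gradient of the scalar field at each query point.
    grad = np.zeros_like(q)
    for i in range(q.shape[-1]):
        dq = np.zeros(q.shape[-1])
        dq[i] = eps
        grad[..., i] = (f(q + dq) - f(q - dq)) / (2 * eps)
    return grad

def project_to_surface(f, queries, steps=10):
    # Move each query along the negative gradient direction by the
    # predicted unsigned distance, iterating toward the zero-level set.
    q = queries.copy()
    for _ in range(steps):
        d = f(q)[..., None]                               # unsigned distance
        g = numerical_grad(f, q)
        g /= np.linalg.norm(g, axis=-1, keepdims=True) + 1e-12
        q = q - d * g                                     # projection step
    return q

rng = np.random.default_rng(0)
# Randomly sample queries around the (sparse) input surface:
# points on the unit sphere, perturbed off-surface by Gaussian noise.
queries = rng.normal(size=(2000, 3))
queries /= np.linalg.norm(queries, axis=1, keepdims=True)
queries += 0.1 * rng.normal(size=queries.shape)

dense = project_to_surface(udf_sphere, queries)
residual = np.abs(np.linalg.norm(dense, axis=1) - 1.0).max()
print(residual)  # near-zero deviation from the surface
```

Because the field is defined continuously in space, any number of queries can be sampled and projected, which is what makes the upsampling rate arbitrary without retraining.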
Related papers
- iPUNet:Iterative Cross Field Guided Point Cloud Upsampling [20.925921503694894]
Point clouds acquired by 3D scanning devices are often sparse, noisy, and non-uniform, causing a loss of geometric features.
We present a learning-based point upsampling method, iPUNet, which generates dense and uniform points at arbitrary ratios.
We demonstrate that iPUNet is robust to handle noisy and non-uniformly distributed inputs, and outperforms state-of-the-art point cloud upsampling methods.
arXiv Detail & Related papers (2023-10-13T13:24:37Z)
- Grad-PU: Arbitrary-Scale Point Cloud Upsampling via Gradient Descent with Learned Distance Functions [77.32043242988738]
We propose a new framework for accurate point cloud upsampling that supports arbitrary upsampling rates.
Our method first interpolates the low-res point cloud according to a given upsampling rate.
arXiv Detail & Related papers (2023-04-24T06:36:35Z)
- Density-preserving Deep Point Cloud Compression [72.0703956923403]
We propose a novel deep point cloud compression method that preserves local density information.
Our method works in an auto-encoder fashion: the encoder downsamples the points and learns point-wise features, while the decoder upsamples the points using these features.
arXiv Detail & Related papers (2022-04-27T03:42:15Z)
- Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors [52.25114448281418]
Current methods are able to reconstruct surfaces by learning Signed Distance Functions (SDFs) from single point clouds without ground truth signed distances or point normals.
We propose to reconstruct highly accurate surfaces from sparse point clouds with an on-surface prior.
Our method can learn SDFs from a single sparse point cloud without ground truth signed distances or point normals.
arXiv Detail & Related papers (2022-04-22T09:45:20Z)
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- SSPU-Net: Self-Supervised Point Cloud Upsampling via Differentiable Rendering [21.563862632172363]
We propose a self-supervised point cloud upsampling network (SSPU-Net) to generate dense point clouds without using ground truth.
To achieve this, we exploit the consistency between the input sparse point cloud and generated dense point cloud for the shapes and rendered images.
arXiv Detail & Related papers (2021-08-01T13:26:01Z)
- "Zero Shot" Point Cloud Upsampling [4.737519767218666]
We present an unsupervised approach that upsamples point clouds at a holistic level, referred to as "Zero Shot" Point Cloud Upsampling (ZSPU).
Our approach relies solely on the internal information of a particular point cloud, without patch-based processing in either the self-training or testing phase.
ZSPU achieves superior qualitative results on shapes with complex local details or high curvatures.
arXiv Detail & Related papers (2021-06-25T17:06:18Z)
- PointLIE: Locally Invertible Embedding for Point Cloud Sampling and Recovery [35.353458457283544]
Point Cloud Sampling and Recovery (PCSR) is critical for massive real-time point cloud collection and processing.
We propose a novel Locally Invertible Embedding for point cloud adaptive sampling and recovery (PointLIE).
PointLIE unifies point cloud sampling and upsampling to one single framework through bi-directional learning.
arXiv Detail & Related papers (2021-04-30T05:55:59Z)
- Self-Sampling for Neural Point Cloud Consolidation [83.31236364265403]
We introduce a novel technique for neural point cloud consolidation which learns from only the input point cloud.
We repeatedly self-sample the input point cloud with global subsets that are used to train a deep neural network.
We demonstrate the ability to consolidate point sets from a variety of shapes, while eliminating outliers and noise.
arXiv Detail & Related papers (2020-08-14T17:16:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.