Taylor3DNet: Fast 3D Shape Inference With Landmark Points Based Taylor
Series
- URL: http://arxiv.org/abs/2201.06845v2
- Date: Sun, 16 Jul 2023 09:28:11 GMT
- Title: Taylor3DNet: Fast 3D Shape Inference With Landmark Points Based Taylor
Series
- Authors: Yuting Xiao, Jiale Xu, Shenghua Gao
- Abstract summary: We propose Taylor3DNet to accelerate the inference of implicit shape representations.
Taylor3DNet exploits a set of discrete landmark points and their corresponding Taylor series coefficients to represent the implicit field of a 3D shape.
Based on this efficient representation, our Taylor3DNet achieves a significantly faster inference speed than classical network-based implicit functions.
- Score: 34.4312460015344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Benefiting from the continuous representation ability, deep implicit
functions can represent a shape at infinite resolution. However, extracting
a high-resolution iso-surface from an implicit function requires
forward-propagating a network with a large number of parameters for numerous
query points, which limits the generation speed. Inspired by the Taylor
series, we propose Taylor3DNet to accelerate the inference of implicit shape
representations. Taylor3DNet exploits a set of discrete landmark points and
their corresponding Taylor series coefficients to represent the implicit field
of a 3D shape, and the number of landmark points is independent of the
resolution of the iso-surface extraction. Once the coefficients corresponding
to the landmark points are predicted, the network evaluation for each query
point can be simplified as a low-order Taylor series calculation with several
nearest landmark points. Based on this efficient representation, our
Taylor3DNet achieves a significantly faster inference speed than classical
network-based implicit functions. We evaluate our approach on reconstruction
tasks with various input types, and the results demonstrate that our approach
improves the inference speed by a large margin without sacrificing performance
relative to state-of-the-art baselines.
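As a concrete illustration of the evaluation scheme the abstract describes, the sketch below approximates an implicit field value at a query point from a set of landmark points with precomputed Taylor coefficients: it gathers the few nearest landmarks, evaluates a low-order Taylor expansion around each, and blends the results. The second-order truncation, the inverse-distance blending weights, and the random placeholder landmarks and coefficients are assumptions for illustration only; in Taylor3DNet the coefficients would be predicted by a network from the input observation.

    import numpy as np

    def taylor_field(query, landmarks, c0, c1, c2, k=4, eps=1e-8):
        """Approximate an implicit field value at `query` (shape (3,)).

        landmarks: (N, 3)    landmark positions
        c0:        (N,)      zeroth-order coefficients (field value at each landmark)
        c1:        (N, 3)    first-order coefficients  (gradient at each landmark)
        c2:        (N, 3, 3) second-order coefficients (Hessian at each landmark)
        """
        # Pick the k landmarks closest to the query point.
        d = np.linalg.norm(landmarks - query, axis=1)
        idx = np.argsort(d)[:k]

        # Low-order Taylor expansion around each selected landmark.
        offsets = query - landmarks[idx]  # (k, 3)
        vals = (c0[idx]
                + np.einsum('kd,kd->k', c1[idx], offsets)
                + 0.5 * np.einsum('kd,kde,ke->k', offsets, c2[idx], offsets))

        # Blend the per-landmark estimates with inverse-distance weights.
        w = 1.0 / (d[idx] + eps)
        return float(np.sum(w * vals) / np.sum(w))

    # Toy usage with random placeholder landmarks and coefficients.
    rng = np.random.default_rng(0)
    n = 128
    landmarks = rng.uniform(-1.0, 1.0, size=(n, 3))
    c0, c1, c2 = rng.normal(size=n), rng.normal(size=(n, 3)), rng.normal(size=(n, 3, 3))
    print(taylor_field(np.array([0.1, -0.2, 0.3]), landmarks, c0, c1, c2))

Because each query reduces to a handful of vector operations over a few nearest landmarks instead of a full network forward pass, the cost of dense iso-surface extraction grows with the query-grid resolution but not with the network size, which is the source of the speed-up the abstract claims.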
Related papers
- NumGrad-Pull: Numerical Gradient Guided Tri-plane Representation for Surface Reconstruction from Point Clouds [41.723434094309184]
Reconstructing continuous surfaces from unoriented and unordered 3D points is a fundamental challenge in computer vision and graphics.
Recent advancements address this problem by training neural signed distance functions to pull 3D location queries to their closest points on a surface.
We introduce NumGrad-Pull, leveraging the representation capability of tri-plane structures to accelerate the learning of signed distance functions.
arXiv Detail & Related papers (2024-11-26T12:54:30Z)
- MultiPull: Detailing Signed Distance Functions by Pulling Multi-Level Queries at Multi-Step [48.812388649469106]
We propose a novel method to learn multi-scale implicit fields from raw point clouds by optimizing accurate SDFs from coarse to fine.
Our experiments on widely used object and scene benchmarks demonstrate that our method outperforms the state-of-the-art methods in surface reconstruction.
arXiv Detail & Related papers (2024-11-02T10:50:22Z)
- GridPull: Towards Scalability in Learning Implicit Representations from 3D Point Clouds [60.27217859189727]
We propose GridPull to improve the efficiency of learning implicit representations from large scale point clouds.
Our novelty lies in the fast inference of a discrete distance field defined on grids without using any neural components.
We use uniform grids for a fast grid search to localize sampled queries, and organize surface points in a tree structure to speed up the calculation of distances to the surface.
arXiv Detail & Related papers (2023-08-25T04:52:52Z)
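The GridPull entry above hinges on evaluating a distance field on a uniform grid without any neural components. The sketch below shows only the generic version of that idea (a KD-tree over surface samples queried at uniform grid vertices to obtain an unsigned distance field); it is an assumption-laden illustration, not GridPull's actual algorithm, and the `surface_points` here are random placeholders.

    import numpy as np
    from scipy.spatial import cKDTree

    # Placeholder surface samples; in practice these come from the input point cloud.
    rng = np.random.default_rng(0)
    surface_points = rng.uniform(-1.0, 1.0, size=(10000, 3))

    # Uniform grid of query locations covering the bounding volume.
    res = 64
    axis = np.linspace(-1.0, 1.0, res)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing='ij'), axis=-1).reshape(-1, 3)

    # A KD-tree over the surface points makes nearest-point queries cheap, giving an
    # unsigned distance value at every grid vertex without running any network.
    tree = cKDTree(surface_points)
    distances, _ = tree.query(grid, k=1)
    distance_field = distances.reshape(res, res, res)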
- Learning Smooth Neural Functions via Lipschitz Regularization [92.42667575719048]
We introduce a novel regularization designed to encourage smooth latent spaces in neural fields.
Compared with prior Lipschitz regularized networks, ours is computationally fast and can be implemented in four lines of code.
arXiv Detail & Related papers (2022-02-16T21:24:54Z)
- LatticeNet: Fast Spatio-Temporal Point Cloud Segmentation Using Permutohedral Lattices [27.048998326468688]
Deep convolutional neural networks (CNNs) have shown outstanding performance in the task of semantically segmenting images.
Here, we propose LatticeNet, a novel approach for 3D semantic segmentation, which takes raw point clouds as input.
We present results of 3D segmentation on multiple datasets where our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-08-09T10:17:27Z)
- Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.
arXiv Detail & Related papers (2021-06-10T14:31:54Z)
- Shape As Points: A Differentiable Poisson Solver [118.12466580918172]
In this paper, we introduce a differentiable point-to-mesh layer using a differentiable formulation of Poisson Surface Reconstruction (PSR).
The differentiable PSR layer allows us to efficiently and differentiably bridge the explicit 3D point representation with the 3D mesh via the implicit indicator field.
Compared to neural implicit representations, our Shape-As-Points (SAP) model is more interpretable, lightweight, and accelerates inference time by one order of magnitude.
arXiv Detail & Related papers (2021-06-07T09:28:38Z)
- Convolutional Occupancy Networks [88.48287716452002]
We propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes.
By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space.
We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
arXiv Detail & Related papers (2020-03-10T10:17:07Z)