High-fidelity 3D Model Compression based on Key Spheres
- URL: http://arxiv.org/abs/2201.07486v2
- Date: Thu, 20 Jan 2022 04:36:35 GMT
- Title: High-fidelity 3D Model Compression based on Key Spheres
- Authors: Yuanzhan Li, Yuqi Liu, Yujie Lu, Siyu Zhang, Shen Cai and Yanting Zhang
- Abstract summary: We propose an SDF prediction network using explicit key spheres as input.
Our method achieves high-fidelity, high-compression 3D object coding and reconstruction.
- Score: 6.59007277780362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, neural signed distance function (SDF) has become one of the
most effective representation methods for 3D models. By learning continuous
SDFs in 3D space, neural networks can predict the distance from a given query
space point to its closest object surface, whose positive and negative signs
denote inside and outside of the object, respectively. Training a specific
network for each 3D model, which individually embeds its shape, can realize
compressed representation of objects by storing fewer network (and possibly
latent) parameters. Consequently, reconstruction through network inference and
surface recovery can be achieved. In this paper, we propose an SDF prediction
network using explicit key spheres as input. Key spheres are extracted from the
internal space of objects, whose centers either have relatively larger SDF
values (sphere radii), or are located at essential positions. By inputting the
spatial information of multiple spheres which imply different local shapes, the
proposed method can significantly improve the reconstruction accuracy with a
negligible storage cost. Compared to previous works, our method achieves
high-fidelity, high-compression 3D object coding and reconstruction.
Experiments conducted on three datasets verify the superior performance of our
method.
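The abstract's core ideas (the inside/outside sign convention, and key spheres acting as a coarse shape prior refined by a small per-model network) can be illustrated with a minimal sketch. Note this is an illustrative toy, not the paper's actual architecture: the function names, the sphere-union prior, and the tiny residual MLP below are all assumptions for demonstration.

```python
import numpy as np

def sphere_union_sdf(query, centers, radii):
    """Coarse SDF prior from key spheres: signed distance to the union of
    spheres. Negative inside the object (matching the abstract's sign
    convention), positive outside."""
    dists = np.linalg.norm(centers - query[None, :], axis=1) - radii
    return float(dists.min())

class TinySDFNet:
    """Illustrative per-model MLP refining the sphere prior. Storing only
    this small set of weights per object is what yields compression."""
    def __init__(self, seed=0, hidden=16):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (3, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def residual(self, query):
        # One hidden layer with tanh; outputs a small correction to the prior.
        h = np.tanh(query @ self.w1 + self.b1)
        return float(h @ self.w2 + self.b2)

def predict_sdf(query, centers, radii, net):
    """Final SDF estimate = key-sphere prior + learned residual."""
    return sphere_union_sdf(query, centers, radii) + net.residual(query)

# Two key spheres standing in for a toy object.
centers = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
radii = np.array([1.0, 0.5])
net = TinySDFNet()

inside = sphere_union_sdf(np.array([0.0, 0.0, 0.0]), centers, radii)   # -1.0 (inside)
outside = sphere_union_sdf(np.array([3.0, 0.0, 0.0]), centers, radii)  #  1.0 (outside)
```

In a real system the residual network would be trained per object against ground-truth SDF samples, and the surface would be recovered afterwards, e.g. by running marching cubes on the predicted zero level set.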
Related papers
- Learning Unsigned Distance Fields from Local Shape Functions for 3D Surface Reconstruction [42.840655419509346]
This paper presents a novel neural framework, LoSF-UDF, for reconstructing surfaces from 3D point clouds by leveraging local shape functions to learn UDFs.
We observe that 3D shapes manifest simple patterns within localized areas, prompting us to create a training dataset of point cloud patches.
Our approach learns features within a specific radius around each query point and utilizes an attention mechanism to focus on the crucial features for UDF estimation.
arXiv Detail & Related papers (2024-07-01T14:39:03Z)
- ClusteringSDF: Self-Organized Neural Implicit Surfaces for 3D Decomposition [32.99080359375706]
ClusteringSDF is a novel approach to achieve both segmentation and reconstruction in 3D via the neural implicit surface representation.
We introduce a highly efficient clustering mechanism for lifting 2D labels to 3D, and experimental results on challenging scenes from the ScanNet and Replica datasets show that ClusteringSDF achieves competitive performance.
arXiv Detail & Related papers (2024-03-21T17:59:16Z)
- DDF-HO: Hand-Held Object Reconstruction via Conditional Directed Distance Field [82.81337273685176]
DDF-HO is a novel approach leveraging Directed Distance Field (DDF) as the shape representation.
We randomly sample multiple rays and collect local to global geometric features for them by introducing a novel 2D ray-based feature aggregation scheme.
Experiments on synthetic and real-world datasets demonstrate that DDF-HO consistently outperforms all baseline methods by a large margin.
arXiv Detail & Related papers (2023-08-16T09:06:32Z)
- 3D Shapes Local Geometry Codes Learning with SDF [8.37542758486152]
A signed distance function (SDF) as the 3D shape description is one of the most effective approaches to represent 3D geometry for rendering and reconstruction.
In this paper, we consider the degeneration problem in reconstruction that arises from the reduced capacity of the DeepSDF model.
We propose Local Geometry Code Learning (LGCL), a model that improves the original DeepSDF results by learning from a local shape geometry.
arXiv Detail & Related papers (2021-08-19T09:56:03Z)
- Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.
arXiv Detail & Related papers (2021-06-10T14:31:54Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces [68.12457459590921]
Reconstructing continuous surfaces from 3D point clouds is a fundamental operation in 3D geometry processing.
We introduce Neural-Pull, a new approach that is simple and leads to high-quality SDFs.
arXiv Detail & Related papers (2020-11-26T23:18:10Z)
- Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of effective samples is relatively small in 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3D parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
- Generative Sparse Detection Networks for 3D Single-shot Object Detection [43.91336826079574]
3D object detection has been widely studied due to its potential applicability to many promising areas such as robotics and augmented reality.
Yet, the sparse nature of the 3D data poses unique challenges to this task.
We propose Generative Sparse Detection Network (GSDN), a fully-convolutional single-shot sparse detection network.
arXiv Detail & Related papers (2020-06-22T15:54:24Z)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work in 3D object reconstruction in ShapeNet, and obtain significantly more accurate 3D human reconstructions.
arXiv Detail & Related papers (2020-03-03T11:14:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.