3D Compression Using Neural Fields
- URL: http://arxiv.org/abs/2311.13009v1
- Date: Tue, 21 Nov 2023 21:36:09 GMT
- Title: 3D Compression Using Neural Fields
- Authors: Janis Postels, Yannick Strümpler, Klara Reichard, Luc Van Gool,
Federico Tombari
- Abstract summary: We propose a novel NF-based compression algorithm for 3D data.
We demonstrate that our method excels at geometry compression on 3D point clouds as well as meshes.
It is straightforward to extend our compression algorithm to compress both the geometry and attributes (e.g. color) of 3D data.
- Score: 90.24458390334203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Fields (NFs) have gained momentum as a tool for compressing various
data modalities - e.g. images and videos. This work leverages previous advances
and proposes a novel NF-based compression algorithm for 3D data. We derive two
versions of our approach - one tailored to watertight shapes based on Signed
Distance Fields (SDFs) and, more generally, one for arbitrary non-watertight
shapes using Unsigned Distance Fields (UDFs). We demonstrate that our method
excels at geometry compression on 3D point clouds as well as meshes. Moreover,
we show that, due to the NF formulation, it is straightforward to extend our
compression algorithm to compress both the geometry and attributes (e.g. color)
of 3D data.
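
To make the NF formulation more concrete, below is a minimal sketch of the generic idea: overfit a small coordinate MLP to unsigned distance values of a shape and store its quantized weights as the compressed representation. This is an illustration under stated assumptions (PyTorch; a `points` array of surface samples; hypothetical helper names), not the authors' actual training, quantization, or entropy-coding pipeline.

# Minimal sketch (assumptions: PyTorch is available; `points` is an (N, 3)
# float32 array of surface samples normalized to [-1, 1]^3). This is not the
# paper's pipeline; it only illustrates overfitting a small MLP to an unsigned
# distance field and storing its quantized weights.
import numpy as np
import torch
import torch.nn as nn

def make_field(hidden=64, layers=4):
    # Small coordinate MLP mapping a 3D query point to one distance value.
    mods, d_in = [], 3
    for _ in range(layers):
        mods += [nn.Linear(d_in, hidden), nn.ReLU()]
        d_in = hidden
    mods.append(nn.Linear(d_in, 1))
    return nn.Sequential(*mods)

def fit_udf(points, steps=2000, batch=4096, lr=1e-3):
    # Overfit the MLP to unsigned distances from random queries to `points`.
    pts = torch.as_tensor(points, dtype=torch.float32)
    net = make_field()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        q = torch.rand(batch, 3) * 2 - 1              # random queries in [-1, 1]^3
        d = torch.cdist(q, pts).min(dim=1).values     # ground-truth unsigned distance
        loss = ((net(q).squeeze(-1) - d) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net

def compress_weights(net, bits=8):
    # Uniformly quantize each weight tensor; the quantized integers plus the
    # per-tensor ranges form the compressed payload.
    payload = []
    for p in net.parameters():
        w = p.detach().cpu().numpy()
        lo, hi = float(w.min()), float(w.max())
        q = np.round((w - lo) / max(hi - lo, 1e-12) * (2 ** bits - 1)).astype(np.uint8)
        payload.append((q, lo, hi))
    return payload

Decoding would rebuild the MLP from the de-quantized weights and extract a surface from the field (e.g. via marching cubes for an SDF); the paper's actual method additionally handles non-watertight shapes via UDFs and compresses attributes, which this sketch omits.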
Related papers
- GaussianAnything: Interactive Point Cloud Latent Diffusion for 3D Generation [75.39457097832113]
This paper introduces a novel 3D generation framework, offering scalable, high-quality 3D generation with an interactive Point Cloud-structured Latent space.
Our framework employs a Variational Autoencoder with multi-view posed RGB-D(epth)-N(ormal) renderings as input, using a unique latent space design that preserves 3D shape information.
The proposed method, GaussianAnything, supports multi-modal conditional 3D generation, allowing for point cloud, caption, and single/multi-view image inputs.
arXiv Detail & Related papers (2024-11-12T18:59:32Z) - Point Cloud Compression with Bits-back Coding [32.9521748764196]
This paper focuses on using a deep learning-based probabilistic model to estimate the Shannon entropy of the point cloud information.
Once the entropy of the point cloud dataset is estimated, we use the learned CVAE model to compress the geometric attributes of the point clouds.
The novelty of our method lies in using bits-back coding together with the learned latent variable model of the CVAE to compress the point cloud data.
arXiv Detail & Related papers (2024-10-09T06:34:48Z) - UDiFF: Generating Conditional Unsigned Distance Fields with Optimal Wavelet Diffusion [51.31220416754788]
We present UDiFF, a 3D diffusion model for unsigned distance fields (UDFs) which is capable of generating textured 3D shapes with open surfaces from text conditions or unconditionally.
Our key idea is to generate UDFs in the spatial-frequency domain with an optimal wavelet transformation, which produces a compact representation space for UDF generation.
arXiv Detail & Related papers (2024-04-10T09:24:54Z) - 3D Point Cloud Compression with Recurrent Neural Network and Image
Compression Methods [0.0]
Storing and transmitting LiDAR point cloud data is essential for many autonomous vehicle (AV) applications.
Due to the sparsity and unordered structure of the data, it is difficult to compress point cloud data to a low volume.
We propose a new 3D-to-2D transformation which allows compression algorithms to efficiently exploit spatial correlations.
arXiv Detail & Related papers (2024-02-18T19:08:19Z) - GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-guided
Distance Representation [73.77505964222632]
We present a learning-based method, namely GeoUDF, to tackle the problem of reconstructing a discrete surface from a sparse point cloud.
To be specific, we propose a geometry-guided learning method for UDF and its gradient estimation.
To extract triangle meshes from the predicted UDF, we propose a customized edge-based marching cube module.
arXiv Detail & Related papers (2022-11-30T06:02:01Z) - GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves state-of-the-art performance, especially when only a few propagation steps are used.
arXiv Detail & Related papers (2022-10-19T17:56:03Z) - GRASP-Net: Geometric Residual Analysis and Synthesis for Point Cloud
Compression [16.98171403698783]
We propose a heterogeneous approach with deep learning for lossy point cloud geometry compression.
Specifically, a point-based network is applied to convert the erratic local details to latent features residing on the coarse point cloud.
arXiv Detail & Related papers (2022-09-09T17:09:02Z) - T4DT: Tensorizing Time for Learning Temporal 3D Visual Data [19.418308324435916]
We show that low-rank tensor compression yields an extremely compact representation for storing and querying time-varying signed distance functions.
Unlike existing iterative learning-based approaches such as DeepSDF and NeRF, our method uses a closed-form algorithm with theoretical guarantees; a generic low-rank sketch of this idea appears after this list.
arXiv Detail & Related papers (2022-08-02T12:57:08Z) - Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We propose a primal-dual framework, drawn from the graph-neural-network literature, for triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z) - 3D Shape Segmentation with Geometric Deep Learning [2.512827436728378]
We propose a neural-network-based approach that produces 3D augmented views of the 3D shape, splitting the whole segmentation into sub-segmentation problems.
We validate our approach using 3D shapes of publicly available datasets and of real objects that are reconstructed using photogrammetry techniques.
arXiv Detail & Related papers (2020-02-02T14:11:16Z)
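
The T4DT entry above motivates a brief illustration of the low-rank idea for time-varying distance fields. The sketch below is a generic truncated-SVD factorization of a (time x space) unfolding, assuming a dense (T, X, Y, Z) SDF grid named `sdf`; it is not the paper's tensor-train construction.

# Minimal sketch (assumption: `sdf` is a dense (T, X, Y, Z) NumPy array with one
# signed-distance frame per time step). This is not T4DT's tensor-train method;
# it only illustrates the generic low-rank compression idea via a truncated SVD.
import numpy as np

def compress_sdf_sequence(sdf, rank=16):
    # Unfold the 4D grid into a (time x voxels) matrix and keep `rank` SVD factors.
    t = sdf.shape[0]
    m = sdf.reshape(t, -1)
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return u[:, :rank], s[:rank], vt[:rank]

def query(u, s, vt, t, voxel_index):
    # Reconstruct one SDF value without materializing the full grid.
    return float(u[t] * s @ vt[:, voxel_index])

# Example: factors = compress_sdf_sequence(sdf, rank=16)
#          value = query(*factors, t=3, voxel_index=12345)

A rank-r factorization stores roughly r * (T + X*Y*Z) numbers instead of T * X*Y*Z, and individual values can be queried without reconstructing the full grid.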
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.