Hierarchical Neural Surfaces for 3D Mesh Compression
- URL: http://arxiv.org/abs/2512.15985v1
- Date: Wed, 17 Dec 2025 21:32:04 GMT
- Title: Hierarchical Neural Surfaces for 3D Mesh Compression
- Authors: Sai Karthikey Pentapati, Gregoire Phillips, Alan Bovik
- Abstract summary: Implicit Neural Representations (INRs) have been demonstrated to achieve state-of-the-art compression of a broad range of modalities such as images, videos, 3D surfaces, and audio. Most studies have focused on building neural counterparts of traditional implicit representations of 3D geometries, such as signed distance functions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implicit Neural Representations (INRs) have been demonstrated to achieve state-of-the-art compression of a broad range of modalities such as images, videos, 3D surfaces, and audio. Most studies have focused on building neural counterparts of traditional implicit representations of 3D geometries, such as signed distance functions. However, the triangle mesh-based representation of geometry remains the most widely used representation in the industry, while building INRs capable of generating them has been sparsely studied. In this paper, we present a method for building compact INRs of zero-genus 3D manifolds. Our method relies on creating a spherical parameterization of a given 3D mesh - mapping the surface of a mesh to that of a unit sphere - then constructing an INR that encodes the displacement vector field defined continuously on its surface that regenerates the original shape. The compactness of our representation can be attributed to its hierarchical structure, wherein it first recovers the coarse structure of the encoded surface before adding high-frequency details to it. Once the INR is computed, 3D meshes of arbitrary resolution/connectivity can be decoded from it. The decoding can be performed in real time while achieving a state-of-the-art trade-off between reconstruction quality and the size of the compressed representations.
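The abstract's decoding pipeline — sample the unit sphere, then displace each sample along a continuous vector field that is recovered coarse-structure-first — can be illustrated with a small sketch. The displacement field below is a hypothetical random band-limited stand-in for the trained INR (the paper's actual network and its hierarchy are not reproduced here); the sphere sampling, the per-level frequency doubling, and the decaying amplitudes are illustrative assumptions only:

```python
import numpy as np

def uv_sphere(n_theta=16, n_phi=32):
    """Sample points on the unit sphere on a regular lat-long grid.
    The paper decodes meshes of arbitrary resolution by sampling the
    sphere; this grid is one simple choice of sampling."""
    theta = np.linspace(0, np.pi, n_theta)
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    t, p = np.meshgrid(theta, phi, indexing="ij")
    pts = np.stack([np.sin(t) * np.cos(p),
                    np.sin(t) * np.sin(p),
                    np.cos(t)], axis=-1)
    return pts.reshape(-1, 3)

def hierarchical_displacement(pts, levels=3, seed=0):
    """Hypothetical stand-in for the trained INR: a coarse-to-fine sum
    of smooth displacement fields.  Level k contributes frequency 2**k
    detail with geometrically decaying amplitude, mimicking the paper's
    recover-coarse-structure-then-add-detail decoding order."""
    rng = np.random.default_rng(seed)
    disp = np.zeros_like(pts)
    for k in range(levels):
        freq, amp = 2.0 ** k, 0.5 ** k * 0.1
        w = rng.standard_normal((3, 3))        # random linear "features"
        disp += amp * np.sin(freq * pts @ w)   # smooth field on the sphere
    return disp

sphere = uv_sphere()
verts = sphere + hierarchical_displacement(sphere)  # decoded surface vertices
print(verts.shape)  # → (512, 3)
```

Because the field is defined continuously on the sphere, re-running with a denser `uv_sphere` grid decodes the same surface at higher resolution, which is the property the abstract highlights (arbitrary resolution/connectivity at decode time).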
Related papers
- A 3D mesh convolution-based autoencoder for geometry compression
We introduce a novel 3D mesh convolution-based autoencoder for geometry compression, able to handle irregular mesh data without requiring preprocessing or manifold/watertightness conditions. The proposed approach extracts meaningful latent representations by learning features directly from the mesh faces, while preserving connectivity through dedicated pooling and unpooling operations.
arXiv Detail & Related papers (2026-03-02T17:42:58Z)
- Mesh Compression with Quantized Neural Displacement Fields
Implicit neural representations (INRs) have been successfully used to compress a variety of 3D surface representations. This work presents a simple yet effective method that extends the usage of INRs to compress 3D triangle meshes. We show that our method is capable of preserving intricate geometric textures and demonstrates state-of-the-art performance for compression ratios ranging from 4x to 380x.
arXiv Detail & Related papers (2025-03-28T13:35:32Z)
- Optimizing 3D Geometry Reconstruction from Implicit Neural Representations
Implicit neural representations have emerged as a powerful tool in learning 3D geometry.
We present a novel approach that both reduces computational expenses and enhances the capture of fine details.
arXiv Detail & Related papers (2024-10-16T16:36:23Z)
- N-BVH: Neural ray queries with bounding volume hierarchies
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- Reconstructing Topology-Consistent Face Mesh by Volume Rendering from Multi-View Images
Industrial 3D face asset creation typically reconstructs topology-consistent face meshes from multi-view images for downstream production. NeRF has shown great advantages in 3D reconstruction by representing scenes as density and radiance fields. We introduce a novel method which combines an explicit mesh with neural volume rendering to optimize the geometry of an artist-made template face mesh from multi-view images.
arXiv Detail & Related papers (2024-04-08T15:25:50Z)
- Learning Neural Implicit Representations with Surface Signal Parameterizations
We present a neural network architecture that implicitly encodes the underlying surface parameterization suitable for appearance data.
Our model remains compatible with existing mesh-based digital content with appearance data.
arXiv Detail & Related papers (2022-11-01T15:10:58Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks
We present a novel 3D primitive representation that defines primitives using an Invertible Neural Network (INN).
Our model learns to parse 3D objects into semantically consistent part arrangements without any part-level supervision.
arXiv Detail & Related papers (2021-03-18T17:59:31Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Learning Deformable Tetrahedral Meshes for 3D Reconstruction
3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics.
Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations.
We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem.
arXiv Detail & Related papers (2020-11-03T02:57:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.