A 3D mesh convolution-based autoencoder for geometry compression
- URL: http://arxiv.org/abs/2603.02125v1
- Date: Mon, 02 Mar 2026 17:42:58 GMT
- Title: A 3D mesh convolution-based autoencoder for geometry compression
- Authors: Germain Bregeon, Marius Preda, Radu Ispas, Titus Zaharia
- Abstract summary: We introduce a novel 3D mesh convolution-based autoencoder for geometry compression, able to handle irregular mesh data without requiring preprocessing or manifold/watertightness conditions. The proposed approach extracts meaningful latent representations by learning features directly from the mesh faces, while preserving connectivity through dedicated pooling and unpooling operations.
- Score: 0.769971486557519
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce a novel 3D mesh convolution-based autoencoder for geometry compression, able to handle irregular mesh data without requiring preprocessing or manifold/watertightness conditions. The proposed approach extracts meaningful latent representations by learning features directly from the mesh faces, while preserving connectivity through dedicated pooling and unpooling operations. The encoder compresses the input mesh into a compact base mesh space, which ensures that the latent space remains comparable. The decoder reconstructs the original connectivity and restores the compressed geometry to its full resolution. Extensive experiments on multi-class datasets demonstrate that our method outperforms state-of-the-art approaches in both 3D mesh geometry reconstruction and latent space classification tasks. Code available at: github.com/germainGB/MeshConv3D
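The face-based feature learning described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual architecture (see the linked repository for that); here, hand-crafted per-face features (centroid and unnormalized normal) stand in for the learned mesh convolutions, and a random linear projection stands in for the trained encoder:

```python
import numpy as np

# Hedged sketch, not the paper's layers: extract a feature vector per
# triangle face, then project to a compact per-face latent code.

def face_features(vertices, faces):
    """vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices."""
    tri = vertices[faces]                  # (F, 3, 3) triangle corner coordinates
    centroid = tri.mean(axis=1)            # (F, 3) face centroids
    normal = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])  # (F, 3)
    return np.concatenate([centroid, normal], axis=1)  # (F, 6)

# Toy "encoder": a random linear map standing in for learned convolutions.
rng = np.random.default_rng(0)
vertices = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])  # tetrahedron
feats = face_features(vertices, faces)     # (4, 6)
W = rng.standard_normal((6, 2))
latent = feats @ W                         # (4, 2) per-face latent code
print(latent.shape)
```

Operating on faces rather than vertices is what lets such a pipeline accept irregular, non-manifold input: each triangle yields a feature row regardless of global mesh topology.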
Related papers
- Joint Semantic and Rendering Enhancements in 3D Gaussian Modeling with Anisotropic Local Encoding [86.55824709875598]
We propose a joint enhancement framework for 3D semantic Gaussian modeling that synergizes both semantic and rendering branches. Unlike conventional point cloud shape encoding, we introduce an anisotropic 3D Gaussian Chebyshev descriptor to capture fine-grained 3D shape details. We employ a cross-scene knowledge transfer module to continuously update learned shape patterns, enabling faster convergence and robust representations.
arXiv Detail & Related papers (2026-01-05T18:33:50Z)
- Hierarchical Neural Surfaces for 3D Mesh Compression [0.0]
Implicit Neural Representations (INRs) have been demonstrated to achieve state-of-the-art compression of a broad range of modalities such as images, videos, 3D surfaces, and audio. Most studies have focused on building neural counterparts of traditional implicit representations of 3D geometries, such as signed distance functions.
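The implicit-representation idea summarized above can be sketched in a few lines. The point of an INR is the interface: a function maps a 3D coordinate to a signed distance, so "compressing" a surface means storing the function's parameters instead of the geometry. The MLP below is hypothetical and untrained (fixed random weights), shown only to make that interface concrete next to an analytic ground-truth SDF:

```python
import numpy as np

def sdf_sphere(points, radius=1.0):
    # Analytic ground truth: signed distance to a sphere at the origin.
    return np.linalg.norm(points, axis=-1) - radius

def inr(points, W1, b1, W2, b2):
    # Tiny coordinate network: 3 -> 16 -> 1 with ReLU activation.
    h = np.maximum(points @ W1 + b1, 0.0)
    return h @ W2 + b2

rng = np.random.default_rng(42)
W1, b1 = rng.standard_normal((3, 16)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((16, 1)), rng.standard_normal(1)

pts = rng.standard_normal((8, 3))
print(sdf_sphere(pts).shape, inr(pts, W1, b1, W2, b2).shape)
```

In an actual INR pipeline the weights would be fit to the target surface; the compressed artifact is then just (W1, b1, W2, b2).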
arXiv Detail & Related papers (2025-12-17T21:32:04Z)
- DiMeR: Disentangled Mesh Reconstruction Model [29.827345186012558]
DiMeR is a novel geometry-texture disentangled feed-forward model with 3D supervision for sparse-view mesh reconstruction. We streamline the algorithm of mesh extraction by eliminating modules with low performance/cost ratios and redesigning regularization losses with 3D supervision. Extensive experiments demonstrate that DiMeR generalises across sparse-view-, single-image-, and text-to-3D tasks, consistently outperforming baselines.
arXiv Detail & Related papers (2025-04-24T15:39:20Z)
- Mesh Compression with Quantized Neural Displacement Fields [31.316999947745614]
Implicit neural representations (INRs) have been successfully used to compress a variety of 3D surface representations. This work presents a simple yet effective method that extends the usage of INRs to compress 3D triangle meshes. We show that our method is capable of preserving intricate geometric textures and demonstrates state-of-the-art performance for compression ratios ranging from 4x to 380x.
arXiv Detail & Related papers (2025-03-28T13:35:32Z)
- Ultron: Enabling Temporal Geometry Compression of 3D Mesh Sequences using Temporal Correspondence and Mesh Deformation [2.0914328542137346]
Existing 3D model compression methods primarily focus on static models and do not consider inter-frame information.
This paper proposes a method to compress mesh sequences with arbitrary topology using temporal correspondence and mesh deformation.
arXiv Detail & Related papers (2024-09-08T16:34:19Z)
- Pixel-Aligned Non-parametric Hand Mesh Reconstruction [16.62199923065314]
Non-parametric mesh reconstruction has recently shown significant progress in 3D hand and body applications.
In this paper, we seek to establish and exploit this mapping with a simple and compact architecture.
We propose an end-to-end pipeline for hand mesh recovery tasks which consists of three phases.
arXiv Detail & Related papers (2022-10-17T15:53:18Z)
- Monocular 3D Object Reconstruction with GAN Inversion [122.96094885939146]
MeshInversion is a novel framework to improve the reconstruction of textured 3D meshes.
It exploits the generative prior of a 3D GAN pre-trained for 3D textured mesh synthesis.
Our framework obtains faithful 3D reconstructions with consistent geometry and texture across both observed and unobserved parts.
arXiv Detail & Related papers (2022-07-20T17:47:22Z)
- Geometry-Contrastive Transformer for Generalized 3D Pose Transfer [95.56457218144983]
The intuition of this work is to perceive the geometric inconsistency between the given meshes with a powerful self-attention mechanism.
We propose a novel geometry-contrastive Transformer with an efficient, 3D-structured ability to perceive global geometric inconsistencies.
We present a latent isometric regularization module together with a novel semi-synthesized dataset for the cross-dataset 3D pose transfer task.
arXiv Detail & Related papers (2021-12-14T13:14:24Z)
- Mesh Convolution with Continuous Filters for 3D Surface Parsing [101.25796935464648]
We propose a series of modular operations for effective geometric feature learning from 3D triangle meshes.
Our mesh convolutions exploit spherical harmonics as orthonormal bases to create continuous convolutional filters.
We further contribute a novel hierarchical neural network for perceptual parsing of 3D surfaces, named PicassoNet++.
arXiv Detail & Related papers (2021-12-03T09:16:49Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
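The primal-dual idea in the last entry can be sketched as follows: treat each triangle face as a node of a dual graph, connect faces that share an edge, and aggregate neighbor features. Plain mean aggregation here stands in for the learned, dynamic aggregation that paper describes, and the one-hot features are purely illustrative:

```python
import numpy as np
from collections import defaultdict

def dual_adjacency(faces):
    """Map each face index to the set of faces sharing an edge with it."""
    edge_to_faces = defaultdict(list)
    for fi, (a, b, c) in enumerate(faces):
        for e in [(a, b), (b, c), (c, a)]:
            edge_to_faces[tuple(sorted(e))].append(fi)  # undirected edge key
    adj = defaultdict(set)
    for fs in edge_to_faces.values():
        for i in fs:
            for j in fs:
                if i != j:
                    adj[i].add(j)
    return adj

def aggregate(face_feats, adj):
    # One round of mean aggregation over dual-graph neighbors.
    out = face_feats.copy()
    for fi, nbrs in adj.items():
        out[fi] = face_feats[list(nbrs)].mean(axis=0)
    return out

faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]  # tetrahedron
feats = np.eye(4)  # one-hot per-face features for illustration
adj = dual_adjacency(faces)
agg = aggregate(feats, adj)
print(agg)
```

On the tetrahedron every face shares an edge with every other face, so each aggregated row becomes the mean of the other three one-hot vectors.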
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.