MeshFeat: Multi-Resolution Features for Neural Fields on Meshes
- URL: http://arxiv.org/abs/2407.13592v1
- Date: Thu, 18 Jul 2024 15:29:48 GMT
- Title: MeshFeat: Multi-Resolution Features for Neural Fields on Meshes
- Authors: Mihir Mahajan, Florian Hofherr, Daniel Cremers
- Abstract summary: Parametric feature grid encodings have gained significant attention as an encoding approach for neural fields.
We propose MeshFeat, a parametric feature encoding tailored to meshes, for which we adapt the idea of multi-resolution feature grids from Euclidean space.
We show a significant speed-up compared to previous representations while maintaining comparable reconstruction quality for texture reconstruction and BRDF representation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parametric feature grid encodings have gained significant attention as an encoding approach for neural fields since they allow for much smaller MLPs, which significantly decreases the inference time of the models. In this work, we propose MeshFeat, a parametric feature encoding tailored to meshes, for which we adapt the idea of multi-resolution feature grids from Euclidean space. We start from the structure provided by the given vertex topology and use a mesh simplification algorithm to construct a multi-resolution feature representation directly on the mesh. The approach allows the usage of small MLPs for neural fields on meshes, and we show a significant speed-up compared to previous representations while maintaining comparable reconstruction quality for texture reconstruction and BRDF representation. Given its intrinsic coupling to the vertices, the method is particularly well-suited for representations on deforming meshes, making it a good fit for object animation.
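To make the encoding concrete, below is a minimal PyTorch sketch (not the authors' code) of the idea the abstract describes: per-vertex feature tables at several mesh resolutions, where coarser levels come from a mesh-simplification map from fine vertices to their coarse representatives; a query point on a triangle gathers features at each level by barycentric interpolation, and a small MLP decodes the concatenated features. All class names, feature dimensions, level counts, and the fine-to-coarse vertex maps are illustrative assumptions.

```python
# Minimal sketch of a MeshFeat-style multi-resolution vertex encoding.
# Assumptions (not from the paper): feature dimension, level counts, and
# that a mesh-simplification step provides a map from each fine vertex to
# its representative vertex at every coarser level.
import torch
import torch.nn as nn


class MultiResMeshEncoding(nn.Module):
    def __init__(self, verts_per_level, coarse_maps, feat_dim=4):
        # verts_per_level: vertex counts, finest first, e.g. [10000, 2500, 600]
        # coarse_maps: one LongTensor of shape (verts_per_level[0],) per coarser
        #              level, mapping fine vertex ids to coarse vertex ids
        super().__init__()
        self.tables = nn.ModuleList(
            [nn.Embedding(n, feat_dim) for n in verts_per_level]
        )
        for t in self.tables:  # start features near zero, as is common
            nn.init.uniform_(t.weight, -1e-4, 1e-4)
        maps = [torch.arange(verts_per_level[0])] + list(coarse_maps)
        for i, m in enumerate(maps):  # identity map for the finest level
            self.register_buffer(f"map_{i}", m)

    def forward(self, tri_vertex_ids, bary):
        # tri_vertex_ids: (B, 3) fine-mesh vertex ids of the queried triangle
        # bary:           (B, 3) barycentric coordinates of the query point
        feats = []
        for i, table in enumerate(self.tables):
            ids = getattr(self, f"map_{i}")[tri_vertex_ids]  # (B, 3)
            f = table(ids)                                   # (B, 3, feat_dim)
            # Interpolate the three corner features barycentrically.
            feats.append((bary.unsqueeze(-1) * f).sum(dim=1))
        return torch.cat(feats, dim=-1)  # (B, n_levels * feat_dim)


# A small MLP decodes the concatenated features, e.g. to an RGB texture value.
enc = MultiResMeshEncoding(
    [10000, 2500, 600],
    [torch.randint(0, 2500, (10000,)), torch.randint(0, 600, (10000,))],
)
mlp = nn.Sequential(nn.Linear(3 * 4, 32), nn.ReLU(), nn.Linear(32, 3))
bary = torch.softmax(torch.rand(8, 3), dim=-1)  # dummy barycentric coords
rgb = mlp(enc(torch.randint(0, 10000, (8, 3)), bary))  # (8, 3)
```

Because the features live on the vertices themselves, they deform with the mesh, which is why the abstract notes the method's suitability for deforming meshes and object animation.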
Related papers
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z)
- Lagrangian Hashing for Compressed Neural Field Representations
We present Lagrangian Hashing, a representation for neural fields combining the characteristics of fast-training NeRF methods.
Our main finding is that our representation allows the reconstruction of signals using a more compact representation without compromising quality.
arXiv Detail & Related papers (2024-09-09T05:25:15Z)
- Message-Passing Monte Carlo: Generating low-discrepancy point sets via Graph Neural Networks
We present the first machine learning approach to generate low-discrepancy point sets, named Message-Passing Monte Carlo (MPMC) points.
MPMC points are empirically shown to be either optimal or near-optimal with respect to discrepancy for low dimensions and small numbers of points.
arXiv Detail & Related papers (2024-05-23T21:17:20Z)
- Efficient Encoding of Graphics Primitives with Simplex-based Structures
We propose a simplex-based approach for encoding graphics primitives.
In the 2D image fitting task, the proposed method fits an image in 9.4% less time than the baseline method.
arXiv Detail & Related papers (2023-11-26T21:53:22Z)
- SHACIRA: Scalable HAsh-grid Compression for Implicit Neural Representations
Implicit Neural Representations (INR) or neural fields have emerged as a popular framework to encode multimedia signals.
We propose SHACIRA, a framework for compressing such feature grids with no additional post-hoc pruning/quantization stages.
Our approach outperforms existing INR approaches without the need for any large datasets or domain-specific heuristics.
arXiv Detail & Related papers (2023-09-27T17:59:48Z)
- Hybrid Mesh-neural Representation for 3D Transparent Object Reconstruction
We propose a novel method to reconstruct the 3D shapes of transparent objects using hand-held captured images under natural light conditions.
It combines the advantages of an explicit mesh and a multi-layer perceptron (MLP) network in a hybrid representation, simplifying the capture setup used in recent contributions.
arXiv Detail & Related papers (2022-03-23T17:58:56Z)
- Progressive Encoding for Neural Optimization
We show the competence of the progressive positional encoding (PPE) layer for mesh transfer and its advantages compared to contemporary surface mapping techniques.
Most importantly, our technique is a parameterization-free method, and thus applicable to a variety of target shape representations.
arXiv Detail & Related papers (2021-04-19T08:22:55Z)
- 3D Human Pose and Shape Regression with Pyramidal Mesh Alignment Feedback Loop
Regression-based methods have recently shown promising results in reconstructing human meshes from monocular images.
Minor deviations in parameters may lead to noticeable misalignment between the estimated meshes and the image evidence.
We propose a Pyramidal Mesh Alignment Feedback (PyMAF) loop to leverage a feature pyramid and rectify the predicted parameters.
arXiv Detail & Related papers (2021-03-30T17:07:49Z)
- Neural Subdivision
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.