VoroMesh: Learning Watertight Surface Meshes with Voronoi Diagrams
- URL: http://arxiv.org/abs/2308.14616v1
- Date: Mon, 28 Aug 2023 14:35:58 GMT
- Title: VoroMesh: Learning Watertight Surface Meshes with Voronoi Diagrams
- Authors: Nissim Maruani, Roman Klokov, Maks Ovsjanikov, Pierre Alliez, Mathieu
Desbrun
- Abstract summary: VoroMesh is a novel and differentiable Voronoi-based representation of watertight 3D shape surfaces.
To learn the position of the generators, we propose a novel loss function, dubbed VoroLoss.
A direct optimization of the VoroLoss to obtain generators on the Thingi32 dataset demonstrates the geometric efficiency of our representation.
- Score: 34.71121458068556
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In stark contrast to the case of images, finding a concise, learnable
discrete representation of 3D surfaces remains a challenge. In particular,
while polygon meshes are arguably the most common surface representation used
in geometry processing, their irregular and combinatorial structure often makes
them unsuitable for learning-based applications. In this work, we present
VoroMesh, a novel and differentiable Voronoi-based representation of watertight
3D shape surfaces. From a set of 3D points (called generators) and their
associated occupancy, we define our boundary representation through the Voronoi
diagram of the generators as the subset of Voronoi faces whose two associated
(equidistant) generators are of opposite occupancy: the resulting polygon mesh
forms a watertight approximation of the target shape's boundary. To learn the
position of the generators, we propose a novel loss function, dubbed VoroLoss,
that minimizes the distance from ground truth surface samples to the closest
faces of the Voronoi diagram, without requiring an explicit construction of
the entire Voronoi diagram. A direct optimization of the VoroLoss to obtain
generators on the Thingi32 dataset demonstrates the geometric efficiency of our
representation compared to axiomatic meshing algorithms and recent
learning-based mesh representations. We further use VoroMesh in a
learning-based mesh prediction task from input SDF grids on the ABC dataset,
and show comparable performance to state-of-the-art methods while guaranteeing
closed output surfaces free of self-intersections.
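
To make the construction above concrete, the following is a minimal Python sketch of the two ideas in the abstract: extracting the watertight boundary from explicit generators, and a VoroLoss-style objective. It is not the authors' implementation. The boundary extraction builds the full diagram with scipy.spatial.Voronoi purely for illustration, and the loss shown only approximates the distance to a Voronoi face by the distance to the bisector plane between nearby generator pairs; the paper's actual VoroLoss avoids building the diagram while handling face extents more carefully. The function names and the choice of k nearest generators are illustrative assumptions.

```python
import torch
from scipy.spatial import Voronoi


def voromesh_boundary(generators, occupancy):
    """Collect the Voronoi faces whose two equidistant generators
    have opposite occupancy; their union approximates the watertight
    boundary. Sketch only: unbounded ridges (touching the vertex at
    infinity, index -1) are skipped, which assumes the generators
    enclose the target shape."""
    vor = Voronoi(generators)  # explicit 3D Voronoi diagram
    faces = []
    for (i, j), verts in zip(vor.ridge_points, vor.ridge_vertices):
        if occupancy[i] != occupancy[j] and -1 not in verts:
            faces.append(vor.vertices[verts])  # one polygonal face
    return faces


def voroloss_sketch(samples, generators, k=8):
    """Assumed, simplified form of a VoroLoss-style objective.
    For each ground-truth surface sample, approximate the distance
    to the closest Voronoi face by the distance to the nearest
    bisector plane among its k closest generators. No diagram is
    built, and the result is differentiable in the generators."""
    d = torch.cdist(samples, generators)            # (n_samples, n_generators)
    knn = d.topk(k, dim=-1, largest=False).indices  # k nearest generators
    g = generators[knn]                             # (n_samples, k, 3)
    g1, rest = g[:, :1], g[:, 1:]                   # nearest one vs. the others
    mid = 0.5 * (g1 + rest)                         # a point on each bisector
    n = rest - g1                                   # bisector plane normals
    n = n / n.norm(dim=-1, keepdim=True).clamp_min(1e-9)
    plane_dist = ((samples[:, None] - mid) * n).sum(-1).abs()
    return plane_dist.min(dim=1).values.mean()
```

Gradient descent on voroloss_sketch pulls some bisector plane onto every surface sample; once the generators are optimized and their occupancies assigned, voromesh_boundary recovers a closed mesh from them.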
Related papers
- MILo: Mesh-In-the-Loop Gaussian Splatting for Detailed and Efficient Surface Reconstruction [28.452920446301608]
We present MILo, a novel framework that bridges the gap between volumetric and surface representations by differentiably extracting a mesh from the 3D Gaussians. Our approach can reconstruct complete scenes, including backgrounds, with state-of-the-art quality while requiring an order of magnitude fewer mesh vertices than previous methods.
arXiv Detail & Related papers (2025-06-30T17:48:54Z)
- LineGS: 3D Line Segment Representation on 3D Gaussian Splatting [0.0]
LineGS is a novel method that combines geometry-guided 3D line reconstruction with a 3D Gaussian splatting model.
The results show significant improvements in both geometric accuracy and model compactness compared to baseline methods.
arXiv Detail & Related papers (2024-11-30T13:29:36Z)
- MonoGSDF: Exploring Monocular Geometric Cues for Gaussian Splatting-Guided Implicit Surface Reconstruction [84.07233691641193]
We introduce MonoGSDF, a novel method that couples primitives with a neural Signed Distance Field (SDF) for high-quality reconstruction.
To handle arbitrary-scale scenes, we propose a scaling strategy for robust generalization.
Experiments on real-world datasets show that our method outperforms prior methods while maintaining efficiency.
arXiv Detail & Related papers (2024-11-25T20:07:07Z)
- DreamMesh4D: Video-to-4D Generation with Sparse-Controlled Gaussian-Mesh Hybrid Representation [10.250715657201363]
We introduce DreamMesh4D, a novel framework that combines a mesh representation with a geometric skinning technique to generate high-quality 4D objects from a monocular video.
Our method is compatible with modern graphic pipelines, showcasing its potential in the 3D gaming and film industry.
arXiv Detail & Related papers (2024-10-09T10:41:08Z)
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z)
- GeoGen: Geometry-Aware Generative Modeling via Signed Distance Functions [22.077366472693395]
We introduce a new generative approach for synthesizing 3D geometry and images from single-view collections.
Existing methods that employ volumetric rendering with neural radiance fields inherit a key limitation: the generated geometry is noisy and unconstrained.
We propose GeoGen, a new SDF-based 3D generative model trained in an end-to-end manner.
arXiv Detail & Related papers (2024-06-06T17:00:10Z)
- NeuralMeshing: Differentiable Meshing of Implicit Neural Representations [63.18340058854517]
We propose a novel differentiable meshing algorithm for extracting surface meshes from neural implicit representations.
Our method produces meshes with regular tessellation patterns and fewer triangle faces compared to existing methods.
arXiv Detail & Related papers (2022-10-05T16:52:25Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Learnable Triangulation for Deep Learning-based 3D Reconstruction of Objects of Arbitrary Topology from Single RGB Images [12.693545159861857]
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state-of-the-art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z)
- Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.
arXiv Detail & Related papers (2021-06-10T14:31:54Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We adapt a primal-dual framework from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.