VoroMesh: Learning Watertight Surface Meshes with Voronoi Diagrams
- URL: http://arxiv.org/abs/2308.14616v1
- Date: Mon, 28 Aug 2023 14:35:58 GMT
- Title: VoroMesh: Learning Watertight Surface Meshes with Voronoi Diagrams
- Authors: Nissim Maruani, Roman Klokov, Maks Ovsjanikov, Pierre Alliez, Mathieu Desbrun
- Abstract summary: VoroMesh is a novel and differentiable Voronoi-based representation of watertight 3D shape surfaces.
To learn the position of the generators, we propose a novel loss function, dubbed VoroLoss.
A direct optimization of the VoroLoss to obtain generators on the Thingi32 dataset demonstrates the geometric efficiency of our representation.
- Score: 34.71121458068556
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In stark contrast to the case of images, finding a concise, learnable
discrete representation of 3D surfaces remains a challenge. In particular,
while polygon meshes are arguably the most common surface representation used
in geometry processing, their irregular and combinatorial structure often makes
them unsuitable for learning-based applications. In this work, we present
VoroMesh, a novel and differentiable Voronoi-based representation of watertight
3D shape surfaces. From a set of 3D points (called generators) and their
associated occupancy, we define our boundary representation through the Voronoi
diagram of the generators as the subset of Voronoi faces whose two associated
(equidistant) generators are of opposite occupancy: the resulting polygon mesh
forms a watertight approximation of the target shape's boundary. To learn the
position of the generators, we propose a novel loss function, dubbed VoroLoss,
that minimizes the distance from ground truth surface samples to the closest
faces of the Voronoi diagram, without requiring an explicit construction of
the entire diagram. A direct optimization of the VoroLoss to obtain
generators on the Thingi32 dataset demonstrates the geometric efficiency of our
representation compared to axiomatic meshing algorithms and recent
learning-based mesh representations. We further use VoroMesh in a
learning-based mesh prediction task from input SDF grids on the ABC dataset,
and show comparable performance to state-of-the-art methods while guaranteeing
closed output surfaces free of self-intersections.
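For concreteness, here is a minimal Python sketch of the two ideas the abstract describes, built on SciPy's scipy.spatial.Voronoi. The function names and the simplified, non-differentiable VoroLoss stand-in are illustrative assumptions, not the authors' implementation (which is differentiable and restricts the bisector distances to actual Voronoi faces):

```python
# Sketch (not the authors' code) of the VoroMesh boundary rule: keep the
# Voronoi faces whose two equidistant generators have opposite occupancy.
import numpy as np
from scipy.spatial import Voronoi

def extract_voromesh_faces(generators, occupancy):
    """generators: (N, 3) float array; occupancy: (N,) bool array.
    Returns the Voronoi vertices and the boundary faces, each face as a
    list of indices into the vertex array."""
    vor = Voronoi(generators)
    faces = []
    # Each ridge (Voronoi face) separates the cells of two generators.
    for (a, b), verts in zip(vor.ridge_points, vor.ridge_vertices):
        if occupancy[a] == occupancy[b]:
            continue                # same side: not part of the boundary
        if -1 in verts:
            continue                # unbounded face: skipped in this sketch
        faces.append(verts)         # polygonal face of the output mesh
    return vor.vertices, faces

def voroloss(samples, generators):
    """Simplified, non-differentiable stand-in for VoroLoss: the mean
    distance from each surface sample to the nearest bisector plane of
    its closest generator, computed without building the diagram.
    (The paper's loss restricts this to true Voronoi faces and is
    optimized with automatic differentiation.)"""
    # pairwise squared distances, shape (S, N)
    d2 = ((samples[:, None, :] - generators[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)          # closest generator per sample
    ga = generators[nearest]             # (S, 3)
    # distance from x to the bisector of (g_a, g_b):
    # (|x - g_b|^2 - |x - g_a|^2) / (2 |g_b - g_a|)
    num = d2 - d2[np.arange(len(samples)), nearest][:, None]
    denom = 2.0 * np.linalg.norm(generators[None, :, :] - ga[:, None, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        plane_dist = num / denom
    plane_dist[np.arange(len(samples)), nearest] = np.inf  # skip g_b == g_a
    return plane_dist.min(axis=1).mean()
```

Because every face kept by extract_voromesh_faces separates exactly one occupied generator from one unoccupied one, the union of these faces encloses the occupied Voronoi cells, which is why the output is watertight and free of self-intersections by construction.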
Related papers
- GeoGen: Geometry-Aware Generative Modeling via Signed Distance Functions [22.077366472693395]
We introduce a new generative approach for synthesizing 3D geometry and images from single-view collections.
Existing approaches that employ volumetric rendering with neural radiance fields inherit a key limitation: the generated geometry is noisy and unconstrained.
We propose GeoGen, a new SDF-based 3D generative model trained in an end-to-end manner.
arXiv Detail & Related papers (2024-06-06T17:00:10Z)
- Gaussian Opacity Fields: Efficient and Compact Surface Reconstruction in Unbounded Scenes [50.92217884840301]
Gaussian Opacity Fields (GOF) is a novel approach for efficient, high-quality, and compact surface reconstruction in unbounded scenes.
GOF is derived from ray-tracing-based volume rendering of 3D Gaussians.
GOF surpasses existing 3DGS-based methods in surface reconstruction and novel view synthesis.
arXiv Detail & Related papers (2024-04-16T17:57:19Z)
- NeuralMeshing: Differentiable Meshing of Implicit Neural Representations [63.18340058854517]
We propose a novel differentiable meshing algorithm for extracting surface meshes from neural implicit representations.
Our method produces meshes with regular tessellation patterns and fewer triangle faces compared to existing methods.
arXiv Detail & Related papers (2022-10-05T16:52:25Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Learnable Triangulation for Deep Learning-based 3D Reconstruction of Objects of Arbitrary Topology from Single RGB Images [12.693545159861857]
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state-of-the-art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z)
- Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.
arXiv Detail & Related papers (2021-06-10T14:31:54Z)
- DualConv: Dual Mesh Convolutional Networks for Shape Correspondence [44.94765770516059]
Convolutional neural networks have been extremely successful for 2D images and are readily extended to handle 3D voxel data.
In this paper we explore how these networks can be extended to the dual face-based representation of triangular meshes.
Our experiments demonstrate that additionally building convolutional models that explicitly leverage the regular neighborhood size of dual meshes enables learning shape representations that perform on par with or better than previous approaches.
arXiv Detail & Related papers (2021-03-23T11:22:47Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)