Flexible Isosurface Extraction for Gradient-Based Mesh Optimization
- URL: http://arxiv.org/abs/2308.05371v1
- Date: Thu, 10 Aug 2023 06:40:19 GMT
- Title: Flexible Isosurface Extraction for Gradient-Based Mesh Optimization
- Authors: Tianchang Shen, Jacob Munkberg, Jon Hasselgren, Kangxue Yin, Zian
Wang, Wenzheng Chen, Zan Gojcic, Sanja Fidler, Nicholas Sharp, Jun Gao
- Abstract summary: This work considers gradient-based mesh optimization, where we iteratively optimize for a 3D surface mesh by representing it as the isosurface of a scalar field.
We introduce FlexiCubes, an isosurface representation specifically designed for optimizing an unknown mesh with respect to geometric, visual, or even physical objectives.
- Score: 65.76362454554754
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This work considers gradient-based mesh optimization, where we iteratively
optimize for a 3D surface mesh by representing it as the isosurface of a scalar
field, an increasingly common paradigm in applications including
photogrammetry, generative modeling, and inverse physics. Existing
implementations adapt classic isosurface extraction algorithms like Marching
Cubes or Dual Contouring; these techniques were designed to extract meshes from
fixed, known fields, and in the optimization setting they lack the degrees of
freedom to represent high-quality feature-preserving meshes, or suffer from
numerical instabilities. We introduce FlexiCubes, an isosurface representation
specifically designed for optimizing an unknown mesh with respect to geometric,
visual, or even physical objectives. Our main insight is to introduce
additional carefully-chosen parameters into the representation, which allow
local flexible adjustments to the extracted mesh geometry and connectivity.
These parameters are updated along with the underlying scalar field via
automatic differentiation when optimizing for a downstream task. We base our
extraction scheme on Dual Marching Cubes for improved topological properties,
and present extensions to optionally generate tetrahedral and
hierarchically-adaptive meshes. Extensive experiments validate FlexiCubes on
both synthetic benchmarks and real-world applications, showing that it offers
significant improvements in mesh quality and geometric fidelity.
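The abstract's core mechanism — extracted vertices are differentiable functions of the underlying scalar field, so both can be updated by gradient descent for a downstream objective — can be illustrated with a deliberately minimal 1D sketch. This is not the FlexiCubes scheme itself (which operates on a Dual Marching Cubes grid with several extra parameters per cell); it is a one-edge toy with hand-derived gradients, and all function names below are illustrative:

```python
# Minimal 1D illustration of gradient-based isosurface optimization
# (NOT the actual FlexiCubes algorithm): the zero-crossing of a scalar
# field sampled at two grid points is a differentiable function of the
# samples, so the field can be fit to a target vertex position.

def extract_vertex(s0, s1, x0=0.0, x1=1.0):
    """Zero-crossing of the linearly interpolated field between grid
    points x0 and x1, assuming a sign change (s0 > 0 > s1)."""
    t = s0 / (s0 - s1)          # linear-interpolation parameter in [0, 1]
    return x0 + t * (x1 - x0)

def optimize_field(target, s0=1.0, s1=-1.0, lr=2.0, steps=300):
    """Gradient descent on the sample s1 so that the extracted vertex
    lands on `target`, minimizing the squared positional error."""
    for _ in range(steps):
        v = extract_vertex(s0, s1)
        # Analytic derivative: d v / d s1 = s0 / (s0 - s1)^2
        grad = 2.0 * (v - target) * s0 / (s0 - s1) ** 2
        s1 -= lr * grad
    return s0, s1

s0, s1 = optimize_field(target=0.4)
```

In the real method this chain rule is handled by automatic differentiation over an entire grid, and the extra per-cell parameters give the extraction the local flexibility the abstract describes; the toy above only shows why gradients flow from mesh vertices back into the field at all.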
Related papers
- NASM: Neural Anisotropic Surface Meshing [38.8654207201197]
This paper introduces a new learning-based method, NASM, for anisotropic surface meshing.
The key idea is to embed an input mesh into a high-dimensional Euclidean embedding space that preserves a curvature-based anisotropic metric.
Then, we propose a novel feature-sensitive remeshing on the generated high-d embedding to automatically capture sharp geometric features.
arXiv Detail & Related papers (2024-10-30T15:20:10Z)
- Flatten Anything: Unsupervised Neural Surface Parameterization [76.4422287292541]
We introduce the Flatten Anything Model (FAM), an unsupervised neural architecture to achieve global free-boundary surface parameterization.
Compared with previous methods, our FAM directly operates on discrete surface points without utilizing connectivity information.
Our FAM is fully-automated without the need for pre-cutting and can deal with highly-complex topologies.
arXiv Detail & Related papers (2024-05-23T14:39:52Z)
- PRS: Sharp Feature Priors for Resolution-Free Surface Remeshing [30.28380889862059]
We present a data-driven approach for automatic feature detection and remeshing.
Our algorithm improves over the state of the art by 26% in normals F-score and 42% in perceptual $\text{RMSE}_\text{v}$.
arXiv Detail & Related papers (2023-11-30T12:15:45Z)
- Automatic Parameterization for Aerodynamic Shape Optimization via Deep Geometric Learning [60.69217130006758]
We propose two deep learning models that fully automate shape parameterization for aerodynamic shape optimization.
Both models parameterize shapes via deep geometric learning, embedding human prior knowledge into learned geometric patterns.
We perform shape optimization experiments on 2D airfoils and discuss the applicable scenarios for the two models.
arXiv Detail & Related papers (2023-05-03T13:45:40Z)
- MeshDiffusion: Score-based Generative 3D Mesh Modeling [68.40770889259143]
We consider the task of generating realistic 3D shapes for automatic scene generation and physical simulation.
We take advantage of the graph structure of meshes and use a simple yet very effective generative modeling method to generate 3D meshes.
Specifically, we represent meshes with deformable tetrahedral grids, and then train a diffusion model on this direct parametrization.
arXiv Detail & Related papers (2023-03-14T17:59:01Z)
- Parametric Generative Schemes with Geometric Constraints for Encoding and Synthesizing Airfoils [25.546237636065182]
Two deep learning-based generative schemes are proposed to capture the complexity of the design space while satisfying specific constraints.
The soft-constrained scheme generates airfoils with slight deviations from the expected geometric constraints that nevertheless converge to the reference airfoil.
The hard-constrained scheme produces airfoils with a wider range of geometric diversity while strictly adhering to the geometric constraints.
arXiv Detail & Related papers (2022-05-05T05:58:08Z)
- Progressive Encoding for Neural Optimization [92.55503085245304]
We show the competence of the PPE layer for mesh transfer and its advantages compared to contemporary surface mapping techniques.
Most importantly, our technique is a parameterization-free method, and thus applicable to a variety of target shape representations.
arXiv Detail & Related papers (2021-04-19T08:22:55Z)
- Iso-Points: Optimizing Neural Implicit Surfaces with Hybrid Representations [21.64457003420851]
We develop a hybrid neural surface representation that allows us to impose geometry-aware sampling and regularization.
We demonstrate that our method can be adopted to improve techniques for reconstructing neural implicit surfaces from multi-view images or point clouds.
arXiv Detail & Related papers (2020-12-11T15:51:04Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.