PRS: Sharp Feature Priors for Resolution-Free Surface Remeshing
- URL: http://arxiv.org/abs/2311.18494v1
- Date: Thu, 30 Nov 2023 12:15:45 GMT
- Title: PRS: Sharp Feature Priors for Resolution-Free Surface Remeshing
- Authors: Natalia Soboleva, Olga Gorbunova, Maria Ivanova, Evgeny Burnaev,
Matthias Nießner, Denis Zorin and Alexey Artemov
- Abstract summary: We present a data-driven approach for automatic feature detection and remeshing.
Our algorithm improves over the state of the art by 26% in normals F-score and 42% in perceptual $\text{RMSE}_{\text{v}}$.
- Score: 30.28380889862059
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Surface reconstruction with preservation of geometric features is a
challenging computer vision task. Despite significant progress in implicit
shape reconstruction, state-of-the-art mesh extraction methods often produce
aliased, perceptually distorted surfaces and lack scalability to
high-resolution 3D shapes. We present a data-driven approach for automatic
feature detection and remeshing that requires only a coarse, aliased mesh as
input and scales to arbitrary resolution reconstructions. We define and learn a
collection of surface-based fields to (1) capture sharp geometric features in
the shape with an implicit vertexwise model and (2) approximate improvements in
normals alignment obtained by applying edge-flips with an edgewise model. To
support scaling to arbitrary complexity shapes, we learn our fields using local
triangulated patches, fusing estimates on complete surface meshes. Our feature
remeshing algorithm integrates the learned fields as sharp feature priors and
optimizes vertex placement and mesh connectivity for maximum expected surface
improvement. On a challenging collection of high-resolution shape
reconstructions in the ABC dataset, our algorithm improves over the state of
the art by 26% in normals F-score and 42% in perceptual
$\text{RMSE}_{\text{v}}$.
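The edgewise model described above predicts the improvement in normals alignment that an edge flip would yield. A minimal sketch of that signal, for a single two-triangle quad, is shown below; the function names and scoring (mean cosine alignment against a reference normal) are illustrative assumptions, not the paper's learned model, which predicts this quantity from local patches instead of computing it directly.

```python
import numpy as np

def face_normal(a, b, c):
    """Unit normal of triangle (a, b, c)."""
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

def flip_gain(a, b, c, d, ref_normal):
    """Score an edge flip inside the quad (a, b, c, d).

    The shared edge (a, c) splits the quad into triangles (a, b, c) and
    (a, c, d); flipping replaces it with edge (b, d), giving triangles
    (a, b, d) and (b, c, d). Returns the change in mean cosine alignment
    with a reference normal (positive means the flip helps).
    """
    before = 0.5 * (face_normal(a, b, c) @ ref_normal
                    + face_normal(a, c, d) @ ref_normal)
    after = 0.5 * (face_normal(a, b, d) @ ref_normal
                   + face_normal(b, c, d) @ ref_normal)
    return after - before
```

A remesher in this spirit would evaluate such a gain for every interior edge and greedily apply the flips with the largest expected improvement.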
Related papers
- Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis [70.40950409274312]
We modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures.
We also develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting.
The compact meshes produced by our model can be rendered in real-time on mobile devices.
arXiv Detail & Related papers (2024-02-19T18:59:41Z)
- Flexible Isosurface Extraction for Gradient-Based Mesh Optimization [65.76362454554754]
This work considers gradient-based mesh optimization, where we iteratively optimize for a 3D surface mesh by representing it as the isosurface of a scalar field.
We introduce FlexiCubes, an isosurface representation specifically designed for optimizing an unknown mesh with respect to geometric, visual, or even physical objectives.
arXiv Detail & Related papers (2023-08-10T06:40:19Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependency appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- HR-NeuS: Recovering High-Frequency Surface Geometry via Neural Implicit Surfaces [6.382138631957651]
We present High-Resolution NeuS, a novel neural implicit surface reconstruction method.
HR-NeuS recovers high-frequency surface geometry while maintaining large-scale reconstruction accuracy.
We demonstrate through experiments on DTU and BlendedMVS datasets that our approach produces 3D geometries that are qualitatively more detailed and quantitatively of similar accuracy compared to previous approaches.
arXiv Detail & Related papers (2023-02-14T02:25:16Z)
- Recovering Fine Details for Neural Implicit Surface Reconstruction [3.9702081347126943]
We present D-NeuS, a volume rendering neural implicit surface reconstruction method capable of recovering fine geometry details.
We impose multi-view feature consistency on the surface points, derived by interpolating SDF zero-crossings from sampled points along rays.
Our method reconstructs high-accuracy surfaces with details, and outperforms the state of the art.
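The surface points mentioned above come from interpolating SDF zero-crossings along camera rays. A minimal sketch of that step, assuming samples at known distances and a first positive-to-negative sign change (function and parameter names are invented for illustration):

```python
import numpy as np

def sdf_zero_crossing(ts, sdf_vals, origin, direction):
    """Locate the first surface point along a ray by linearly
    interpolating the zero-crossing of sampled SDF values.

    ts:       (N,) sample distances along the ray
    sdf_vals: (N,) SDF evaluated at origin + ts * direction
    Returns the 3D crossing point, or None if no sign change occurs.
    """
    sign_change = np.where((sdf_vals[:-1] > 0) & (sdf_vals[1:] <= 0))[0]
    if len(sign_change) == 0:
        return None
    i = sign_change[0]
    # Linear interpolation: find t* with sdf(t*) = 0 between samples i, i+1.
    t0, t1 = ts[i], ts[i + 1]
    s0, s1 = sdf_vals[i], sdf_vals[i + 1]
    t_star = t0 + (t1 - t0) * s0 / (s0 - s1)
    return origin + t_star * direction
```

For a ray hitting a unit sphere head-on, the interpolated point lands on the sphere's surface; feature consistency losses are then evaluated at such points.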
arXiv Detail & Related papers (2022-11-21T10:06:09Z)
- Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.
arXiv Detail & Related papers (2021-06-10T14:31:54Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
- Geo-PIFu: Geometry and Pixel Aligned Implicit Functions for Single-view Human Reconstruction [97.3274868990133]
Geo-PIFu is a method to recover a 3D mesh from a monocular color image of a clothed person.
We show that, by both encoding query points and constraining global shape using latent voxel features, the reconstruction we obtain for clothed human meshes exhibits less shape distortion and improved surface details compared to competing methods.
arXiv Detail & Related papers (2020-06-15T01:11:48Z)
- Deep Manifold Prior [37.725563645899584]
We present a prior for manifold structured data, such as surfaces of 3D shapes, where deep neural networks are adopted to reconstruct a target shape using gradient descent.
We show that surfaces generated this way are smooth, with limiting behavior characterized by Gaussian processes, and we mathematically derive such properties for fully-connected as well as convolutional networks.
arXiv Detail & Related papers (2020-04-08T20:47:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.