Robust Zero Level-Set Extraction from Unsigned Distance Fields Based on
Double Covering
- URL: http://arxiv.org/abs/2310.03431v3
- Date: Wed, 10 Jan 2024 11:06:07 GMT
- Title: Robust Zero Level-Set Extraction from Unsigned Distance Fields Based on
Double Covering
- Authors: Fei Hou, Xuhui Chen, Wencheng Wang, Hong Qin, Ying He
- Abstract summary: We propose a new method for extracting the zero level-set from unsigned distance fields (UDFs).
DoubleCoverUDF takes a learned UDF and a user-specified parameter $r$ as input.
We show that the computed iso-surface is the boundary of the $r$-offset volume of the target zero level-set $S$.
- Score: 28.268387694075415
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we propose a new method, called DoubleCoverUDF, for extracting
the zero level-set from unsigned distance fields (UDFs). DoubleCoverUDF takes a
learned UDF and a user-specified parameter $r$ (a small positive real number)
as input and extracts an iso-surface with an iso-value $r$ using the
conventional marching cubes algorithm. We show that the computed iso-surface is
the boundary of the $r$-offset volume of the target zero level-set $S$, which
is an orientable manifold, regardless of the topology of $S$. Next, the
algorithm computes a covering map to project the boundary mesh onto $S$,
preserving the mesh's topology and avoiding folding. If $S$ is an orientable
manifold surface, our algorithm separates the double-layered mesh into a single
layer using a robust minimum-cut post-processing step. Otherwise, it keeps the
double-layered mesh as the output. We validate our algorithm by reconstructing
3D surfaces of open models and demonstrate its efficiency and effectiveness on
synthetic models and benchmark datasets. Our experimental results confirm that
our method is robust and produces meshes with better quality in terms of both
visual evaluation and quantitative measures than existing UDF-based methods.
The source code is available at https://github.com/jjjkkyz/DCUDF.
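The pipeline above rests on the observation that the $r$-offset volume of $S$ has a closed, orientable boundary even when $S$ itself is an open surface. A minimal 2D sketch can illustrate this (this is not the authors' code; the open segment, grid resolution, and choice of $r$ are illustrative assumptions, with a 2D curve standing in for the 3D zero level-set):

```python
import math

def udf_segment(p, a, b):
    """Unsigned distance from point p to the segment ab (an *open* curve)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp projection onto the segment
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

# Sample the UDF on a grid over [-2, 2]^2; the r-offset volume is
# {p : udf(p) < r}, and its boundary is the r level-set.
r = 0.3
n = 64
a, b = (-1.0, 0.0), (1.0, 0.0)
grid = [[udf_segment((-2 + 4 * i / n, -2 + 4 * j / n), a, b) < r
         for i in range(n + 1)] for j in range(n + 1)]

# Count "boundary" cells: inside the offset volume but adjacent to outside.
# These cells trace a single closed curve around the open segment.
boundary = 0
for j in range(1, n):
    for i in range(1, n):
        if grid[j][i] and not all(grid[j + dj][i + di]
                                  for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))):
            boundary += 1
```

Points on opposite sides of the segment, e.g. $(0, 0.125)$ and $(0, -0.125)$, both lie in the offset volume and project to the same point of $S$; this is the two-sheeted covering that the method later separates with a minimum cut when $S$ is orientable.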
Related papers
- Sharper Model-free Reinforcement Learning for Average-reward Markov
Decision Processes [21.77276136591518]
We develop provably efficient model-free reinforcement learning (RL) algorithms for average-reward Markov Decision Processes (MDPs).
In the simulator setting, we propose a model-free RL algorithm that finds an $\epsilon$-optimal policy using $\widetilde{O}\left(\frac{SA\,\mathrm{sp}(h^*)}{\epsilon^2}+\frac{S^2A\,\mathrm{sp}(h^*)}{\epsilon^2}\right)$ samples.
arXiv Detail & Related papers (2023-06-28T17:43:19Z) - GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-guided
Distance Representation [73.77505964222632]
We present a learning-based method, namely GeoUDF, to tackle the problem of reconstructing a discrete surface from a sparse point cloud.
To be specific, we propose a geometry-guided learning method for UDF and its gradient estimation.
To extract triangle meshes from the predicted UDF, we propose a customized edge-based marching cube module.
arXiv Detail & Related papers (2022-11-30T06:02:01Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed-confidence setting under a generative model.
The lower bound, given as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Regret Lower Bound and Optimal Algorithm for High-Dimensional Contextual
Linear Bandit [10.604939762790517]
We prove a minimax lower bound, $\mathcal{O}\big((\log d)^{\frac{\alpha+1}{2}}T^{\frac{1-\alpha}{2}}+\log T\big)$, for the cumulative regret.
Second, we propose a simple and computationally efficient algorithm inspired by the general Upper Confidence Bound (UCB) strategy.
arXiv Detail & Related papers (2021-09-23T19:35:38Z) - Randomized Exploration for Reinforcement Learning with General Value
Function Approximation [122.70803181751135]
We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm.
Our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises.
We complement the theory with an empirical evaluation across known difficult exploration tasks.
arXiv Detail & Related papers (2021-06-15T02:23:07Z) - Deep Learning Meets Projective Clustering [66.726500395069]
A common approach for compressing NLP networks is to encode the embedding layer as a matrix $A\in\mathbb{R}^{n\times d}$.
Inspired by projective clustering from computational geometry, we suggest replacing this subspace by a set of $k$ subspaces.
arXiv Detail & Related papers (2020-10-08T22:47:48Z) - A One-bit, Comparison-Based Gradient Estimator [29.600975900977343]
We show how one can use tools from one-bit compressed sensing to construct a robust and reliable estimator of the normalized gradient.
We propose an algorithm, coined SCOBO, that uses this estimator within a gradient descent scheme.
arXiv Detail & Related papers (2020-10-06T05:01:38Z) - Provably Efficient Reinforcement Learning for Discounted MDPs with
Feature Mapping [99.59319332864129]
In this paper, we study reinforcement learning for discounted Markov Decision Processes (MDPs).
We propose a novel algorithm that makes use of the feature mapping and obtains a $\tilde{O}(d\sqrt{T}/(1-\gamma)^2)$ regret.
Our upper and lower bound results together suggest that the proposed reinforcement learning algorithm is near-optimal up to a $(1-\gamma)^{-0.5}$ factor.
arXiv Detail & Related papers (2020-06-23T17:08:54Z) - Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of $M$ such that $\det\left(\sum_{i \in S} v_i v_i^\top\right)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
arXiv Detail & Related papers (2020-04-16T19:16:38Z)
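The determinant objective in the last entry can be made concrete with a toy instance. A hedged sketch follows (the vectors and the 2-element uniform matroid are illustrative assumptions, and brute-force enumeration stands in for the paper's actual algorithms):

```python
from itertools import combinations

def outer_sum_det(vectors, S):
    """det(sum_{i in S} v_i v_i^T) for 2-D vectors, computed by hand (2x2 case)."""
    m = [[0.0, 0.0], [0.0, 0.0]]
    for i in S:
        x, y = vectors[i]
        m[0][0] += x * x
        m[0][1] += x * y
        m[1][0] += y * x
        m[1][1] += y * y
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Hypothetical data: choose 2 of 4 vectors (a uniform-matroid constraint).
vs = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.5, 0.5)]
best = max(combinations(range(len(vs)), 2),
           key=lambda S: outer_sum_det(vs, S))
```

Brute force picks the two most "spread out" vectors, here the orthonormal pair, since the determinant rewards volume; this is the intuition behind applications like experimental design, where one wants maximally informative measurements.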
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.