Measuring the Discrepancy between 3D Geometric Models using Directional
Distance Fields
- URL: http://arxiv.org/abs/2401.09736v1
- Date: Thu, 18 Jan 2024 05:31:53 GMT
- Title: Measuring the Discrepancy between 3D Geometric Models using Directional
Distance Fields
- Authors: Siyu Ren, Junhui Hou, Xiaodong Chen, Hongkai Xiong, and Wenping Wang
- Abstract summary: We propose DirDist, an efficient, effective, robust, and differentiable distance metric for 3D geometry data.
As a generic distance metric, DirDist has the potential to advance the field of 3D geometric modeling.
- Score: 98.15456815880911
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantifying the discrepancy between 3D geometric models, which may be
represented as either point clouds or triangle meshes, is a pivotal issue
with broad applications. Existing methods mainly focus on directly establishing
the correspondence between two models and then aggregating point-wise distances
between corresponding points, making them either inefficient or
ineffective. In this paper, we propose DirDist, an efficient, effective,
robust, and differentiable distance metric for 3D geometry data. Specifically,
we construct DirDist based on the proposed implicit representation of 3D
models, namely directional distance field (DDF), which defines the directional
distances of 3D points to a model to capture its local surface geometry. We
then cast the discrepancy between two 3D geometric models as the
discrepancy between their DDFs defined on an identical domain, which naturally
establishes model correspondence. To demonstrate the advantages of our DirDist,
we explore various distance metric-driven 3D geometric modeling tasks,
including template surface fitting, rigid registration, non-rigid registration,
scene flow estimation, and human pose optimization. Extensive experiments show
that our DirDist achieves significantly higher accuracy on all tasks. As a
generic distance metric, DirDist has the potential to advance the field of 3D
geometric modeling. The source code is available at
\url{https://github.com/rsy6318/DirDist}.
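The core idea, comparing two models through scalar fields sampled on a shared query domain rather than through explicitly matched point pairs, can be illustrated with a deliberately simplified sketch. The snippet below approximates a directional distance for a point cloud by projecting the offset to the nearest cloud point onto a paired unit direction; the paper's actual DDF construction, sampling strategy, and loss differ, so the function names and this field definition are illustrative assumptions rather than the released implementation.

```python
import numpy as np

def directional_distance_field(queries, directions, cloud):
    """Simplified directional distance (illustrative, not the paper's DDF):
    for each query point, project the offset to its nearest cloud point
    onto the paired unit direction, giving a signed scalar per query."""
    # Brute-force nearest neighbor: pairwise offsets, shape (Q, N, 3).
    diff = queries[:, None, :] - cloud[None, :, :]
    idx = np.argmin((diff ** 2).sum(-1), axis=1)   # nearest cloud point per query
    offsets = cloud[idx] - queries                 # vector from query to surface proxy
    return (offsets * directions).sum(-1)          # signed projection along direction

def dirdist_like(cloud_a, cloud_b, queries, directions):
    """Compare two models via their fields on one shared (query, direction)
    domain -- no explicit point correspondence between the models is needed."""
    fa = directional_distance_field(queries, directions, cloud_a)
    fb = directional_distance_field(queries, directions, cloud_b)
    return np.mean((fa - fb) ** 2)
```

Because both fields are evaluated on the same fixed query domain, the discrepancy is differentiable with respect to the model points, which is what makes this style of metric usable inside optimization loops such as registration or template fitting.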
Related papers
- DDF-HO: Hand-Held Object Reconstruction via Conditional Directed
Distance Field [82.81337273685176]
DDF-HO is a novel approach leveraging Directed Distance Field (DDF) as the shape representation.
We randomly sample multiple rays and collect local to global geometric features for them by introducing a novel 2D ray-based feature aggregation scheme.
Experiments on synthetic and real-world datasets demonstrate that DDF-HO consistently outperforms all baseline methods by a large margin.
arXiv Detail & Related papers (2023-08-16T09:06:32Z)
- Unleash the Potential of 3D Point Cloud Modeling with A Calibrated Local Geometry-driven Distance Metric [62.365983810610985]
We propose a novel distance metric called Calibrated Local Geometry Distance (CLGD).
CLGD computes the difference between the underlying 3D surfaces, calibrated and induced by a set of reference points.
As a generic metric, CLGD has the potential to advance 3D point cloud modeling.
arXiv Detail & Related papers (2023-06-01T11:16:20Z)
- RAFaRe: Learning Robust and Accurate Non-parametric 3D Face Reconstruction from Pseudo 2D&3D Pairs [13.11105614044699]
We propose a robust and accurate non-parametric method for single-view 3D face reconstruction (SVFR).
A large-scale pseudo 2D&3D dataset is created by first rendering the detailed 3D faces, then swapping the face in the wild images with the rendered face.
Our model outperforms previous methods on FaceScape-wild/lab and MICC benchmarks.
arXiv Detail & Related papers (2023-02-10T19:40:26Z)
- CL3D: Unsupervised Domain Adaptation for Cross-LiDAR 3D Detection [16.021932740447966]
Domain adaptation for cross-LiDAR 3D detection is challenging due to the large gap in raw data representations.
We present an unsupervised domain adaptation method that overcomes the above difficulties.
arXiv Detail & Related papers (2022-12-01T03:22:55Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize the 3D voxelization and 3D convolution network.
We propose a new framework for the outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
- AutoShape: Real-Time Shape-Aware Monocular 3D Object Detection [15.244852122106634]
We propose an approach for incorporating the shape-aware 2D/3D constraints into the 3D detection framework.
Specifically, we employ the deep neural network to learn distinguished 2D keypoints in the 2D image domain.
For generating the ground truth of 2D/3D keypoints, an automatic model-fitting approach has been proposed.
arXiv Detail & Related papers (2021-08-25T08:50:06Z)
- PLUME: Efficient 3D Object Detection from Stereo Images [95.31278688164646]
Existing methods tackle the problem in two steps: first, depth estimation is performed and a pseudo-LiDAR point cloud representation is computed from the depth estimates; then object detection is performed in 3D space.
We propose a model that unifies these two tasks in the same metric space.
Our approach achieves state-of-the-art performance on the challenging KITTI benchmark, with significantly reduced inference time compared with existing methods.
arXiv Detail & Related papers (2021-01-17T05:11:38Z)
- Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation [87.54570024320354]
State-of-the-art methods for large-scale driving-scene LiDAR semantic segmentation often project and process the point clouds in the 2D space.
A straightforward solution to tackle the issue of 3D-to-2D projection is to keep the 3D representation and process the points in the 3D space.
We develop a 3D cylinder partition and a 3D cylinder convolution based framework, termed as Cylinder3D, which exploits the 3D topology relations and structures of driving-scene point clouds.
arXiv Detail & Related papers (2020-08-04T13:56:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.