D-IF: Uncertainty-aware Human Digitization via Implicit Distribution
Field
- URL: http://arxiv.org/abs/2308.08857v2
- Date: Tue, 17 Oct 2023 05:27:05 GMT
- Title: D-IF: Uncertainty-aware Human Digitization via Implicit Distribution
Field
- Authors: Xueting Yang, Yihao Luo, Yuliang Xiu, Wei Wang, Hao Xu, Zhaoxin Fan
- Abstract summary: We propose replacing the implicit value with an adaptive uncertainty distribution to differentiate between points based on their distance to the surface.
This simple "value to distribution" transition yields significant improvements on nearly all the baselines.
Results demonstrate that models trained with our uncertainty distribution loss capture more intricate wrinkles and more realistic limbs.
- Score: 16.301611237147863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Realistic virtual humans play a crucial role in numerous industries, such as
the metaverse, intelligent healthcare, and self-driving simulation. But creating
them on a large scale with high levels of realism remains a challenge. The
utilization of deep implicit function sparks a new era of image-based 3D
clothed human reconstruction, enabling pixel-aligned shape recovery with fine
details. Subsequently, the vast majority of works locate the surface by
regressing the deterministic implicit value for each point. However, should all
points be treated equally regardless of their proximity to the surface? In this
paper, we propose replacing the implicit value with an adaptive uncertainty
distribution, to differentiate between points based on their distance to the
surface. This simple "value to distribution" transition yields significant
improvements on nearly all the baselines. Furthermore, qualitative results
demonstrate that models trained with our uncertainty distribution loss
capture more intricate wrinkles and more realistic limbs. Code and models are
available for research purposes at https://github.com/psyai-net/D-IF_release.
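The "value to distribution" idea can be illustrated with a heteroscedastic Gaussian negative log-likelihood, in which a per-point predicted variance down-weights the occupancy error for low-confidence points. This is a minimal sketch under the assumption of a Gaussian form; the paper's actual distribution family and loss may differ, and the function name is illustrative.

```python
import numpy as np

def uncertainty_nll(pred_mean, pred_log_var, target_occ):
    """Heteroscedastic Gaussian negative log-likelihood (up to a constant).

    A larger predicted variance shrinks the squared-error term, letting the
    network express low confidence for points whose occupancy is ambiguous
    (e.g. far from the surface), while the log-variance term prevents it
    from simply inflating the variance everywhere.
    """
    var = np.exp(pred_log_var)
    return 0.5 * (pred_log_var + (target_occ - pred_mean) ** 2 / var)
```

With unit variance (`pred_log_var = 0`), a perfect prediction incurs zero loss and the loss reduces to half the squared error, so the deterministic regression objective is recovered as a special case.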
Related papers
- Source-Free and Image-Only Unsupervised Domain Adaptation for Category
Level Object Pose Estimation [18.011044932979143]
3DUDA is a method capable of adapting to a nuisance-ridden target domain without 3D or depth data.
We represent object categories as simple cuboid meshes, and harness a generative model of neural feature activations.
We show that our method simulates fine-tuning on a global pseudo-labeled dataset under mild assumptions.
arXiv Detail & Related papers (2024-01-19T17:48:05Z)
- 3D Human Mesh Estimation from Virtual Markers [34.703241940871635]
We present an intermediate representation, named virtual markers, which learns 64 landmark keypoints on the body surface.
Our approach outperforms the state-of-the-art methods on three datasets.
arXiv Detail & Related papers (2023-03-21T10:30:43Z)
- Finding Differences Between Transformers and ConvNets Using
Counterfactual Simulation Testing [82.67716657524251]
We present a counterfactual framework that allows us to study the robustness of neural networks with respect to naturalistic variations.
Our method allows for a fair comparison of the robustness of recently released, state-of-the-art Convolutional Neural Networks and Vision Transformers.
arXiv Detail & Related papers (2022-11-29T18:59:23Z)
- CPPF++: Uncertainty-Aware Sim2Real Object Pose Estimation by Vote
Aggregation [67.12857074801731]
We introduce a novel method, CPPF++, designed for sim-to-real pose estimation.
To address the challenge posed by vote collision, we propose a novel approach that involves modeling the voting uncertainty.
We incorporate several innovative modules, including noisy pair filtering, online alignment optimization, and a feature ensemble.
arXiv Detail & Related papers (2022-11-24T03:27:00Z)
- Shape, Pose, and Appearance from a Single Image via Bootstrapped
Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
- Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction [63.3021778885906]
3D bounding boxes are a widespread intermediate representation in many computer vision applications.
We propose methods for leveraging our autoregressive model to make high confidence predictions and meaningful uncertainty measures.
We release a simulated dataset, COB-3D, which highlights new types of ambiguity that arise in real-world robotics applications.
arXiv Detail & Related papers (2022-10-13T23:57:40Z)
- Self-supervised Human Mesh Recovery with Cross-Representation Alignment [20.69546341109787]
Self-supervised human mesh recovery methods have poor generalizability due to limited availability and diversity of 3D-annotated benchmark datasets.
We propose cross-representation alignment utilizing the complementary information from the robust but sparse representation (2D keypoints).
This adaptive cross-representation alignment explicitly learns from the deviations and captures complementary information: richness from sparse representation and robustness from dense representation.
arXiv Detail & Related papers (2022-09-10T04:47:20Z)
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural
Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
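The iterative root-finding step can be illustrated in one dimension: given a forward warp from canonical to deformed space, the canonical correspondence of a deformed point is the root of `forward_warp(x) - x_deformed`. The toy warp and its analytic derivative below are illustrative stand-ins, not SNARF's actual learned skinning function.

```python
import numpy as np

def forward_warp(x):
    # Toy stand-in for a forward skinning function (hypothetical):
    # maps a canonical coordinate to its deformed position.
    return x + 0.3 * np.sin(x)

def find_canonical(x_deformed, x0=0.0, tol=1e-10, max_iter=50):
    """Newton iteration solving forward_warp(x) = x_deformed for x."""
    x = x0
    for _ in range(max_iter):
        f = forward_warp(x) - x_deformed
        if abs(f) < tol:
            break
        df = 1.0 + 0.3 * np.cos(x)  # analytic derivative of the toy warp
        x -= f / df
    return x
```

Because the residual and its derivative are differentiable, such a root-finding layer can be made compatible with gradient-based training via implicit differentiation.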
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
- Deep Bingham Networks: Dealing with Uncertainty and Ambiguity in Pose
Estimation [74.76155168705975]
Deep Bingham Networks (DBN) can handle pose-related uncertainties and ambiguities arising in almost all real life applications concerning 3D data.
DBN extends the state of the art direct pose regression networks by (i) a multi-hypotheses prediction head which can yield different distribution modes.
We propose new training strategies so as to avoid mode or posterior collapse during training and to improve numerical stability.
arXiv Detail & Related papers (2020-12-20T19:20:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.