3D Reconstruction of Sculptures from Single Images via Unsupervised
Domain Adaptation on Implicit Models
- URL: http://arxiv.org/abs/2210.04265v1
- Date: Sun, 9 Oct 2022 13:48:00 GMT
- Title: 3D Reconstruction of Sculptures from Single Images via Unsupervised
Domain Adaptation on Implicit Models
- Authors: Ziyi Chang, George Alex Koulieris, Hubert P. H. Shum
- Abstract summary: We propose an unsupervised 3D domain adaptation method for adapting a single-view 3D implicit reconstruction model from the source (real-world humans) to the target (sculptures) domain.
We have compared the generated shapes with other methods and conducted ablation studies as well as a user study to demonstrate the effectiveness of our adaptation method.
- Score: 11.647208461719906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Acquiring the virtual equivalent of exhibits, such as sculptures, in virtual reality (VR) museums can be labour-intensive and sometimes infeasible. Deep
learning based 3D reconstruction approaches allow us to recover 3D shapes from
2D observations, among which single-view-based approaches can reduce the need
for human intervention and specialised equipment in acquiring 3D sculptures for
VR museums. However, there exist two challenges when attempting to use the
well-researched human reconstruction methods: limited data availability and
domain shift. Considering that sculptures are usually related to humans, we propose an unsupervised 3D domain adaptation method for adapting a single-view 3D implicit reconstruction model from the source (real-world humans) to the target
(sculptures) domain. We have compared the generated shapes with other methods
and conducted ablation studies as well as a user study to demonstrate the
effectiveness of our adaptation method. We also deploy our results in a VR
application.
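To make the adaptation setting concrete, below is a minimal PyTorch sketch of one standard unsupervised domain-adaptation recipe (domain-adversarial feature alignment) applied to a single-view implicit reconstruction model. All module names, architectures, and loss weights here are illustrative assumptions, not the paper's actual implementation.
```python
import torch
import torch.nn as nn

class ImplicitReconstructor(nn.Module):
    """Toy single-view implicit model: image -> feature -> occupancy at 3D points."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(          # stand-in for a real image backbone
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.decoder = nn.Sequential(          # (feature, xyz) -> occupancy logit
            nn.Linear(feat_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, img, pts):               # img: (B,3,H,W), pts: (B,N,3)
        feat = self.encoder(img)                                      # (B, F)
        tiled = feat.unsqueeze(1).expand(-1, pts.size(1), -1)
        occ = self.decoder(torch.cat([tiled, pts], -1)).squeeze(-1)   # (B, N)
        return occ, feat

model = ImplicitReconstructor()
disc = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 1))
opt_model = torch.optim.Adam(model.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def adaptation_step(src_img, src_pts, src_occ, tgt_img):
    """One step: supervised loss on labelled humans + adversarial alignment to sculptures."""
    pred, src_feat = model(src_img, src_pts)
    loss_rec = bce(pred, src_occ)              # source domain has 3D supervision
    tgt_feat = model.encoder(tgt_img)          # target domain has images only
    # Train the discriminator to separate source from target features.
    real = torch.ones(src_img.size(0), 1)
    fake = torch.zeros(tgt_img.size(0), 1)
    loss_d = bce(disc(src_feat.detach()), real) + bce(disc(tgt_feat.detach()), fake)
    opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()
    # Train the encoder to fool the discriminator, aligning the two domains.
    loss_adv = bce(disc(tgt_feat), torch.ones(tgt_img.size(0), 1))
    opt_model.zero_grad()
    (loss_rec + 0.1 * loss_adv).backward()
    opt_model.step()
```
Only the labelled source domain supplies a reconstruction loss; the unlabelled sculpture images contribute purely through feature alignment, which is what makes the scheme unsupervised on the target side.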
Related papers
- COSMU: Complete 3D human shape from monocular unconstrained images [24.08612483445495]
We present a novel framework to reconstruct complete 3D human shapes from a given target image by leveraging monocular unconstrained images.
The objective of this work is to reproduce high-quality details in regions of the reconstructed human body that are not visible in the input target.
arXiv Detail & Related papers (2024-07-15T10:06:59Z)
- En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data [36.51674664590734]
We present En3D, an enhanced generative scheme for high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing realistic 3D humans.
arXiv Detail & Related papers (2024-01-02T12:06:31Z)
- AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures the shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z)
- Farm3D: Learning Articulated 3D Animals by Distilling 2D Diffusion [67.71624118802411]
We present Farm3D, a method for learning category-specific 3D reconstructors for articulated objects.
We propose a framework that uses an image generator, such as Stable Diffusion, to generate synthetic training data.
Our network can be used for analysis, including monocular reconstruction, or for synthesis, generating articulated assets for real-time applications such as video games.
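As a rough illustration of the synthetic-data idea, the snippet below generates training images with an off-the-shelf Stable Diffusion checkpoint via the Hugging Face diffusers library; the prompt, model ID, and file names are placeholder assumptions, and Farm3D's actual pipeline (including its distillation losses) is more involved.
```python
# pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

# Load an off-the-shelf text-to-image model (model ID is illustrative; GPU assumed).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generate category-specific synthetic training images.
prompt = "a photo of a single horse standing on grass, full body, side view"
for i in range(8):
    image = pipe(prompt).images[0]
    image.save(f"synthetic_horse_{i:03d}.png")
```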
arXiv Detail & Related papers (2023-04-20T17:59:34Z)
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
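For intuition, a disentangled latent implicit body model boils down to a decoder conditioned on two separate codes; here is a toy sketch (architecture and dimensions are assumptions, not the paper's):
```python
import torch
import torch.nn as nn

class DisentangledImplicitBody(nn.Module):
    """Toy SDF decoder with separate shape and pose latent codes."""
    def __init__(self, shape_dim=64, pose_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + shape_dim + pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, pts, z_shape, z_pose):
        # pts: (B, N, 3); each latent code is broadcast to every query point.
        z = torch.cat([z_shape, z_pose], -1)
        z = z.unsqueeze(1).expand(-1, pts.size(1), -1)
        return self.mlp(torch.cat([pts, z], -1)).squeeze(-1)  # signed distances
```
Because z_shape and z_pose are plain tensors, they can be optimized per subject directly against raw (even non-watertight) scans in an auto-decoding setup, which is what "fully differentiable and optimizable" enables.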
arXiv Detail & Related papers (2021-11-30T04:10:57Z)
- Towards unconstrained joint hand-object reconstruction from RGB videos [81.97694449736414]
Reconstructing hand-object manipulations holds great potential for robotics and learning from human demonstrations.
We first propose a learning-free fitting approach for hand-object reconstruction which can seamlessly handle two-hand object interactions.
arXiv Detail & Related papers (2021-08-16T12:26:34Z)
- 3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous Image Data [77.57798334776353]
We consider the problem of obtaining dense 3D reconstructions of humans from single and partially occluded views.
We suggest that ambiguities can be modelled more effectively by parametrizing the possible body shapes and poses.
We show that our method outperforms alternative approaches in ambiguous pose recovery on standard benchmarks for 3D humans.
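One common way to train such a multi-hypothesis predictor is a best-of-M objective, sketched below; this is a generic formulation under assumed tensor shapes, not necessarily the paper's exact loss.
```python
import torch

def best_of_m_loss(pred, gt):
    """pred: (B, M, D) candidate body parameter sets; gt: (B, D) ground truth.
    Only the closest hypothesis per sample is penalised, so the remaining
    modes stay free to cover other plausible shapes/poses under occlusion."""
    err = ((pred - gt.unsqueeze(1)) ** 2).mean(dim=-1)   # (B, M)
    return err.min(dim=1).values.mean()
```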
arXiv Detail & Related papers (2020-11-02T13:55:31Z)
- Multi-View Consistency Loss for Improved Single-Image 3D Reconstruction of Clothed People [36.30755368202957]
We present a novel method to improve the accuracy of the 3D reconstruction of clothed human shape from a single image.
The accuracy and completeness for reconstruction of clothed people is limited due to the large variation in shape resulting from clothing, hair, body size, pose and camera viewpoint.
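A minimal sketch of what a multi-view consistency term can look like: predictions from two views of the same subject, queried at the same world-space points, are pushed to agree. The function below is an illustrative simplification, not the paper's exact loss.
```python
import torch

def multiview_consistency_loss(occ_view_a, occ_view_b):
    """occ_view_*: (B, N) occupancy predictions from two images of the same
    person at identical world-space query points."""
    return torch.mean((occ_view_a - occ_view_b) ** 2)

# Hypothetical usage during training:
#   occ_a, _ = model(img_a, pts)    # view A
#   occ_b, _ = model(img_b, pts)    # view B of the same subject
#   loss = recon_loss + lam * multiview_consistency_loss(occ_a, occ_b)
```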
arXiv Detail & Related papers (2020-09-29T17:18:00Z)
- Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present a deep neural network methodology to reconstruct the 3D pose and shape of people from an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids both second-order differentiation during training and expensive gradient descent at test time.
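Conceptually, "learning to optimize" replaces hand-tuned test-time gradient descent with a network that predicts the next parameter estimate; a toy sketch of such an unrolled refiner follows (shapes and names are assumptions):
```python
import torch
import torch.nn as nn

class NeuralDescentStep(nn.Module):
    """Toy refiner: maps (current body params, image feature) to an update."""
    def __init__(self, n_params=85, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params + feat_dim, 512), nn.ReLU(),
            nn.Linear(512, n_params),
        )

    def forward(self, params, img_feat):
        return params + self.net(torch.cat([params, img_feat], -1))

# Unrolled inference: a few learned steps instead of gradient descent.
step = NeuralDescentStep()
params = torch.zeros(1, 85)          # initial body parameter estimate
img_feat = torch.randn(1, 256)       # image feature from a backbone (assumed)
for _ in range(5):
    params = step(params, img_feat)
```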
arXiv Detail & Related papers (2020-08-16T13:38:41Z)