3DBooSTeR: 3D Body Shape and Texture Recovery
- URL: http://arxiv.org/abs/2010.12670v1
- Date: Fri, 23 Oct 2020 21:07:59 GMT
- Title: 3DBooSTeR: 3D Body Shape and Texture Recovery
- Authors: Alexandre Saint, Anis Kacem, Kseniya Cherenkova, Djamila Aouada
- Abstract summary: 3DBooSTeR is a novel method to recover a textured 3D body mesh from a partial 3D scan.
The proposed approach decouples the shape and texture completion into two sequential tasks.
- Score: 76.91542440942189
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose 3DBooSTeR, a novel method to recover a textured 3D body mesh from
a textured partial 3D scan. With the advent of virtual and augmented reality,
there is a demand for creating realistic and high-fidelity digital 3D human
representations. However, 3D scanning systems capture the 3D human body shape
only with some level of defects, owing to its complexity: occlusion between
body parts, varying levels of detail, shape deformations and the articulated
skeleton. Textured 3D mesh completion is thus important to
enhance 3D acquisitions. The proposed approach decouples the shape and texture
completion into two sequential tasks. The shape is recovered by an
encoder-decoder network deforming a template body mesh. The texture is
subsequently obtained by projecting the partial texture onto the template mesh
before inpainting the corresponding texture map with a novel approach. The
approach is validated on the 3DBodyTex.v2 dataset.
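To illustrate the two-stage pipeline described above, here is a minimal sketch, assuming a PyTorch environment and an SMPL-like template mesh. The module names, network sizes, nearest-neighbour colour projection and distance threshold are hypothetical stand-ins, not the authors' implementation, and the paper's novel texture-map inpainting step is only indicated, not reproduced.
```python
# Hypothetical sketch of a 3DBooSTeR-style two-stage pipeline:
# (1) an encoder-decoder predicts per-vertex offsets that deform a fixed
#     template body mesh toward the partial scan (shape completion);
# (2) the partial scan colours are projected onto the completed template,
#     leaving holes to be filled by a texture-map inpainting step (not shown).
# All names, dimensions and thresholds below are illustrative assumptions.
import torch
import torch.nn as nn


class TemplateDeformer(nn.Module):
    """Encoder-decoder mapping a sampled partial point cloud to per-vertex
    offsets of a template body mesh (assumed SMPL-like, 6890 vertices)."""

    def __init__(self, n_template_vertices: int, latent_dim: int = 256):
        super().__init__()
        # PointNet-style per-point encoder over the partial scan (x, y, z).
        self.encoder = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, latent_dim), nn.ReLU(),
        )
        # Decoder regresses a 3D offset for every template vertex.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_template_vertices * 3),
        )
        self.n_vertices = n_template_vertices

    def forward(self, partial_points: torch.Tensor,
                template_vertices: torch.Tensor) -> torch.Tensor:
        # partial_points: (B, N, 3); template_vertices: (V, 3)
        per_point = self.encoder(partial_points)        # (B, N, latent)
        global_code = per_point.max(dim=1).values       # (B, latent)
        offsets = self.decoder(global_code)             # (B, V*3)
        offsets = offsets.view(-1, self.n_vertices, 3)
        return template_vertices.unsqueeze(0) + offsets  # completed shape


def project_texture(completed_vertices, partial_points, partial_colors):
    """Assign each template vertex the colour of its nearest scanned point.
    Vertices far from any observed point are marked as holes (NaN) for a
    subsequent texture-map inpainting step (not shown here)."""
    dists = torch.cdist(completed_vertices, partial_points)  # (B, V, N)
    nearest = dists.argmin(dim=-1)                            # (B, V)
    colors = partial_colors.gather(
        1, nearest.unsqueeze(-1).expand(-1, -1, 3))           # (B, V, 3)
    holes = dists.min(dim=-1).values > 0.05                   # threshold is a guess
    colors[holes] = float("nan")
    return colors


if __name__ == "__main__":
    V, N, B = 6890, 2048, 1              # SMPL-like vertex count, sample size
    template = torch.randn(V, 3)         # stand-in for a template body mesh
    scan_pts = torch.randn(B, N, 3)      # partial scan geometry
    scan_rgb = torch.rand(B, N, 3)       # partial scan colour
    shape = TemplateDeformer(V)(scan_pts, template)
    vertex_colors = project_texture(shape, scan_pts, scan_rgb)
    print(shape.shape, vertex_colors.shape)
```
In this sketch the two stages communicate only through the completed shape, mirroring the decoupling of shape and texture completion; the actual method additionally inpaints the projected texture map with its own dedicated approach.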
Related papers
- 3DTextureTransformer: Geometry Aware Texture Generation for Arbitrary
Mesh Topology [1.4349415652822481]
Learning to generate textures for a novel 3D mesh given a collection of 3D meshes and real-world 2D images is an important problem with applications in various domains such as 3D simulation, augmented and virtual reality, gaming, architecture, and design.
Existing solutions either do not produce high-quality textures, or they deform the original high-resolution input mesh into a regular grid to simplify generation, losing the original mesh topology in the process.
We present a novel framework called the 3DTextureTransformer that enables us to generate high-quality textures without deforming the original, high-resolution input mesh.
arXiv Detail & Related papers (2024-03-07T05:01:07Z)
- En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data [36.51674664590734]
We present En3D, an enhanced generative scheme for sculpting high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing realistic 3D humans.
arXiv Detail & Related papers (2024-01-02T12:06:31Z)
- ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections [71.46546520120162]
Estimating 3D articulated shapes like animal bodies from monocular images is inherently challenging.
We propose ARTIC3D, a self-supervised framework to reconstruct per-instance 3D shapes from a sparse image collection in-the-wild.
We produce realistic animations by fine-tuning the rendered shape and texture under rigid part transformations.
arXiv Detail & Related papers (2023-06-07T17:47:50Z)
- Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion [115.82306502822412]
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing.
A corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing.
We study the challenging problem of 3D GAN inversion where a latent code is predicted given a single face image to faithfully recover its 3D shapes and detailed textures.
arXiv Detail & Related papers (2022-12-14T18:49:50Z)
- TSCom-Net: Coarse-to-Fine 3D Textured Shape Completion Network [14.389603490486364]
Reconstructing 3D human body shapes from 3D partial textured scans is a fundamental task for many computer vision and graphics applications.
We propose a new neural network architecture for 3D body shape and high-resolution texture completion.
arXiv Detail & Related papers (2022-08-18T11:06:10Z)
- Texturify: Generating Textures on 3D Shape Surfaces [34.726179801982646]
We propose Texturify, a method that learns to generate textures directly on the surface of a given 3D shape.
Our method does not require any 3D color supervision to learn to texture 3D objects.
arXiv Detail & Related papers (2022-04-05T18:00:04Z)
- 3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations [81.45521258652734]
We propose a method to create plausible geometric and texture style variations of 3D objects.
Our method can create many novel stylized shapes, resulting in effortless 3D content creation and style-aware data augmentation.
arXiv Detail & Related papers (2021-08-30T02:28:31Z)
- Deep Hybrid Self-Prior for Full 3D Mesh Generation [57.78562932397173]
We propose to exploit a novel hybrid 2D-3D self-prior in deep neural networks to significantly improve the geometry quality.
In particular, we first generate an initial mesh using a 3D convolutional neural network with 3D self-prior, and then encode both 3D information and color information in the 2D UV atlas.
Our method recovers the 3D textured mesh model of high quality from sparse input, and outperforms the state-of-the-art methods in terms of both the geometry and texture quality.
arXiv Detail & Related papers (2021-08-18T07:44:21Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)