Implicit Feature Networks for Texture Completion from Partial 3D Data
- URL: http://arxiv.org/abs/2009.09458v1
- Date: Sun, 20 Sep 2020 15:48:17 GMT
- Title: Implicit Feature Networks for Texture Completion from Partial 3D Data
- Authors: Julian Chibane, Gerard Pons-Moll
- Abstract summary: We generalize IF-Nets to texture completion from partial textured scans of humans and arbitrary objects.
Our model in-paints the missing texture parts consistently with the completed geometry.
- Score: 56.93289686162015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prior work on inferring 3D texture uses either texture atlases, which require
uv-mappings and hence have discontinuities, or colored voxels, which are memory
inefficient and limited in resolution. Recent work predicts an RGB color at every
XYZ coordinate, forming a texture field, but focuses on completing texture given a
single 2D image. Instead, we focus on 3D texture and geometry completion from
partial and incomplete 3D scans. IF-Nets have recently achieved
state-of-the-art results on 3D geometry completion using a multi-scale deep
feature encoding, but their outputs lack texture. In this work, we generalize
IF-Nets to texture completion from partial textured scans of humans and
arbitrary objects. Our key insight is that 3D texture completion benefits from
incorporating local and global deep features extracted from both the partial 3D
texture and the completed geometry. Specifically, given the partial 3D texture and
the 3D geometry completed with IF-Nets, our model in-paints the
missing texture parts consistently with the completed geometry. Our model won
the SHARP ECCV'20 challenge, achieving the highest performance on all challenges.
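To make the texture-field idea concrete, the following is a minimal sketch of how such a model can be queried: a small multi-scale 3D CNN (standing in for the IF-Net encoder) extracts features from a voxelized partial texture plus completed geometry, the features are trilinearly sampled at each continuous XYZ query point, and an MLP decodes RGB. All class names, channel counts, and layer sizes are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of an IF-Net-style texture field (not the authors' code).
# A 3D CNN encodes partial colored voxels; multi-scale features are sampled at
# continuous XYZ query points and decoded to RGB by a shared MLP.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextureFieldSketch(nn.Module):
    def __init__(self, in_channels=4, feat=32):  # 3 RGB channels + 1 occupancy
        super().__init__()
        self.enc1 = nn.Conv3d(in_channels, feat, 3, padding=1)            # fine scale
        self.enc2 = nn.Conv3d(feat, feat * 2, 3, stride=2, padding=1)     # coarse scale
        self.mlp = nn.Sequential(
            nn.Linear(feat + feat * 2, 128), nn.ReLU(),
            nn.Linear(128, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, vox, xyz):
        # vox: (B, 4, D, H, W) partial colored voxels; xyz: (B, N, 3) in [-1, 1]
        f1 = F.relu(self.enc1(vox))
        f2 = F.relu(self.enc2(f1))
        grid = xyz.view(xyz.shape[0], -1, 1, 1, 3)  # (B, N, 1, 1, 3) sample grid
        s1 = F.grid_sample(f1, grid, align_corners=True).squeeze(-1).squeeze(-1)
        s2 = F.grid_sample(f2, grid, align_corners=True).squeeze(-1).squeeze(-1)
        feats = torch.cat([s1, s2], dim=1).transpose(1, 2)  # (B, N, C) per point
        return self.mlp(feats)  # (B, N, 3): predicted RGB at each XYZ query

model = TextureFieldSketch()
rgb = model(torch.rand(1, 4, 32, 32, 32), torch.rand(1, 1024, 3) * 2 - 1)
```

Because the decoder is queried at arbitrary continuous coordinates, the predicted texture is not tied to a voxel resolution or a uv-atlas, which is the advantage the abstract attributes to texture fields.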
Related papers
- DreamMesh: Jointly Manipulating and Texturing Triangle Meshes for Text-to-3D Generation [149.77077125310805]
We present DreamMesh, a novel text-to-3D architecture that pivots on well-defined surfaces (triangle meshes) to generate high-fidelity explicit 3D models.
In the coarse stage, the mesh is first deformed by text-guided Jacobians and then DreamMesh textures the mesh with an interlaced use of 2D diffusion models.
In the fine stage, DreamMesh jointly manipulates the mesh and refines the texture map, leading to high-quality triangle meshes with high-fidelity textured materials.
arXiv Detail & Related papers (2024-09-11T17:59:02Z) - TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D shapes using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, apply the denoiser on a set of 2D renders of the 3D object, and aggregate the different denoising predictions on a shared latent texture map (a minimal sketch of this per-view aggregation appears after this list).
arXiv Detail & Related papers (2023-10-20T19:15:29Z) - TSCom-Net: Coarse-to-Fine 3D Textured Shape Completion Network [14.389603490486364]
Reconstructing 3D human body shapes from 3D partial textured scans is a fundamental task for many computer vision and graphics applications.
We propose a new neural network architecture for 3D body shape and high-resolution texture completion.
arXiv Detail & Related papers (2022-08-18T11:06:10Z) - AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
arXiv Detail & Related papers (2022-04-06T21:39:24Z) - Projective Urban Texturing [8.349665441428925]
We propose a method for automatic generation of textures for 3D city meshes in immersive urban environments.
Projective Urban Texturing (PUT) re-targets textural style from real-world panoramic images to unseen urban meshes.
PUT relies on contrastive and adversarial training of a neural architecture designed for unpaired image-to-texture translation.
arXiv Detail & Related papers (2022-01-25T14:56:52Z) - Deep Hybrid Self-Prior for Full 3D Mesh Generation [57.78562932397173]
We propose to exploit a novel hybrid 2D-3D self-prior in deep neural networks to significantly improve the geometry quality.
In particular, we first generate an initial mesh using a 3D convolutional neural network with 3D self-prior, and then encode both 3D information and color information in the 2D UV atlas.
Our method recovers the 3D textured mesh model of high quality from sparse input, and outperforms the state-of-the-art methods in terms of both the geometry and texture quality.
arXiv Detail & Related papers (2021-08-18T07:44:21Z) - 3DBooSTeR: 3D Body Shape and Texture Recovery [76.91542440942189]
3DBooSTeR is a novel method to recover a textured 3D body mesh from a partial 3D scan.
The proposed approach decouples the shape and texture completion into two sequential tasks.
arXiv Detail & Related papers (2020-10-23T21:07:59Z)
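As referenced in the TexFusion entry above, the following is a minimal sketch of per-view denoising aggregated into a shared latent texture map. The render, backproject, and denoiser callables are assumed helpers, and the whole function is an illustration of the idea under those assumptions, not the paper's implementation.

```python
# Hypothetical sketch of TexFusion-style aggregation (not the paper's code):
# per view, denoise a 2D render of the latent texture, then back-project and
# average the predictions into a shared latent texture map.
import torch

def aggregate_denoising_step(tex, views, render, backproject, denoiser, t):
    # tex: (C, H, W) shared latent texture; views: list of camera parameters.
    acc = torch.zeros_like(tex)
    weight = torch.zeros_like(tex[:1])
    for cam in views:
        img, vis = render(tex, cam)                       # 2D render + visibility mask
        pred = denoiser(img.unsqueeze(0), t).squeeze(0)   # one denoising step in 2D
        tex_pred, mask = backproject(pred, vis, cam)      # map back to texture space
        acc += tex_pred * mask
        weight += mask
    return acc / weight.clamp(min=1e-6)                   # per-texel average over views
```

Averaging the back-projected predictions per texel is what keeps the texture consistent across views, since every camera's denoising step is reconciled on the same shared map before the next step.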