Vehicle Reconstruction and Texture Estimation Using Deep Implicit
Semantic Template Mapping
- URL: http://arxiv.org/abs/2011.14642v2
- Date: Mon, 29 Mar 2021 05:32:08 GMT
- Title: Vehicle Reconstruction and Texture Estimation Using Deep Implicit
Semantic Template Mapping
- Authors: Xiaochen Zhao, Zerong Zheng, Chaonan Ji, Zhenyi Liu, Siyou Lin, Tao
Yu, Jinli Suo, Yebin Liu
- Abstract summary: We introduce VERTEX, an effective solution to recover 3D shape and intrinsic texture of vehicles from uncalibrated monocular input.
By fusing global and local features together, our approach can generate consistent and detailed texture in both visible and invisible areas.
- Score: 32.580904361799966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce VERTEX, an effective solution to recover 3D shape and intrinsic
texture of vehicles from uncalibrated monocular input in real-world street
environments. To fully utilize the template prior of vehicles, we propose a
novel geometry and texture joint representation, based on implicit semantic
template mapping. Compared to existing representations, which infer texture
distribution in 3D, our method explicitly constrains the texture distribution
to the 2D surface of the template and avoids the limitations of fixed
resolution and topology. Moreover, by fusing global and local features
together, our approach can generate consistent and detailed texture in both
visible and invisible areas. We also contribute a new synthetic dataset
containing 830 elaborately textured car models labeled with sparse keypoints
and rendered using the Physically Based Rendering (PBRT) system with measured
HDRI skymaps to obtain highly realistic images. Experiments demonstrate the
superior performance of our approach on both the test dataset and
in-the-wild images. Furthermore, the
presented technique enables additional applications such as 3D vehicle texture
transfer and material identification.
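As a rough illustration of this joint representation (the names, shapes, and MLP design below are assumptions for exposition, not the authors' released code), an implicit mapping network can send a 3D query point, conditioned on image features, to a 2D location on the template surface, where texture is stored as an ordinary 2D map:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemplateMapper(nn.Module):
    """Illustrative MLP: maps 3D query points, conditioned on image
    features, to (u, v) coordinates on the template's 2D surface."""
    def __init__(self, feat_dim=256, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, points, feats):
        # points: (B, N, 3), feats: (B, N, feat_dim) -> uv in [0, 1]^2
        return torch.sigmoid(self.mlp(torch.cat([points, feats], dim=-1)))

def sample_texture(texture, uv):
    """Bilinearly sample an inferred 2D texture map at predicted (u, v).
    texture: (B, 3, H, W); uv: (B, N, 2) in [0, 1] -> rgb: (B, N, 3)."""
    grid = uv.unsqueeze(1) * 2.0 - 1.0                      # (B, 1, N, 2)
    rgb = F.grid_sample(texture, grid, align_corners=True)  # (B, 3, 1, N)
    return rgb.squeeze(2).transpose(1, 2)
```

Because texture lives on the template's fixed 2D surface in this sketch, texture resolution is decoupled from the reconstructed geometry's resolution and topology, which matches the claim above.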
Related papers
- DreamPolish: Domain Score Distillation With Progressive Geometry Generation [66.94803919328815]
We introduce DreamPolish, a text-to-3D generation model that excels in producing refined geometry and high-quality textures.
In the geometry construction phase, our approach leverages multiple neural representations to enhance the stability of the synthesis process.
In the texture generation phase, we introduce a novel score distillation objective, namely domain score distillation (DSD), to guide neural representations toward a domain of photorealistic, high-quality textures.
arXiv Detail & Related papers (2024-11-03T15:15:01Z)
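For context on the score-distillation family that DSD extends, here is a minimal sketch of a generic SDS-style objective; the exact DSD formulation differs, and `diffusion_eps`, the timestep handling, and the weighting are placeholder assumptions:

```python
import torch

def score_distillation_loss(render, diffusion_eps, t, alphas_cumprod):
    """Generic SDS-style objective (not the exact DSD formulation):
    noise the rendered image, ask a frozen diffusion model to predict
    the noise, and push the render along the model's score direction.
    render: (B, 3, H, W) differentiable render of the 3D representation.
    diffusion_eps: callable (noisy_image, t) -> predicted noise.
    t: (B,) integer timesteps; alphas_cumprod: (T,) noise schedule."""
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(render)
    noisy = a_t.sqrt() * render + (1 - a_t).sqrt() * noise
    with torch.no_grad():
        eps_pred = diffusion_eps(noisy, t)
    # (eps_pred - noise) acts as a per-pixel gradient direction on the
    # render, as in standard SDS; gradients flow only through `render`.
    grad = (eps_pred - noise).detach()
    return (grad * render).sum()
```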
- ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis [49.28239918969784]
We introduce a texture-consistent back view synthesis module that can transfer the reference image content to the back view.
We also propose a visibility-aware patch consistency regularization for texture mapping and refinement combined with the synthesized back view texture.
arXiv Detail & Related papers (2023-11-28T13:55:53Z)
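A visibility-aware patch consistency term could, as a hedged sketch, compare random patches of the predicted and reference textures only where texels were actually observed; the masking rule and thresholds below are assumptions, not the paper's formulation:

```python
import torch
import torch.nn.functional as F

def patch_consistency_loss(tex_pred, tex_ref, visibility, patch=16, n=64):
    """Hypothetical visibility-aware patch consistency regularizer.
    tex_pred, tex_ref: (1, 3, H, W) texture maps; visibility: (1, 1, H, W)
    soft mask of texels observed in the reference view."""
    _, _, H, W = tex_pred.shape
    loss, used = tex_pred.new_zeros(()), 0
    for _ in range(n):
        y = torch.randint(0, H - patch, (1,)).item()
        x = torch.randint(0, W - patch, (1,)).item()
        vis = visibility[..., y:y+patch, x:x+patch]
        if vis.mean() < 0.5:  # skip mostly-unobserved patches
            continue
        loss = loss + F.l1_loss(tex_pred[..., y:y+patch, x:x+patch] * vis,
                                tex_ref[..., y:y+patch, x:x+patch] * vis)
        used += 1
    return loss / max(used, 1)
```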
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
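One plausible reading of such a trimap partition, sketched under assumed thresholds and a cosine-based quality score (not necessarily the paper's exact rule), classifies each rendered pixel as keep, refine, or generate:

```python
import torch

def trimap_partition(cos_view, best_cos_cache, seen_mask,
                     refine_margin=0.2):
    """Assign each rendered pixel a state: 0=keep, 1=refine, 2=generate.
    cos_view: (H, W) cosine between surface normal and view direction for
    the current viewpoint; best_cos_cache: (H, W) best value seen so far;
    seen_mask: (H, W) bool, texels already painted. Illustrative only."""
    generate = ~seen_mask                                  # never painted
    refine = seen_mask & (cos_view > best_cos_cache + refine_margin)
    trimap = torch.zeros_like(cos_view, dtype=torch.long)  # default: keep
    trimap[refine] = 1
    trimap[generate] = 2
    return trimap
```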
- Texturify: Generating Textures on 3D Shape Surfaces [34.726179801982646]
We propose Texturify, which learns to generate texture directly on the surface of a given 3D input shape.
Our method does not require any 3D color supervision to learn to texture 3D objects.
arXiv Detail & Related papers (2022-04-05T18:00:04Z)
- Large-Scale 3D Semantic Reconstruction for Automated Driving Vehicles with Adaptive Truncated Signed Distance Function [9.414880946870916]
We propose a novel 3D reconstruction and semantic mapping system using LiDAR and camera sensors.
An adaptive truncated signed distance function is introduced to describe surfaces implicitly, which can deal with varying LiDAR point sparsity.
An optimal image patch selection strategy is proposed to estimate the optimal semantic class for each triangle mesh.
arXiv Detail & Related papers (2022-02-28T15:11:25Z)
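A minimal sketch of TSDF fusion with an adaptive truncation band follows; here the band widens with local LiDAR sparsity, and the sparsity estimate and weighting are assumptions, not the paper's exact scheme:

```python
import numpy as np

def update_tsdf(tsdf, weight, sdf_obs, sparsity, base_trunc=0.1, k=0.5):
    """Fuse one frame of signed-distance observations into a voxel grid.
    tsdf, weight: (N,) running TSDF values and fusion weights per voxel.
    sdf_obs: (N,) signed distance of each voxel to the observed surface.
    sparsity: (N,) local LiDAR point sparsity in [0, 1]; sparser regions
    get a wider truncation band so surfaces still register there."""
    trunc = base_trunc * (1.0 + k * sparsity)    # adaptive truncation band
    valid = sdf_obs > -trunc                     # drop far-behind-surface
    d = np.clip(sdf_obs, -trunc, trunc) / trunc  # normalized TSDF sample
    w_new = np.where(valid, 1.0, 0.0)
    fused = (tsdf * weight + d * w_new) / np.maximum(weight + w_new, 1e-6)
    tsdf = np.where(valid, fused, tsdf)          # weighted running average
    weight = weight + w_new
    return tsdf, weight
```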
- Semi-supervised Synthesis of High-Resolution Editable Textures for 3D Humans [14.098628848491147]
We introduce a novel approach to generate diverse high fidelity texture maps for 3D human meshes in a semi-supervised setup.
Given a segmentation mask defining the layout of the semantic regions in the texture map, our network generates high-resolution textures in a variety of styles, which are then used for rendering.
arXiv Detail & Related papers (2021-03-31T17:58:34Z)
- Real Time Incremental Foveal Texture Mapping for Autonomous Vehicles [11.702817783491616]
The generated detailed map serves both as a virtual test bed and as a background map for various vision and planning algorithms.
arXiv Detail & Related papers (2021-01-16T07:41:24Z)
- PerMO: Perceiving More at Once from a Single Image for Autonomous Driving [76.35684439949094]
We present a novel approach to detect, segment, and reconstruct complete textured 3D models of vehicles from a single image.
Our approach combines the strengths of deep learning and the elegance of traditional techniques.
We have integrated these algorithms with an autonomous driving system.
arXiv Detail & Related papers (2020-07-16T05:02:45Z)
- Towards Realistic 3D Embedding via View Alignment [53.89445873577063]
This paper presents an innovative View Alignment GAN (VA-GAN) that composes new images by embedding 3D models into 2D background images realistically and automatically.
VA-GAN consists of a texture generator and a differential discriminator that are inter-connected and end-to-end trainable.
arXiv Detail & Related papers (2020-07-14T14:45:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.