Projective Urban Texturing
- URL: http://arxiv.org/abs/2201.10938v1
- Date: Tue, 25 Jan 2022 14:56:52 GMT
- Title: Projective Urban Texturing
- Authors: Yiangos Georgiou and Melinos Averkiou and Tom Kelly and Evangelos
Kalogerakis
- Abstract summary: We propose a method for automatic generation of textures for 3D city meshes in immersive urban environments.
Projective Urban Texturing (PUT) re-targets textural style from real-world panoramic images to unseen urban meshes.
PUT relies on contrastive and adversarial training of a neural architecture designed for unpaired image-to-texture translation.
- Score: 8.349665441428925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a method for automatic generation of textures for 3D city
meshes in immersive urban environments. Many recent pipelines capture or
synthesize large quantities of city geometry using scanners or procedural
modeling pipelines. Such geometry is intricate and realistic; however, the
generation of photo-realistic textures for such large scenes remains a problem.
We propose to generate textures for input target 3D meshes driven by the
textural style present in readily available datasets of panoramic photos
capturing urban environments. Re-targeting such 2D datasets to 3D geometry is
challenging because the underlying shape, size, and layout of the urban
structures in the photos do not correspond to the ones in the target meshes.
Photos also often have objects (e.g., trees, vehicles) that may not even be
present in the target geometry. To address these issues, we present a method,
called Projective Urban Texturing (PUT), which re-targets textural style from
real-world panoramic images to unseen urban meshes. PUT relies on contrastive
and adversarial training of a neural architecture designed for unpaired
image-to-texture translation. The generated textures are stored in a texture
atlas applied to the target 3D mesh geometry. To promote texture consistency,
PUT employs an iterative procedure in which texture synthesis is conditioned on
previously generated, adjacent textures. We demonstrate both quantitative and
qualitative evaluation of the generated textures.
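The abstract's iterative, view-conditioned texturing procedure can be pictured with a short sketch. The following Python/PyTorch pseudocode is only an illustration of the described loop, not the authors' implementation; the helper functions (render_view, project_to_atlas) and the pretrained translator network are assumed components.

```python
# Hypothetical sketch of PUT-style iterative texturing (not the authors' code).
# Assumes: `translator` is a pretrained unpaired image-to-texture translation
# network (trained with contrastive and adversarial losses on panoramas),
# `render_view` renders the mesh with its current partial atlas from a
# viewpoint, and `project_to_atlas` writes a synthesized image back into the
# texture atlas. All of these names are illustrative assumptions.
import torch

def texture_city_mesh(mesh, viewpoints, translator, render_view,
                      project_to_atlas, atlas_resolution=4096):
    # Start from an empty texture atlas for the target mesh.
    atlas = torch.zeros(3, atlas_resolution, atlas_resolution)
    for view in viewpoints:
        # Condition on previously generated, adjacent textures by rendering
        # the mesh with whatever has already been written to the atlas.
        conditioning = render_view(mesh, atlas, view)
        # Translate the conditioning render into a panorama-style texture
        # carrying the textural style of the real-world photo dataset.
        with torch.no_grad():
            synthesized = translator(conditioning.unsqueeze(0)).squeeze(0)
        # Project the synthesized image onto the visible surface and store
        # the covered texels in the shared texture atlas.
        atlas = project_to_atlas(mesh, atlas, synthesized, view)
    return atlas
```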
Related papers
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach first generates texture colors at the point level for a given geometry using a 3D diffusion model; these colors are then transformed into a scene representation in a feed-forward manner.
Experiments in two city-scale datasets show that our model demonstrates proficiency in generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
arXiv Detail & Related papers (2024-01-19T16:15:37Z) - TextureDreamer: Image-guided Texture Synthesis through Geometry-aware
Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z) - GeoScaler: Geometry and Rendering-Aware Downsampling of 3D Mesh Textures [0.06990493129893112]
High-resolution texture maps are necessary for representing real-world objects accurately with 3D meshes.
GeoScaler is a method of downsampling texture maps of 3D meshes while incorporating geometric cues.
We show that the textures generated by GeoScaler yield rendered images of significantly better quality than those produced by traditional downsampling methods.
arXiv Detail & Related papers (2023-11-28T07:55:25Z) - Generating Texture for 3D Human Avatar from a Single Image using
Sampling and Refinement Networks [8.659903550327442]
We propose a texture synthesis method for a 3D human avatar that incorporates geometry information.
A sampler network fills in the occluded regions of the source image and aligns the texture with the surface of the target 3D mesh.
To maintain the clear details of the given image, the sampled and refined textures are blended to produce the final texture map.
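The blending step described above can be sketched as a simple mask-weighted combination; the visibility mask and tensor shapes below are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative blend of a sampled texture (sharp details where the source
# image is visible) and a refined texture (plausible content in occluded
# regions). The visibility mask and shapes are assumed for this sketch.
import torch

def blend_textures(sampled, refined, visibility_mask):
    # sampled, refined: (3, H, W) texture maps in UV space
    # visibility_mask: (1, H, W), 1 where the source image covers the texel
    return visibility_mask * sampled + (1.0 - visibility_mask) * refined
```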
arXiv Detail & Related papers (2023-05-01T16:44:02Z) - Mesh2Tex: Generating Mesh Textures from Image Queries [45.32242590651395]
We present Mesh2Tex, which learns a realistic object texture manifold from uncorrelated collections of 3D object geometry and photorealistic RGB images.
In particular, textures estimated from images of real objects can match the real image observations.
arXiv Detail & Related papers (2023-04-12T13:58:25Z) - TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z) - AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
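The benefit of an aligned UV space can be made concrete with a small sketch: once two objects share the same UV parameterization, transferring a texture reduces to sampling the source object's texture image at the target object's UV coordinates. The function below illustrates that idea and is not AUV-Net's actual API.

```python
# Texture transfer via a shared, aligned UV space (illustrative sketch).
import torch
import torch.nn.functional as F

def transfer_texture(target_uvs, source_texture):
    # target_uvs: (N, 2) per-vertex UVs of the target mesh, in [0, 1]
    # source_texture: (3, H, W) texture image of the source object
    grid = target_uvs.view(1, 1, -1, 2) * 2.0 - 1.0  # grid_sample expects [-1, 1]
    sampled = F.grid_sample(source_texture.unsqueeze(0), grid, align_corners=True)
    return sampled.view(3, -1).t()  # (N, 3) per-vertex colors for the target mesh
```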
arXiv Detail & Related papers (2022-04-06T21:39:24Z) - OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z) - Implicit Feature Networks for Texture Completion from Partial 3D Data [56.93289686162015]
We generalize IF-Nets to texture completion from partial textured scans of humans and arbitrary objects.
Our model successfully inpaints the missing texture parts consistently with the completed geometry.
arXiv Detail & Related papers (2020-09-20T15:48:17Z)