Deep Tiling: Texture Tile Synthesis Using a Deep Learning Approach
- URL: http://arxiv.org/abs/2103.07992v1
- Date: Sun, 14 Mar 2021 18:17:37 GMT
- Title: Deep Tiling: Texture Tile Synthesis Using a Deep Learning Approach
- Authors: Vasilis Toulatzis, Ioannis Fudos
- Abstract summary: In many cases, a texture image cannot cover a large 3D model surface because of its limited resolution.
Deep-learning-based texture synthesis has proven very effective in such cases.
We propose a novel approach to example-based texture synthesis using a robust deep learning process.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Texturing is a fundamental process in computer graphics: texture is
leveraged to enhance the visualization of a 3D scene. In many cases, a texture
image cannot cover a large 3D model surface because of its limited resolution,
and conventional techniques such as repeating, mirrored repeating, or
clamp-to-edge do not yield visually acceptable results. Deep-learning-based
texture synthesis has proven very effective in such cases, but all deep texture
synthesis methods that attempt to create higher-resolution textures are limited
by GPU memory. In this paper, we propose a novel approach to example-based
texture synthesis that uses a robust deep learning process to create tiles of
arbitrary resolution that resemble the structural components of an input
texture. As a result, our method is, first, far less memory-limited, since each
newly synthesized texture tile is small and is merged with the original
texture, and, second, can easily produce missing parts of a large texture.
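The conventional techniques the abstract dismisses are the standard texture addressing modes. As a point of reference (this is not the paper's method, just a minimal illustration of those modes), the three can be sketched as 1-D coordinate mappings:

```python
# Minimal sketch of the three conventional tiling modes named in the
# abstract, expressed as 1-D texture-coordinate mappings.
# `size` is the texture resolution along one axis.

def wrap_repeat(i: int, size: int) -> int:
    # Repeat: coordinates wrap around the texture edge.
    return i % size

def wrap_mirror(i: int, size: int) -> int:
    # Mirror repeat: every other tile is reflected.
    period = 2 * size
    j = i % period
    return j if j < size else period - 1 - j

def wrap_clamp(i: int, size: int) -> int:
    # Clamp to edge: out-of-range coordinates stick to the border texel.
    return max(0, min(i, size - 1))

# Covering a 10-texel span with a 4-texel texture:
texture = [0, 1, 2, 3]  # texel indices
print([texture[wrap_repeat(i, 4)] for i in range(10)])  # [0,1,2,3,0,1,2,3,0,1]
print([texture[wrap_mirror(i, 4)] for i in range(10)])  # [0,1,2,3,3,2,1,0,0,1]
print([texture[wrap_clamp(i, 4)] for i in range(10)])   # [0,1,2,3,3,3,3,3,3,3]
```

The visible seams and banding these mappings produce on structured textures are exactly what motivates synthesizing new tiles that match the input's structure instead.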
Related papers
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z) - TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z) - Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering [47.78392889256976]
Paint-it is a text-driven high-fidelity texture map synthesis method for 3D rendering.
Paint-it synthesizes texture maps from a text description by synthesis-through-optimization, exploiting Score-Distillation Sampling (SDS).
We show that DC-PBR inherently schedules the optimization curriculum according to texture frequency and naturally filters out the noisy signals from SDS.
arXiv Detail & Related papers (2023-12-18T17:17:08Z) - TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D assets using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, applying the denoising model across views and aggregating the denoised predictions into a texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z) - TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z) - AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
arXiv Detail & Related papers (2022-04-06T21:39:24Z) - Implicit Feature Networks for Texture Completion from Partial 3D Data [56.93289686162015]
We generalize IF-Nets to texture completion from partial textured scans of humans and arbitrary objects.
Our model successfully in-paints the missing texture parts in a manner consistent with the completed geometry.
arXiv Detail & Related papers (2020-09-20T15:48:17Z) - On Demand Solid Texture Synthesis Using Deep 3D Networks [3.1542695050861544]
This paper describes a novel approach for on demand texture synthesis based on a deep learning framework.
A generative network is trained to synthesize coherent portions of solid textures of arbitrary sizes.
The synthesized volumes achieve visual quality at least equivalent to state-of-the-art patch-based approaches.
arXiv Detail & Related papers (2020-01-13T20:59:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.