TGHop: An Explainable, Efficient and Lightweight Method for Texture
Generation
- URL: http://arxiv.org/abs/2107.04020v1
- Date: Thu, 8 Jul 2021 17:56:58 GMT
- Title: TGHop: An Explainable, Efficient and Lightweight Method for Texture
Generation
- Authors: Xuejing Lei, Ganning Zhao, Kaitai Zhang, C.-C. Jay Kuo
- Abstract summary: TGHop (an acronym of Texture Generation PixelHop) is proposed in this work.
TGHop is small in its model size, mathematically transparent, efficient in training and inference, and able to generate high quality texture.
It is demonstrated by experimental results that TGHop can generate texture images of superior quality with a small model size and at a fast speed.
- Score: 28.185787626054704
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An explainable, efficient and lightweight method for texture generation,
called TGHop (an acronym of Texture Generation PixelHop), is proposed in this
work. Although synthesis of visually pleasant texture can be achieved by deep
neural networks, the associated models are large in size, difficult to explain
in theory, and computationally expensive in training. In contrast, TGHop is
small in its model size, mathematically transparent, efficient in training and
inference, and able to generate high quality texture. Given an exemplary
texture, TGHop first crops many sample patches out of it to form a collection
of sample patches called the source. Then, it analyzes pixel statistics of
samples from the source and obtains a sequence of fine-to-coarse subspaces for
these patches by using the PixelHop++ framework. To generate texture patches
with TGHop, we begin with the coarsest subspace, which is called the core, and
attempt to generate samples in each subspace by following the distribution of
real samples. Finally, texture patches are stitched to form texture images of a
large size. It is demonstrated by experimental results that TGHop can generate
texture images of superior quality with a small model size and at a fast speed.
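
The abstract describes a concrete pipeline: crop a "source" of patches from the exemplar, analyze them into fine-to-coarse subspaces with PixelHop++, sample in the coarsest subspace (the "core") following the statistics of real samples, map back coarse-to-fine, and stitch patches into a large texture. The sketch below is a minimal illustration of that pipeline, not the authors' implementation: it assumes plain PCA as a stand-in for the channel-wise Saab transforms of PixelHop++, an independent Gaussian fit as the "follow the distribution of real samples" step, and naive tiling without boundary blending. All function names, patch sizes, and subspace dimensions are illustrative assumptions.

```python
# Minimal sketch of the TGHop pipeline described in the abstract.
# PixelHop++ (channel-wise Saab transforms) is approximated with plain PCA, and the
# core-sampling step is approximated by an independent Gaussian fit to the core
# coefficients of the source patches. Names and hyper-parameters are assumptions.
import numpy as np
from sklearn.decomposition import PCA

def crop_source_patches(texture, patch=32, n_patches=2000, rng=None):
    """Crop random patches from the exemplary texture to form the 'source'."""
    rng = rng or np.random.default_rng(0)
    H, W, _ = texture.shape
    ys = rng.integers(0, H - patch, n_patches)
    xs = rng.integers(0, W - patch, n_patches)
    return np.stack([texture[y:y + patch, x:x + patch] for y, x in zip(ys, xs)])

def fit_fine_to_coarse_subspaces(patches, dims=(256, 64, 16)):
    """Stand-in for the PixelHop++ fine-to-coarse analysis: a cascade of PCAs.
    The last (smallest) subspace plays the role of the 'core'."""
    feats = patches.reshape(len(patches), -1).astype(np.float64)
    stages = []
    for d in dims:
        pca = PCA(n_components=d).fit(feats)
        feats = pca.transform(feats)
        stages.append(pca)
    return stages, feats  # feats: core coefficients of the source samples

def generate_patches(stages, core_feats, n_gen=64, rng=None):
    """Sample in the core by matching per-dimension statistics of real samples,
    then map coarse-to-fine back to pixel space."""
    rng = rng or np.random.default_rng(1)
    mu, sigma = core_feats.mean(0), core_feats.std(0)
    z = rng.normal(mu, sigma, size=(n_gen, len(mu)))  # coarse samples in the core
    for pca in reversed(stages):                      # successive refinement
        z = pca.inverse_transform(z)
    return z

def stitch(patches_flat, patch=32, grid=8):
    """Tile generated patches into one large texture image (no blending here)."""
    tiles = patches_flat.reshape(-1, patch, patch, 3)
    rows = [np.concatenate(tiles[r * grid:(r + 1) * grid], axis=1) for r in range(grid)]
    return np.clip(np.concatenate(rows, axis=0), 0, 255).astype(np.uint8)

# Usage (hypothetical): texture = imageio.imread("exemplar.png")
# src = crop_source_patches(texture)
# stages, core = fit_fine_to_coarse_subspaces(src)
# big_texture = stitch(generate_patches(stages, core))
```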
Related papers
- TriTex: Learning Texture from a Single Mesh via Triplane Semantic Features [78.13246375582906]
We present a novel approach that learns a volumetric texture field from a single textured mesh by mapping semantic features to surface target colors.
Our approach achieves superior texture quality across 3D models in applications like game development.
arXiv Detail & Related papers (2025-03-20T18:35:03Z)
- Real-time Free-view Human Rendering from Sparse-view RGB Videos using Double Unprojected Textures [87.80984588545589]
Real-time free-view human rendering from sparse-view RGB inputs is a challenging task due to the sensor scarcity and the tight time budget.
We present Double Unprojected Textures, which at the core disentangles coarse geometric deformation estimation from appearance synthesis.
arXiv Detail & Related papers (2024-12-17T18:57:38Z)
- TEXGen: a Generative Diffusion Model for Mesh Textures [63.43159148394021]
We focus on the fundamental problem of learning in the UV texture space itself.
We propose a scalable network architecture that interleaves convolutions on UV maps with attention layers on point clouds.
We train a 700 million parameter diffusion model that can generate UV texture maps guided by text prompts and single-view images.
arXiv Detail & Related papers (2024-11-22T05:22:11Z)
- TexGen: Text-Guided 3D Texture Generation with Multi-view Sampling and Resampling [37.67373829836975]
We present TexGen, a novel multi-view sampling and resampling framework for texture generation.
Our proposed method produces significantly better texture quality for diverse 3D objects with a high degree of view consistency.
Our proposed texture generation technique can also be applied to texture editing while preserving the original identity.
arXiv Detail & Related papers (2024-08-02T14:24:40Z)
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
- FlashTex: Fast Relightable Mesh Texturing with LightControlNet [105.4683880648901]
We introduce LightControlNet, a new text-to-image model based on the ControlNet architecture.
We apply our approach to disentangle material/reflectance in the resulting texture so that the mesh can be properly lit and rendered in any lighting environment.
arXiv Detail & Related papers (2024-02-20T18:59:00Z)
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D, using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, applying a set of denoising models and aggregating their predictions into a shared texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
- Local Padding in Patch-Based GANs for Seamless Infinite-Sized Texture Synthesis [0.8192907805418583]
We propose a novel approach for generating texture images at arbitrarily large sizes using GANs with patch-by-patch generation.
Instead of zero-padding, the generator uses local padding, which shares border features between the generated patches (see the sketch after this list).
Our method represents a significant advance over existing GAN-based texture models in both the quality and the diversity of the generated textures.
arXiv Detail & Related papers (2023-09-05T15:57:23Z)
- AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
arXiv Detail & Related papers (2022-04-06T21:39:24Z)
- On Demand Solid Texture Synthesis Using Deep 3D Networks [3.1542695050861544]
This paper describes a novel approach for on-demand texture synthesis based on a deep learning framework.
A generative network is trained to synthesize coherent portions of solid textures of arbitrary sizes.
The synthesized volumes achieve visual quality at least on par with state-of-the-art patch-based approaches.
arXiv Detail & Related papers (2020-01-13T20:59:14Z)
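
One concrete mechanism mentioned in the list above is the local padding used in the patch-based GAN entry (referenced there as the sketch after this list). The snippet below is an illustrative reading of that one-sentence summary, not the paper's code: it assumes patches are generated in raster order so left and top neighbours already exist, uses a 3x3 "valid" convolution, and falls back to zero padding on the right/bottom and at canvas edges. The function name and all shapes are hypothetical.

```python
# Illustrative sketch of "local padding": a patch's convolution is padded with the
# border features of its already-generated neighbours instead of zeros, so adjacent
# patches see consistent context at their seams.
import torch
import torch.nn.functional as F

def conv_with_local_padding(x, weight, left=None, top=None):
    """x: (N, C, H, W) features of the current patch.
    left / top: features of neighbouring patches (same shape) or None at canvas edges.
    A 3x3 convolution needs 1 pixel of context per side; we take it from the
    neighbours' adjacent borders where available and fall back to zeros elsewhere."""
    pad_l = left[:, :, :, -1:] if left is not None else torch.zeros_like(x[:, :, :, :1])
    pad_t = top[:, :, -1:, :] if top is not None else torch.zeros_like(x[:, :, :1, :])
    x = torch.cat([pad_l, x, torch.zeros_like(x[:, :, :, :1])], dim=3)  # pad width
    pad_t = F.pad(pad_t, (1, 1))                                        # match new width
    x = torch.cat([pad_t, x, torch.zeros_like(x[:, :, :1, :])], dim=2)  # pad height
    return F.conv2d(x, weight)  # 'valid' conv: output spatial size is H x W again
```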
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.