CTGAN: Semantic-guided Conditional Texture Generator for 3D Shapes
- URL: http://arxiv.org/abs/2402.05728v1
- Date: Thu, 8 Feb 2024 15:00:15 GMT
- Title: CTGAN: Semantic-guided Conditional Texture Generator for 3D Shapes
- Authors: Yi-Ting Pan, Chai-Rong Lee, Shu-Ho Fan, Jheng-Wei Su, Jia-Bin Huang,
Yung-Yu Chuang, Hung-Kuo Chu
- Abstract summary: Generative networks such as StyleGAN have advanced image synthesis, but generating 3D objects with high-fidelity textures is still not well explored.
We propose the Semantic-guided Conditional Texture Generator (CTGAN), producing high-quality textures for 3D shapes that are consistent with the viewing angle.
CTGAN utilizes the disentangled nature of StyleGAN to finely manipulate the input latent codes, enabling explicit control over both the style and structure of the generated textures.
- Score: 23.553141179900763
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The entertainment industry relies on 3D visual content to create immersive
experiences, but traditional methods for creating textured 3D models can be
time-consuming and subjective. Generative networks such as StyleGAN have
advanced image synthesis, but generating 3D objects with high-fidelity textures
is still not well explored, and existing methods have limitations. We propose
the Semantic-guided Conditional Texture Generator (CTGAN), producing
high-quality textures for 3D shapes that are consistent with the viewing angle
while respecting shape semantics. CTGAN utilizes the disentangled nature of
StyleGAN to finely manipulate the input latent codes, enabling explicit control
over both the style and structure of the generated textures. A coarse-to-fine
encoder architecture is introduced to enhance control over the structure of the
resulting textures via input segmentation. Experimental results show that CTGAN
outperforms existing methods on multiple quality metrics and achieves
state-of-the-art performance on texture generation in both conditional and
unconditional settings.
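The abstract describes CTGAN only at a high level: a coarse-to-fine encoder turns an input segmentation into structure codes, which are combined with separately controlled style codes before being fed to a StyleGAN generator. The Python sketch below is purely illustrative of that style/structure split in a StyleGAN W+ latent space; the encoder layout, the layer split, and all names (CoarseToFineEncoder, compose_latents, split_layer) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CoarseToFineEncoder(nn.Module):
    """Hypothetical encoder: maps a rendered segmentation map to per-layer W+ codes."""
    def __init__(self, num_layers: int = 18, w_dim: int = 512):
        super().__init__()
        self.num_layers = num_layers
        self.w_dim = w_dim
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(128, num_layers * w_dim)

    def forward(self, seg: torch.Tensor) -> torch.Tensor:
        feat = self.backbone(seg)                        # (B, 128)
        w = self.head(feat)                              # (B, num_layers * w_dim)
        return w.view(-1, self.num_layers, self.w_dim)   # (B, 18, 512) structure codes

def compose_latents(structure_w: torch.Tensor,
                    style_w: torch.Tensor,
                    split_layer: int = 8) -> torch.Tensor:
    """Coarse (early) layers take the structure code, fine (late) layers the style code."""
    return torch.cat([structure_w[:, :split_layer], style_w[:, split_layer:]], dim=1)

# Shape-only usage; a pretrained StyleGAN generator would consume w_plus per view.
encoder = CoarseToFineEncoder()
seg_map = torch.randn(1, 1, 256, 256)           # view-dependent segmentation render (placeholder)
structure_w = encoder(seg_map)                  # (1, 18, 512)
style_w = torch.randn(1, 18, 512)               # sampled or inverted style code (placeholder)
w_plus = compose_latents(structure_w, style_w)  # (1, 18, 512) combined W+ latent
```

In an actual CTGAN-style pipeline the combined W+ latent would drive a pretrained StyleGAN generator rendered per view; the snippet only exercises the tensor shapes behind the style/structure separation.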
Related papers
- LaFiTe: A Generative Latent Field for 3D Native Texturing [72.05710323154288]
Existing native approaches are hindered by the absence of a powerful and versatile representation, which severely limits the fidelity and generality of their generated textures. We introduce LaFiTe, which generates high-quality textures constrained by a sparse color representation and UV parameterization.
arXiv Detail & Related papers (2025-12-04T13:33:49Z)
- TEXTRIX: Latent Attribute Grid for Native Texture Generation and Beyond [42.93031959503468]
TEXTRIX is a native 3D attribute generation framework for high-fidelity texture synthesis and downstream applications. Our approach constructs a latent 3D attribute grid and leverages a Diffusion Transformer equipped with sparse attention. Built upon this native representation, the framework naturally extends to high-precision 3D segmentation by training the same architecture to predict semantic attributes on the grid.
arXiv Detail & Related papers (2025-12-02T18:18:20Z)
- End-to-End Fine-Tuning of 3D Texture Generation using Differentiable Rewards [8.953379216683732]
We propose an end-to-end differentiable, reinforcement-learning-free framework that embeds human feedback, expressed as differentiable reward functions, directly into the 3D texture pipeline. By back-propagating preference signals through both geometric and appearance modules, our method generates textures that respect the 3D geometric structure and align with desired criteria.
arXiv Detail & Related papers (2025-06-23T06:24:12Z)
- UniTEX: Universal High Fidelity Generative Texturing for 3D Shapes [35.667175445637604]
We present UniTEX, a novel two-stage 3D texture generation framework. UniTEX achieves superior visual quality and texture integrity compared to existing approaches.
arXiv Detail & Related papers (2025-05-29T08:58:41Z)
- MVPainter: Accurate and Detailed 3D Texture Generation via Multi-View Diffusion with Geometric Control [1.8463601973573158]
We investigate 3D texture generation through the lens of three core dimensions: reference-texture alignment, geometry-texture consistency, and local texture quality. We propose MVPainter, which employs data filtering and augmentation strategies to enhance texture fidelity and detail. We extract physically-based rendering (PBR) attributes from the generated views to produce PBR meshes suitable for real-world rendering applications.
arXiv Detail & Related papers (2025-05-19T02:40:24Z)
- TriTex: Learning Texture from a Single Mesh via Triplane Semantic Features [78.13246375582906]
We present a novel approach that learns a volumetric texture field from a single textured mesh by mapping semantic features to surface target colors.
Our approach achieves superior texture quality across 3D models in applications like game development.
arXiv Detail & Related papers (2025-03-20T18:35:03Z)
- Pandora3D: A Comprehensive Framework for High-Quality 3D Shape and Texture Generation [56.862552362223425]
This report presents a comprehensive framework for generating high-quality 3D shapes and textures from diverse input prompts.
The framework consists of 3D shape generation and texture generation.
This report details the system architecture, experimental results, and potential future directions to improve and expand the framework.
arXiv Detail & Related papers (2025-02-20T04:22:30Z)
- Make-A-Texture: Fast Shape-Aware Texture Generation in 3 Seconds [11.238020531599405]
We present Make-A-Texture, a new framework that efficiently synthesizes high-resolution texture maps from textual prompts for given 3D geometries.
A significant feature of our method is its remarkable efficiency, achieving a full texture generation within an end-to-end runtime of just 3.07 seconds on a single NVIDIA H100 GPU.
Our work significantly improves the applicability and practicality of texture generation models for real-world 3D content creation, including interactive creation and text-guided texture editing.
arXiv Detail & Related papers (2024-12-10T18:58:29Z)
- Meta 3D TextureGen: Fast and Consistent Texture Generation for 3D Objects [54.80813150893719]
We introduce Meta 3D TextureGen: a new feedforward method comprised of two sequential networks aimed at generating high-quality textures in less than 20 seconds.
Our method achieves state-of-the-art results in quality and speed by conditioning a text-to-image model on 3D semantics in 2D space and fusing them into a complete and high-resolution UV texture map.
In addition, we introduce a texture enhancement network that is capable of up-scaling any texture by an arbitrary ratio, producing 4k pixel resolution textures.
arXiv Detail & Related papers (2024-07-02T17:04:34Z)
- EucliDreamer: Fast and High-Quality Texturing for 3D Models with Depth-Conditioned Stable Diffusion [5.158983929861116]
We present EucliDreamer, a simple and effective method to generate textures for 3D models given text prompts.
The texture is parameterized as an implicit function on the 3D surface, which is optimized with the Score Distillation Sampling (SDS) process and differentiable rendering.
arXiv Detail & Related papers (2024-04-16T04:44:16Z)
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- CG3D: Compositional Generation for Text-to-3D via Gaussian Splatting [57.14748263512924]
CG3D is a method for compositionally generating scalable 3D assets.
Gaussian radiance fields, parameterized to allow for compositions of objects, enable semantically and physically consistent scenes.
arXiv Detail & Related papers (2023-11-29T18:55:38Z)
- TAPS3D: Text-Guided 3D Textured Shape Generation from Pseudo Supervision [114.56048848216254]
We present a novel framework, TAPS3D, to train a text-guided 3D shape generator with pseudo captions.
Based on rendered 2D images, we retrieve relevant words from the CLIP vocabulary and construct pseudo captions using templates.
Our constructed captions provide high-level semantic supervision for generated 3D shapes.
arXiv Detail & Related papers (2023-03-23T13:53:16Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
- Texturify: Generating Textures on 3D Shape Surfaces [34.726179801982646]
We propose Texturify, which learns to predict textures directly on the surface of an input 3D shape.
Our method does not require any 3D color supervision to learn the texturing of 3D objects.
arXiv Detail & Related papers (2022-04-05T18:00:04Z)
- Lifting 2D StyleGAN for 3D-Aware Face Generation [52.8152883980813]
We propose a framework, called LiftedGAN, that disentangles and lifts a pre-trained StyleGAN2 for 3D-aware face generation.
Our model is "3D-aware" in the sense that it is able to (1) disentangle the latent space of StyleGAN2 into texture, shape, viewpoint, and lighting, and (2) generate 3D components for synthetic images.
arXiv Detail & Related papers (2020-11-26T05:02:09Z)
- Convolutional Generation of Textured 3D Meshes [34.20939983046376]
We propose a framework that can generate triangle meshes and associated high-resolution texture maps, using only 2D supervision from single-view natural images.
A key contribution of our work is the encoding of the mesh and texture as 2D representations, which are semantically aligned and can be easily modeled by a 2D convolutional GAN.
We demonstrate the efficacy of our method on Pascal3D+ Cars and CUB, both in an unconditional setting and in settings where the model is conditioned on class labels, attributes, and text.
arXiv Detail & Related papers (2020-06-13T15:23:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.