3DTextureTransformer: Geometry Aware Texture Generation for Arbitrary Mesh Topology
- URL: http://arxiv.org/abs/2403.04225v1
- Date: Thu, 7 Mar 2024 05:01:07 GMT
- Title: 3DTextureTransformer: Geometry Aware Texture Generation for Arbitrary Mesh Topology
- Authors: Dharma KC, Clayton T. Morrison
- Abstract summary: Learning to generate textures for a novel 3D mesh given a collection of 3D meshes and real-world 2D images is an important problem with applications in various domains such as 3D simulation, augmented and virtual reality, gaming, architecture, and design.
Existing solutions either do not produce high-quality textures or deform the original high-resolution input mesh into a regular grid to make generation easier, losing the original mesh topology in the process.
We present a novel framework called the 3DTextureTransformer that enables us to generate high-quality textures without deforming the original, high-resolution input mesh.
- Score: 1.4349415652822481
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning to generate textures for a novel 3D mesh given a collection of 3D
meshes and real-world 2D images is an important problem with applications in
various domains such as 3D simulation, augmented and virtual reality, gaming,
architecture, and design. Existing solutions either do not produce
high-quality textures or deform the original high-resolution input mesh into a
regular grid to make generation easier, losing the original mesh topology in
the process. In this paper, we present a novel framework called the
3DTextureTransformer that enables us to generate high-quality textures without
deforming the original, high-resolution input mesh. Our solution, a hybrid of
geometric deep learning and StyleGAN-like architecture, is flexible enough to
work on arbitrary mesh topologies and also easily extensible to texture
generation for point cloud representations. Our solution employs a
message-passing framework in 3D in conjunction with a StyleGAN-like
architecture for 3D texture generation. The architecture achieves
state-of-the-art performance among a class of solutions that can learn from a
collection of 3D geometry and real-world 2D images while working with any
arbitrary mesh topology.
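The abstract describes the architecture only at a high level; the following PyTorch sketch (all class and parameter names are hypothetical, not the authors' code) illustrates the stated combination of message passing over mesh vertices with StyleGAN-like style modulation:

```python
import torch
import torch.nn as nn

class StyleModulatedMeshLayer(nn.Module):
    """One round of message passing over mesh vertices, with the per-vertex
    update modulated by a StyleGAN-like latent w. Hypothetical sketch, not
    the authors' implementation."""

    def __init__(self, feat_dim: int, style_dim: int):
        super().__init__()
        self.message = nn.Linear(2 * feat_dim, feat_dim)  # sender || receiver -> message
        self.to_scale = nn.Linear(style_dim, feat_dim)    # style -> channel-wise scale
        self.to_shift = nn.Linear(style_dim, feat_dim)    # style -> channel-wise shift

    def forward(self, x, edges, w):
        # x: (V, F) vertex features; edges: (E, 2) vertex-index pairs; w: (style_dim,)
        src, dst = edges[:, 0], edges[:, 1]
        msg = self.message(torch.cat([x[src], x[dst]], dim=-1))
        agg = torch.zeros_like(x).index_add_(0, dst, msg)  # sum incoming messages
        h = torch.relu(x + agg)                            # residual vertex update
        return self.to_scale(w) * h + self.to_shift(w)     # AdaIN-style modulation

class TextureHead(nn.Module):
    """Stack of modulated message-passing layers with a final RGB read-out."""

    def __init__(self, feat_dim: int = 64, style_dim: int = 128, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            [StyleModulatedMeshLayer(feat_dim, style_dim) for _ in range(num_layers)])
        self.to_rgb = nn.Linear(feat_dim, 3)

    def forward(self, x, edges, w):
        for layer in self.layers:
            x = layer(x, edges, w)
        return torch.sigmoid(self.to_rgb(x))  # per-vertex RGB in [0, 1]
```

Because the layer consumes only a vertex-feature matrix and an arbitrary edge list, it is topology-agnostic; swapping mesh edges for k-nearest-neighbor edges would extend the same head to point clouds, matching the extensibility claim in the abstract.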
Related papers
- Meta 3D TextureGen: Fast and Consistent Texture Generation for 3D Objects [54.80813150893719]
We introduce Meta 3D TextureGen: a new feedforward method comprising two sequential networks aimed at generating high-quality textures in less than 20 seconds.
Our method achieves state-of-the-art results in quality and speed by conditioning a text-to-image model on 3D semantics in 2D space and fusing the outputs into a complete and high-resolution UV texture map.
In addition, we introduce a texture enhancement network that is capable of up-scaling any texture by an arbitrary ratio, producing 4k pixel resolution textures.
arXiv Detail & Related papers (2024-07-02T17:04:34Z)
- CraftsMan: High-fidelity Mesh Generation with 3D Native Generation and Interactive Geometry Refiner [34.78919665494048]
CraftsMan can generate high-fidelity 3D geometries with highly varied shapes, regular mesh topologies, and detailed surfaces.
Our method achieves high efficacy in producing superior-quality 3D assets compared to existing methods.
arXiv Detail & Related papers (2024-05-23T18:30:12Z)
- Mesh2Tex: Generating Mesh Textures from Image Queries [45.32242590651395]
We present Mesh2Tex, which learns realistic object textures from uncorrelated collections of 3D object geometry and real-world images.
In particular, the textures generated for an object mesh can be made to match real image observations.
arXiv Detail & Related papers (2023-04-12T13:58:25Z)
- Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion [115.82306502822412]
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing.
A corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing.
We study the challenging problem of 3D GAN inversion where a latent code is predicted given a single face image to faithfully recover its 3D shapes and detailed textures.
arXiv Detail & Related papers (2022-12-14T18:49:50Z)
- Pruning-based Topology Refinement of 3D Mesh using a 2D Alpha Mask [6.103988053817792]
We present a method to refine the topology of any 3D mesh through a face-pruning strategy.
Our solution leverages a differentiable renderer that renders each face as a 2D soft map.
Because our module is agnostic to the network that produces the 3D mesh, it can be easily plugged into any self-supervised image-based 3D reconstruction pipeline.
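As a rough illustration of the pruning decision (assuming per-face soft maps have already been produced by a differentiable renderer; the function name and threshold are hypothetical, not the paper's code):

```python
import torch

def prune_faces_by_alpha(face_soft_maps, alpha_mask, keep_thresh=0.5):
    """Keep only faces whose rendered 2D soft map mostly overlaps the target
    alpha mask. face_soft_maps: (F, H, W) per-face soft coverage from a
    differentiable renderer; alpha_mask: (H, W) in {0, 1}. Hypothetical sketch.
    """
    inside = (face_soft_maps * alpha_mask).flatten(1).sum(dim=1)  # coverage inside mask
    total = face_soft_maps.flatten(1).sum(dim=1).clamp(min=1e-8)  # total coverage
    return inside / total >= keep_thresh                          # (F,) keep decision

# Usage: faces = faces[prune_faces_by_alpha(soft_maps, mask)]
```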
arXiv Detail & Related papers (2022-10-17T14:51:38Z)
- GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images [72.15855070133425]
We introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures.
GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes and human characters to buildings.
arXiv Detail & Related papers (2022-09-22T17:16:19Z)
- 3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations [81.45521258652734]
We propose a method to create plausible geometric and texture style variations of 3D objects.
Our method can create many novel stylized shapes, resulting in effortless 3D content creation and style-aware data augmentation.
arXiv Detail & Related papers (2021-08-30T02:28:31Z)
- Deep Hybrid Self-Prior for Full 3D Mesh Generation [57.78562932397173]
We propose to exploit a novel hybrid 2D-3D self-prior in deep neural networks to significantly improve the geometry quality.
In particular, we first generate an initial mesh using a 3D convolutional neural network with 3D self-prior, and then encode both 3D information and color information in the 2D UV atlas.
Our method recovers the 3D textured mesh model of high quality from sparse input, and outperforms the state-of-the-art methods in terms of both the geometry and texture quality.
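A minimal sketch of the 2D UV-atlas encoding described above (nearest-vertex splat only; a faithful version would rasterize whole triangles, and all names are hypothetical):

```python
import torch

def encode_uv_atlas(uv, positions, colors, res=256):
    """Pack per-vertex 3D coordinates (xyz) and color (rgb) into a
    six-channel 2D UV atlas. uv: (V, 2) in [0, 1]; positions: (V, 3);
    colors: (V, 3). Hypothetical sketch, not the paper's code.
    """
    atlas = torch.zeros(6, res, res)
    px = (uv * (res - 1)).long()                   # UV coords -> pixel indices
    feat = torch.cat([positions, colors], dim=-1)  # (V, 6) xyz + rgb per vertex
    atlas[:, px[:, 1], px[:, 0]] = feat.t()        # splat vertex features into atlas
    return atlas
```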
arXiv Detail & Related papers (2021-08-18T07:44:21Z)
- 3DBooSTeR: 3D Body Shape and Texture Recovery [76.91542440942189]
3DBooSTeR is a novel method to recover a textured 3D body mesh from a partial 3D scan.
The proposed approach decouples the shape and texture completion into two sequential tasks.
arXiv Detail & Related papers (2020-10-23T21:07:59Z)