Texturify: Generating Textures on 3D Shape Surfaces
- URL: http://arxiv.org/abs/2204.02411v1
- Date: Tue, 5 Apr 2022 18:00:04 GMT
- Title: Texturify: Generating Textures on 3D Shape Surfaces
- Authors: Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai
- Abstract summary: We propose Texturify, a GAN-based method that learns to generate textures on 3D shape surfaces.
Our method does not require any 3D color supervision to learn the texturing of 3D objects.
- Score: 34.726179801982646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Texture cues on 3D objects are key to compelling visual representations, with
the possibility to create high visual fidelity with inherent spatial
consistency across different views. Since the availability of textured 3D
shapes remains very limited, learning a 3D-supervised data-driven method that
predicts a texture based on the 3D input is very challenging. We thus propose
Texturify, a GAN-based method that leverages a 3D shape dataset of an object
class and learns to reproduce the distribution of appearances observed in real
images by generating high-quality textures. In particular, our method does not
require any 3D color supervision or correspondence between shape geometry and
images to learn the texturing of 3D objects. Texturify operates directly on the
surface of the 3D objects by introducing face convolutional operators on a
hierarchical 4-RoSy parametrization to generate plausible object-specific
textures. Employing differentiable rendering and adversarial losses that
critique individual views and consistency across views, we effectively learn
the high-quality surface texturing distribution from real-world images.
Experiments on car and chair shape collections show that our approach
outperforms the state of the art by an average of 22% in FID score.
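The FID comparison above can be made concrete with a small sketch. This is not the paper's evaluation code: the Fréchet Inception Distance compares Gaussians fitted to deep image features (for Texturify, features of rendered views), whereas the toy statistics below are illustrative assumptions.

```python
# Hypothetical sketch of the FID formula:
#   FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})
# between two Gaussians N(mu1, S1) and N(mu2, S2) fitted to feature sets.
import numpy as np

def sym_sqrt(m):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2)."""
    s1_half = sym_sqrt(sigma1)
    # Tr((S1 S2)^{1/2}) equals Tr((S1^{1/2} S2 S1^{1/2})^{1/2}), which is
    # symmetric and therefore safe for the eigendecomposition above.
    covmean = sym_sqrt(s1_half @ sigma2 @ s1_half)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Identical distributions give FID 0; shifting the mean raises it.
mu = np.zeros(4)
sigma = np.eye(4)
print(round(fid(mu, sigma, mu, sigma), 6))        # -> 0.0
print(round(fid(mu, sigma, mu + 1.0, sigma), 6))  # -> 4.0
```

Lower is better, so a 22% average improvement means the generated-texture feature distribution sits measurably closer to that of real images.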
Related papers
- Textured Mesh Saliency: Bridging Geometry and Texture for Human Perception in 3D Graphics [50.23625950905638]
We present a new dataset for textured mesh saliency, created through an innovative eye-tracking experiment in a six degrees of freedom (6-DOF) VR environment.
Our proposed model predicts saliency maps for textured mesh surfaces by treating each triangular face as an individual unit and assigning a saliency density value to reflect the importance of each local surface region.
arXiv Detail & Related papers (2024-12-11T08:27:33Z)
- Tactile DreamFusion: Exploiting Tactile Sensing for 3D Generation [39.702921832009466]
We introduce a new method that incorporates touch as an additional modality to improve the geometric details of generated 3D assets.
We design a lightweight 3D texture field to synthesize visual and tactile textures, guided by 2D diffusion model priors.
We are the first to leverage high-resolution tactile sensing to enhance geometric details for 3D generation tasks.
arXiv Detail & Related papers (2024-12-09T18:59:45Z)
- ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models [65.22994156658918]
We present a method that learns to generate multi-view images in a single denoising process from real-world data.
We design an autoregressive generation scheme that renders more 3D-consistent images at any viewpoint.
arXiv Detail & Related papers (2024-03-04T07:57:05Z)
- 3D-TexSeg: Unsupervised Segmentation of 3D Texture using Mutual Transformer Learning [11.510823733292519]
This paper presents an original framework for the unsupervised segmentation of the 3D texture on the mesh manifold.
We devise a mutual transformer-based system comprising a label generator and a cleaner.
Experiments on three publicly available datasets with diverse texture patterns demonstrate that the proposed framework outperforms standard and SOTA unsupervised techniques.
arXiv Detail & Related papers (2023-11-17T17:13:14Z)
- Single-view 3D Scene Reconstruction with High-fidelity Shape and Texture [47.44029968307207]
We propose a novel framework for simultaneous high-fidelity recovery of object shapes and textures from single-view images.
Our approach utilizes the proposed Single-view neural implicit Shape and Radiance field (SSR) representations to leverage both explicit 3D shape supervision and volume rendering.
A distinctive feature of our framework is its ability to generate fine-grained textured meshes while seamlessly integrating rendering capabilities into the single-view 3D reconstruction model.
arXiv Detail & Related papers (2023-11-01T11:46:15Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- Mesh2Tex: Generating Mesh Textures from Image Queries [45.32242590651395]
We present Mesh2Tex, which learns a realistic object texture manifold from uncorrelated collections of 3D object geometry and real-world images.
In particular, textures generated for a shape from an image query match the appearance observed in real images.
arXiv Detail & Related papers (2023-04-12T13:58:25Z)
- MvDeCor: Multi-view Dense Correspondence Learning for Fine-grained 3D Segmentation [91.6658845016214]
We propose to utilize self-supervised techniques in the 2D domain for fine-grained 3D shape segmentation tasks.
We render a 3D shape from multiple views, and set up a dense correspondence learning task within the contrastive learning framework.
As a result, the learned 2D representations are view-invariant and geometrically consistent.
arXiv Detail & Related papers (2022-08-18T00:48:15Z)
- 3D-GIF: 3D-Controllable Object Generation via Implicit Factorized Representations [31.095503715696722]
We propose the factorized representations which are view-independent and light-disentangled, and training schemes with randomly sampled light conditions.
We demonstrate the superiority of our method by visualizing factorized representations, re-lighted images, and albedo-textured meshes.
This is the first work that extracts albedo-textured meshes with unposed 2D images without any additional labels or assumptions.
arXiv Detail & Related papers (2022-03-12T15:23:17Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
- Mask2CAD: 3D Shape Prediction by Learning to Segment and Retrieve [54.054575408582565]
We propose to leverage existing large-scale datasets of 3D models to understand the underlying 3D structure of objects seen in an image.
We present Mask2CAD, which jointly detects objects in real-world images and, for each detected object, optimizes for the most similar CAD model and its pose.
This produces a clean, lightweight representation of the objects in an image.
arXiv Detail & Related papers (2020-07-26T00:08:37Z)
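Several entries above (MvDeCor in particular) describe dense correspondence learning within a contrastive framework: pixel features of the same surface point rendered from two views are pulled together, while all other pixels are pushed apart. A minimal, hypothetical sketch of the standard InfoNCE objective such methods typically build on; batch size, feature dimension, and temperature below are illustrative assumptions, not values from any of the papers.

```python
# Hypothetical InfoNCE sketch: feats_a[i] and feats_b[i] are features of the
# same surface point seen from two different rendered views (the positives);
# every other row in the batch acts as a negative.
import numpy as np

def info_nce(feats_a, feats_b, temperature=0.07):
    """Contrastive loss over paired feature rows; rows are L2-normalized."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature             # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal: view-a point i matches view-b point i.
    return float(-np.mean(np.diag(log_prob)))

rng = np.random.default_rng(0)
f = rng.normal(size=(8, 32))
# Perfectly matched views score a much lower loss than a random pairing.
print(info_nce(f, f) < info_nce(f, rng.normal(size=(8, 32))))  # -> True
```

Minimizing this loss is what makes the learned 2D representations view-invariant and geometrically consistent, as the MvDeCor summary notes.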
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.