On Demand Solid Texture Synthesis Using Deep 3D Networks
- URL: http://arxiv.org/abs/2001.04528v1
- Date: Mon, 13 Jan 2020 20:59:14 GMT
- Title: On Demand Solid Texture Synthesis Using Deep 3D Networks
- Authors: Jorge Gutierrez, Julien Rabin, Bruno Galerne, Thomas Hurtut
- Abstract summary: This paper describes a novel approach for on demand texture synthesis based on a deep learning framework.
A generative network is trained to synthesize coherent portions of solid textures of arbitrary sizes.
The synthesized volumes have good visual quality, at least equivalent to state-of-the-art patch-based approaches.
- Score: 3.1542695050861544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes a novel approach for on demand volumetric texture
synthesis based on a deep learning framework that allows for the generation of
high quality 3D data at interactive rates. Based on a few example images of
textures, a generative network is trained to synthesize coherent portions of
solid textures of arbitrary sizes that reproduce the visual characteristics of
the examples along some directions. To cope with the memory limitations and
computational complexity inherent to both high-resolution and 3D
processing on the GPU, only 2D textures referred to as "slices" are generated
during the training stage. These synthetic textures are compared to exemplar
images via a perceptual loss function based on a pre-trained deep network. The
proposed network is very lightweight (fewer than 100k parameters); it therefore
requires only a short training (i.e., a few hours) and is capable of very fast
generation (around a second for $256^3$ voxels) on a single GPU. Integrated
with a spatially seeded PRNG, the proposed generator network directly returns an
RGB value given a set of 3D coordinates. The synthesized volumes have good
visual quality, at least equivalent to that of state-of-the-art patch-based
approaches. They are naturally seamlessly tileable and can be fully generated
in parallel.
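As a concrete illustration of the training scheme described above, the following is a minimal, hypothetical PyTorch sketch: a small 3D convolutional generator is evaluated on noise seeded from integer voxel coordinates (a stand-in for the spatially seeded PRNG), and 2D slices of its output are matched to the exemplar through a Gram-matrix perceptual loss on pre-trained VGG-19 features. Layer counts, channel widths, the coordinate hash, and the chosen VGG layers are all illustrative assumptions rather than the authors' exact architecture or loss.

```python
# Minimal sketch (assumptions, not the authors' released code): a lightweight 3D
# convolutional generator evaluated on spatially seeded noise, trained by matching
# VGG-19 Gram statistics of 2D slices against the exemplar image.
import torch
import torch.nn as nn
import torchvision.models as models


class SolidTextureGenerator(nn.Module):
    """Small 3D conv net: noise volume (B, C, D, H, W) -> RGB volume (B, 3, D, H, W)."""

    def __init__(self, noise_channels=8, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(noise_channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv3d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv3d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv3d(width, 3, 1), nn.Sigmoid(),
        )

    def forward(self, noise):
        return self.net(noise)


def seeded_noise(coords, channels=8):
    """Deterministic per-voxel noise from integer 3D coordinates (a stand-in for the
    spatially seeded PRNG): any window of the infinite volume can be regenerated
    on demand from its coordinates alone."""
    D, H, W, _ = coords.shape
    flat = coords.reshape(-1, 3).long()
    h = (flat[:, 0] * 73856093) ^ (flat[:, 1] * 19349663) ^ (flat[:, 2] * 83492791)
    out = torch.empty(D * H * W, channels)
    g = torch.Generator()
    for i, seed in enumerate(h.tolist()):  # slow reference loop, for clarity only
        g.manual_seed(seed & 0x7FFFFFFF)
        out[i] = torch.rand(channels, generator=g)
    return out.view(D, H, W, channels).permute(3, 0, 1, 2).unsqueeze(0)


def gram(feat):
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)


# Frozen pre-trained VGG-19 features as the perceptual descriptor
# (ImageNet input normalization omitted for brevity).
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:21].eval()
for p in vgg.parameters():
    p.requires_grad_(False)


def perceptual_loss(generated_slice, exemplar, layers=(4, 9, 18)):
    """Sum of squared Gram-matrix differences over a few VGG feature depths."""
    return sum((gram(vgg[:k](generated_slice)) - gram(vgg[:k](exemplar))).pow(2).sum()
               for k in layers)


# One illustrative training step: synthesize a small volume, extract a 2D slice,
# and match its statistics to the (placeholder) exemplar image.
G = SolidTextureGenerator()
opt = torch.optim.Adam(G.parameters(), lr=1e-3)
exemplar = torch.rand(1, 3, 64, 64)  # placeholder for a real exemplar texture
coords = torch.stack(torch.meshgrid(
    torch.arange(16), torch.arange(64), torch.arange(64), indexing="ij"), dim=-1)
volume = G(seeded_noise(coords))     # (1, 3, 16, 64, 64)
z_slice = volume[:, :, 8]            # one axis-aligned slice, (1, 3, 64, 64)
opt.zero_grad()
loss = perceptual_loss(z_slice, exemplar)
loss.backward()
opt.step()
```

Because the loss only ever touches 2D slices, training memory scales with the slice resolution rather than with a full high-resolution volume, which matches the motivation for slice-based training given in the abstract.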
Related papers
- Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering [47.78392889256976]
Paint-it is a text-driven high-fidelity texture map synthesis method for 3D rendering.
Paint-it synthesizes texture maps from a text description by synthesis-through-optimization, exploiting Score-Distillation Sampling (SDS).
We show that DC-PBR inherently schedules the optimization curriculum according to texture frequency and naturally filters out the noisy signals from SDS.
arXiv Detail & Related papers (2023-12-18T17:17:08Z) - TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for given 3D geometries, using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, apply the diffusion model's denoiser on a set of 2D renders of the 3D object, and aggregate the denoising predictions on a shared latent texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z) - High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z) - CompNVS: Novel View Synthesis with Scene Completion [83.19663671794596]
We propose a generative pipeline performing on a sparse grid-based neural scene representation to complete unobserved scene parts.
We process encoded image features in 3D space with a geometry completion network and a subsequent texture inpainting network to extrapolate the missing area.
Photorealistic image sequences can be finally obtained via consistency-relevant differentiable rendering.
arXiv Detail & Related papers (2022-07-23T09:03:13Z) - AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
arXiv Detail & Related papers (2022-04-06T21:39:24Z) - Deep Tiling: Texture Tile Synthesis Using a Deep Learning Approach [0.0]
In many cases a texture image cannot cover a large 3D model surface because of its small resolution.
Deep learning based texture synthesis has proven to be very effective in such cases.
We propose a novel approach to example-based texture synthesis by using a robust deep learning process.
arXiv Detail & Related papers (2021-03-14T18:17:37Z) - STS-GAN: Can We Synthesize Solid Texture with High Fidelity from Arbitrary 2D Exemplar? [20.58364192180389]
We propose a novel generative adversarial nets-based framework (STS-GAN) to extend the given 2D exemplar to arbitrary 3D solid textures.
In STS-GAN, multi-scale 2D texture discriminators evaluate the similarity between the given 2D exemplar and slices from the generated 3D texture, encouraging the 3D texture generator to synthesize realistic solid textures (a minimal sketch of this slice-discriminator idea appears after this list).
arXiv Detail & Related papers (2021-02-08T02:51:34Z) - Novel-View Human Action Synthesis [39.72702883597454]
We present a novel 3D reasoning scheme to synthesize the target viewpoint.
We first estimate the 3D mesh of the target body and transfer the rough textures from the 2D images to the mesh.
We produce a semi-dense textured mesh by propagating the transferred textures both locally, within local geodesic neighborhoods, and globally.
arXiv Detail & Related papers (2020-07-06T15:11:51Z) - Deep Geometric Texture Synthesis [83.9404865744028]
We propose a novel framework for synthesizing geometric textures.
It learns texture statistics from local neighborhoods of a single reference 3D model.
Our network displaces mesh vertices in any direction, enabling synthesis of geometric textures.
arXiv Detail & Related papers (2020-06-30T19:36:38Z) - GramGAN: Deep 3D Texture Synthesis From 2D Exemplars [7.553635339893189]
We present a novel texture synthesis framework, enabling the generation of infinite, high-quality 3D textures given a 2D exemplar image.
Inspired by recent advances in natural texture synthesis, we train deep neural models to generate textures by non-linearly combining learned noise frequencies.
To achieve a highly realistic output conditioned on an exemplar patch, we propose a novel loss function that combines ideas from both style transfer and generative adversarial networks.
arXiv Detail & Related papers (2020-06-29T15:22:03Z) - Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
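As referenced in the STS-GAN entry above, the sketch below illustrates the slice-discriminator idea shared by several of these works: a 2D patch discriminator compares axis-aligned slices of a generated solid texture against crops of the 2D exemplar, so that purely 2D supervision can drive 3D synthesis. The network sizes, slicing routine, and loss are hypothetical and not taken from any of the listed papers.

```python
# Minimal sketch (illustrative assumptions, not the STS-GAN authors' design):
# a PatchGAN-style 2D discriminator scores axis-aligned slices of a generated
# 3D texture against crops of the 2D exemplar, so a single 2D image can
# supervise solid-texture synthesis.
import torch
import torch.nn as nn


class SliceDiscriminator(nn.Module):
    """2D patch discriminator returning per-patch real/fake logits."""

    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, width, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(width, 2 * width, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(2 * width, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def random_slices(volume, n=4):
    """Draw n axis-aligned slices from a cubic RGB volume (1, 3, S, S, S)."""
    _, _, S, _, _ = volume.shape
    out = []
    for _ in range(n):
        axis = int(torch.randint(0, 3, (1,)))
        idx = int(torch.randint(0, S, (1,)))
        if axis == 0:
            out.append(volume[:, :, idx])
        elif axis == 1:
            out.append(volume[:, :, :, idx])
        else:
            out.append(volume[..., idx])
    return torch.cat(out, dim=0)  # (n, 3, S, S)


# Discriminator update with a standard BCE GAN loss.
D_net = SliceDiscriminator()
bce = nn.BCEWithLogitsLoss()
fake_volume = torch.rand(1, 3, 64, 64, 64)   # stand-in for a generated solid texture
real_crops = torch.rand(4, 3, 64, 64)        # stand-in for crops of the 2D exemplar
fake_scores = D_net(random_slices(fake_volume).detach())
real_scores = D_net(real_crops)
d_loss = (bce(real_scores, torch.ones_like(real_scores)) +
          bce(fake_scores, torch.zeros_like(fake_scores)))
```

In a full training loop the 3D generator would be updated with the opposite objective on the same slice scores, typically at several scales as suggested by STS-GAN's multi-scale discriminators.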
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences of their use.