On Demand Solid Texture Synthesis Using Deep 3D Networks
- URL: http://arxiv.org/abs/2001.04528v1
- Date: Mon, 13 Jan 2020 20:59:14 GMT
- Title: On Demand Solid Texture Synthesis Using Deep 3D Networks
- Authors: Jorge Gutierrez, Julien Rabin, Bruno Galerne, Thomas Hurtut
- Abstract summary: This paper describes a novel approach for on-demand solid texture synthesis based on a deep learning framework.
A generative network is trained to synthesize coherent portions of solid textures of arbitrary sizes.
The synthesized volumes achieve visual quality at least on par with state-of-the-art patch-based approaches.
- Score: 3.1542695050861544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes a novel approach for on demand volumetric texture
synthesis based on a deep learning framework that allows for the generation of
high quality 3D data at interactive rates. Based on a few example images of
textures, a generative network is trained to synthesize coherent portions of
solid textures of arbitrary sizes that reproduce the visual characteristics of
the examples along some directions. To cope with memory limitations and
computation complexity that are inherent to both high resolution and 3D
processing on the GPU, only 2D textures referred to as "slices" are generated
during the training stage. These synthetic textures are compared to exemplar
images via a perceptual loss function based on a pre-trained deep network. The
proposed network is very light (fewer than 100k parameters); it therefore only
requires a modest training time (a few hours) and is capable of very fast
generation (around a second for $256^3$ voxels) on a single GPU. Integrated
with a spatially seeded PRNG, the proposed generator network directly returns an
RGB value given a set of 3D coordinates. The synthesized volumes achieve visual
quality at least on par with state-of-the-art patch-based approaches. They are
seamlessly tileable by construction and can be fully generated in parallel.
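The spatially seeded PRNG described above is what makes on-demand, parallel generation possible: every 3D coordinate deterministically maps to its own noise values, so any sub-volume can be produced independently. The sketch below illustrates that idea only; the hash constants, the mixing scheme, and the noise dimensionality are assumptions for illustration, not the paper's actual PRNG.

```python
import numpy as np

def seeded_noise(coords, global_seed=0, channels=8):
    """Deterministic multi-channel noise for integer 3D coordinates.

    Hashing each coordinate (instead of drawing sequential random numbers)
    means the same coordinate always yields the same noise, so overlapping
    or tiled sub-volumes agree and can be generated in parallel.
    The hash constants below are illustrative, not from the paper.
    """
    x, y, z = (np.asarray(c, dtype=np.int64) for c in coords)
    h = (x * 73856093) ^ (y * 19349663) ^ (z * 83492791) ^ global_seed
    noise = np.empty(h.shape + (channels,), dtype=np.float64)
    flat_seeds = h.ravel()
    flat_noise = noise.reshape(-1, channels)
    for i, seed in enumerate(flat_seeds):
        # One small RNG stream per voxel, seeded only by its coordinates.
        flat_noise[i] = np.random.default_rng(int(seed) & 0x7FFFFFFF).standard_normal(channels)
    return noise

# Demo: a shifted window reproduces the overlapping values exactly.
xs, ys, zs = np.meshgrid(np.arange(4), np.arange(4), np.arange(4), indexing="ij")
n1 = seeded_noise((xs, ys, zs))
xs2, ys2, zs2 = np.meshgrid(np.arange(2, 6), np.arange(4), np.arange(4), indexing="ij")
n2 = seeded_noise((xs2, ys2, zs2))
assert np.allclose(n1[2:4], n2[0:2])
```

In the paper's pipeline, noise like this would be fed through the light generator network to produce an RGB value per coordinate; only the seeding idea is sketched here.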
Related papers
- Pandora3D: A Comprehensive Framework for High-Quality 3D Shape and Texture Generation [58.77520205498394]
This report presents a comprehensive framework for generating high-quality 3D shapes and textures from diverse input prompts.
The framework consists of 3D shape generation and texture generation.
This report details the system architecture, experimental results, and potential future directions to improve and expand the framework.
arXiv Detail & Related papers (2025-02-20T04:22:30Z)
- TexGaussian: Generating High-quality PBR Material via Octree-based 3D Gaussian Splatting [48.97819552366636]
This paper presents TexGaussian, a novel method that uses octant-aligned 3D Gaussian Splatting for rapid PBR material generation.
Our method synthesizes more visually pleasing PBR materials and runs faster than previous methods in both unconditional and text-conditional scenarios.
arXiv Detail & Related papers (2024-11-29T12:19:39Z)
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D assets using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, applying the denoiser to a set of 2D renders and aggregating the denoising predictions into a shared texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
- CompNVS: Novel View Synthesis with Scene Completion [83.19663671794596]
We propose a generative pipeline that operates on a sparse grid-based neural scene representation to complete unobserved scene parts.
We process encoded image features in 3D space with a geometry completion network and a subsequent texture inpainting network to extrapolate the missing area.
Photorealistic image sequences are finally obtained via consistency-relevant differentiable rendering.
arXiv Detail & Related papers (2022-07-23T09:03:13Z)
- AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
arXiv Detail & Related papers (2022-04-06T21:39:24Z)
- Deep Tiling: Texture Tile Synthesis Using a Deep Learning Approach [0.0]
In many cases a texture image cannot cover a large 3D model surface because of its small resolution.
Deep learning based texture synthesis has proven to be very effective in such cases.
We propose a novel approach to example-based texture synthesis by using a robust deep learning process.
arXiv Detail & Related papers (2021-03-14T18:17:37Z)
- STS-GAN: Can We Synthesize Solid Texture with High Fidelity from Arbitrary 2D Exemplar? [20.58364192180389]
We propose a novel generative adversarial network based framework (STS-GAN) to extend a given 2D exemplar to arbitrary 3D solid textures.
In STS-GAN, multi-scale 2D texture discriminators evaluate the similarity between the given 2D exemplar and slices from the generated 3D texture, encouraging the 3D texture generator to synthesize realistic solid textures.
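Slice-based evaluation of a 3D texture against 2D exemplars, as used by STS-GAN and by the main paper above, can be sketched as follows. This is a minimal illustration of the slicing step only; the discriminator networks are omitted, and the function name is hypothetical.

```python
import numpy as np

def sample_axis_slices(volume, rng):
    """Draw one random axis-aligned 2D slice per axis of a 3D texture.

    In slice-based training schemes, 2D slices like these are what the
    2D texture discriminators (or perceptual losses) compare against the
    2D exemplar; only the slicing step is sketched here.
    """
    d, h, w = volume.shape[:3]
    return [
        volume[rng.integers(d), :, :],  # slice orthogonal to the depth axis
        volume[:, rng.integers(h), :],  # slice orthogonal to the height axis
        volume[:, :, rng.integers(w)],  # slice orthogonal to the width axis
    ]

# Example: three RGB slices from a random 16^3 RGB volume.
rng = np.random.default_rng(0)
vol = rng.random((16, 16, 16, 3))
slices = sample_axis_slices(vol, rng)
assert all(s.shape == (16, 16, 3) for s in slices)
```

Comparing only 2D slices is what keeps GPU memory and computation manageable during training, since the full 3D volume never needs to pass through the loss network.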
arXiv Detail & Related papers (2021-02-08T02:51:34Z)
- Novel-View Human Action Synthesis [39.72702883597454]
We present a novel 3D reasoning approach to synthesize the target viewpoint.
We first estimate the 3D mesh of the target body and transfer the rough textures from the 2D images to the mesh.
We produce a semi-dense textured mesh by propagating the transferred textures both locally, within geodesic neighborhoods, and globally.
arXiv Detail & Related papers (2020-07-06T15:11:51Z)
- Deep Geometric Texture Synthesis [83.9404865744028]
We propose a novel framework for synthesizing geometric textures.
It learns texture statistics from local neighborhoods of a single reference 3D model.
Our network displaces mesh vertices in any direction, enabling synthesis of geometric textures.
arXiv Detail & Related papers (2020-06-30T19:36:38Z)
- GramGAN: Deep 3D Texture Synthesis From 2D Exemplars [7.553635339893189]
We present a novel texture synthesis framework, enabling the generation of infinite, high-quality 3D textures given a 2D exemplar image.
Inspired by recent advances in natural texture synthesis, we train deep neural models to generate textures by non-linearly combining learned noise frequencies.
To achieve a highly realistic output conditioned on an exemplar patch, we propose a novel loss function that combines ideas from both style transfer and generative adversarial networks.
arXiv Detail & Related papers (2020-06-29T15:22:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.