NITES: A Non-Parametric Interpretable Texture Synthesis Method
- URL: http://arxiv.org/abs/2009.01376v1
- Date: Wed, 2 Sep 2020 22:52:44 GMT
- Title: NITES: A Non-Parametric Interpretable Texture Synthesis Method
- Authors: Xuejing Lei, Ganning Zhao, C.-C. Jay Kuo
- Abstract summary: A non-parametric interpretable texture synthesis method, called the NITES method, is proposed in this work.
NITES is mathematically transparent and efficient in training and inference.
- Score: 41.13585191073405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A non-parametric interpretable texture synthesis method, called the NITES
method, is proposed in this work. Although deep neural networks can nowadays
synthesize visually pleasing textures automatically, the associated generative
models are mathematically intractable and their training demands a high
computational cost. NITES offers a new texture synthesis
solution to address these shortcomings. NITES is mathematically transparent and
efficient in training and inference. The input is a single exemplary texture
image. The NITES method crops out patches from the input and analyzes the
statistical properties of these texture patches to obtain their joint
spatial-spectral representations. Then, the probabilistic distributions of
samples in the joint spatial-spectral spaces are characterized. Finally,
numerous texture images that are visually similar to the exemplary texture
image can be generated automatically. Experimental results demonstrate the
superior quality of the generated texture images and the efficiency of the
proposed NITES method in terms of both training and inference time.
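The abstract outlines a concrete three-step pipeline: crop patches from the exemplar, model their joint spatial-spectral statistics, and sample new patches from the fitted distribution. Below is a minimal sketch of that flow, assuming a PCA transform as a stand-in for the spectral representation and a Gaussian mixture as the sample model; the names and parameters are illustrative, the authors' actual joint spatial-spectral representation and generation procedure are more elaborate, and stitching sampled patches into a full texture image is omitted.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def crop_patches(texture, patch=32, stride=8):
    """Crop overlapping patches from a single exemplar texture of shape (H, W, C)."""
    H, W, _ = texture.shape
    rows = []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            rows.append(texture[y:y + patch, x:x + patch].ravel())
    return np.stack(rows)

def fit_patch_model(patches, n_components=64, n_modes=8):
    """Project patches into a PCA ("spectral") space and fit a mixture model there."""
    pca = PCA(n_components=n_components).fit(patches)
    gmm = GaussianMixture(n_components=n_modes).fit(pca.transform(patches))
    return pca, gmm

def sample_patches(pca, gmm, n=16):
    """Draw new patch candidates by sampling coefficients and inverting the PCA."""
    coeffs, _ = gmm.sample(n)
    return pca.inverse_transform(coeffs)

Fitting happens once per exemplar (pca, gmm = fit_patch_model(crop_patches(texture))), after which arbitrarily many patch sets can be drawn cheaply, which matches the paper's training and inference efficiency claim.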
Related papers
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
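The score aggregation strategy is not spelled out in this summary; one common way to let a fixed-size diffusion model produce arbitrarily large outputs is MultiDiffusion-style averaging of per-tile noise predictions at every denoising step. The sketch below illustrates that scheme under stated assumptions (a hypothetical eps_model denoiser, freely chosen tile and stride sizes, inputs at least one tile large); the paper's exact strategy may differ.

import torch

def tile_positions(size, tile, stride):
    # Tile start offsets that also cover the right/bottom border.
    pos = list(range(0, size - tile + 1, stride))
    if pos[-1] != size - tile:
        pos.append(size - tile)
    return pos

def aggregated_eps(eps_model, x_t, t, tile=64, stride=32):
    # Average overlapping per-tile noise predictions so a fixed-size
    # denoiser can drive sampling of a large latent x_t (B, C, H, W).
    _, _, H, W = x_t.shape
    eps_sum = torch.zeros_like(x_t)
    hits = torch.zeros_like(x_t)
    for y in tile_positions(H, tile, stride):
        for x in tile_positions(W, tile, stride):
            crop = x_t[:, :, y:y + tile, x:x + tile]
            eps_sum[:, :, y:y + tile, x:x + tile] += eps_model(crop, t)
            hits[:, :, y:y + tile, x:x + tile] += 1.0
    return eps_sum / hits

Because every pixel's prediction is an average over the tiles covering it, seams between tiles are suppressed while memory stays bounded by the tile size rather than the output size.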
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- Diffusion-based Holistic Texture Rectification and Synthesis [26.144666226217062]
Traditional texture synthesis approaches focus on generating textures from pristine samples.
We propose a framework that synthesizes holistic textures from degraded samples in natural images.
arXiv Detail & Related papers (2023-09-26T08:44:46Z)
- Neural Textured Deformable Meshes for Robust Analysis-by-Synthesis [17.920305227880245]
Our paper formulates three vision tasks in a consistent manner using approximate analysis-by-synthesis.
We show that our analysis-by-synthesis is much more robust than conventional neural networks when evaluated on real-world images.
arXiv Detail & Related papers (2023-05-31T18:45:02Z)
- Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strengths of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
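For context, the heuristic being replaced is vanilla NeRF's hand-designed coarse-to-fine step: fine-network samples are drawn along each ray by inverse-transform sampling of the coarse network's weights. A NumPy sketch of that baseline follows; the paper's contribution is to substitute a learned, differentiable proposal module, which is not reproduced here.

import numpy as np

def sample_fine(bins, weights, n_samples, rng=np.random.default_rng(0)):
    # Vanilla NeRF heuristic: inverse-CDF sampling of fine-network depths.
    # bins: (N + 1,) depth bin edges; weights: (N,) coarse sample weights.
    pdf = weights / (weights.sum() + 1e-8)
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])
    u = rng.uniform(size=n_samples)
    idx = np.clip(np.searchsorted(cdf, u, side="right") - 1, 0, len(weights) - 1)
    width = np.where(cdf[idx + 1] - cdf[idx] > 1e-8, cdf[idx + 1] - cdf[idx], 1.0)
    frac = (u - cdf[idx]) / width            # position inside the chosen bin
    return bins[idx] + frac * (bins[idx + 1] - bins[idx])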
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
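As a generic illustration of the joint rendering/decomposition idea (a shared encoder with separate reflectance and shading decoders whose product reconstructs the image), consider the sketch below. The layer choices and the module name are assumptions, not the authors' architecture, and the paper's synthetic-to-real training setup is omitted.

import torch
import torch.nn as nn

class IntrinsicAutoencoder(nn.Module):
    # One encoder, two decoders; reconstruction = albedo * shading.
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU())
        self.dec_albedo = self._decoder(ch, out_ch=3)
        self.dec_shading = self._decoder(ch, out_ch=1)

    @staticmethod
    def _decoder(ch, out_ch):
        return nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, out_ch, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, img):
        z = self.enc(img)
        albedo, shading = self.dec_albedo(z), self.dec_shading(z)
        return albedo * shading, albedo, shading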
- A Generative Model for Texture Synthesis based on Optimal Transport between Feature Distributions [8.102785819558978]
We show how to use our framework to learn a feed-forward neural network that can rapidly synthesize new textures of arbitrary size on the fly.
arXiv Detail & Related papers (2020-06-19T13:32:55Z)
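The summary does not specify how the optimal-transport loss between feature distributions is computed; a common, tractable stand-in is the sliced Wasserstein distance between the feature point clouds of the exemplar and the synthesized texture. The sketch below shows that estimator under the simplifying assumption that both feature sets have the same number of points; it is an illustration, not necessarily the authors' exact formulation.

import torch

def sliced_wasserstein(feat_a, feat_b, n_proj=64):
    # Monte-Carlo sliced 2-Wasserstein distance between two feature
    # point clouds of shape (N, D); assumes equal N for simplicity.
    d = feat_a.shape[1]
    proj = torch.randn(d, n_proj, device=feat_a.device)
    proj = proj / proj.norm(dim=0, keepdim=True)       # random unit directions
    a = (feat_a @ proj).sort(dim=0).values             # sorted 1-D projections
    b = (feat_b @ proj).sort(dim=0).values
    return ((a - b) ** 2).mean()

Used as a training loss, this pushes synthesized features toward the exemplar's feature distribution, which is what enables a single feed-forward pass at inference time.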
- Texture Interpolation for Probing Visual Perception [4.637185817866918]
We show that distributions of deep convolutional neural network (CNN) activations of a texture are well described by elliptical distributions.
We then propose the natural geodesics arising with the optimal transport metric to interpolate between arbitrary textures.
Compared to other CNN-based approaches, our method appears to match more closely the geometry of texture perception.
arXiv Detail & Related papers (2020-06-05T21:28:36Z)
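When the elliptical model is specialized to Gaussians, the optimal-transport geodesic mentioned above has a closed form: interpolate the mean linearly and transport the covariance through the OT map. A NumPy sketch of this special case follows; the paper handles general elliptical distributions.

import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian_geodesic(mu0, S0, mu1, S1, t):
    # Point at time t on the Wasserstein-2 geodesic between
    # N(mu0, S0) and N(mu1, S1); closed form for Gaussians.
    r0 = np.real(sqrtm(S0))
    r0_inv = np.linalg.inv(r0)
    T = r0_inv @ np.real(sqrtm(r0 @ S1 @ r0)) @ r0_inv  # OT map x -> T x
    A_t = (1 - t) * np.eye(len(mu0)) + t * T
    return (1 - t) * mu0 + t * mu1, A_t @ S0 @ A_t.T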
- Co-occurrence Based Texture Synthesis [25.4878061402506]
We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, to generate arbitrarily large images.
We show that our solution offers a stable, intuitive and interpretable latent representation for texture synthesis.
arXiv Detail & Related papers (2020-05-17T08:01:44Z)
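Classical gray-level co-occurrence matrices (GLCMs) are one standard way to compute such local co-occurrence statistics; a sketch using scikit-image is below. The window size, quantization, and summary properties are illustrative choices, and the paper's actual conditioning pipeline is more involved.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def local_cooccurrence(gray, y, x, win=32, levels=16):
    # Co-occurrence statistics of a win x win window of a uint8
    # grayscale texture, quantized to `levels` gray levels.
    patch = gray[y:y + win, x:x + win]
    q = (patch.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).ravel() for p in ("contrast", "homogeneity")}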
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.