Texture Generation with Neural Cellular Automata
- URL: http://arxiv.org/abs/2105.07299v1
- Date: Sat, 15 May 2021 22:05:46 GMT
- Title: Texture Generation with Neural Cellular Automata
- Authors: Alexander Mordvintsev, Eyvind Niklasson, Ettore Randazzo
- Abstract summary: We learn a texture generator from a single template image.
We argue that the behaviour exhibited by the NCA model is a learned, distributed, local algorithm for generating a texture.
- Score: 64.70093734012121
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural Cellular Automata (NCA) have shown a remarkable ability to learn the
required rules to "grow" images, classify morphologies, segment images, as well
as to do general computation such as path-finding. We believe the inductive
prior they introduce lends itself to the generation of textures. Textures in
the natural world are often generated by variants of locally interacting
reaction-diffusion systems. Human-made textures are likewise often generated in
a local manner (textile weaving, for instance) or using rules with local
dependencies (regular grids or geometric patterns). We demonstrate learning a
texture generator from a single template image, with the generation method
being embarrassingly parallel, exhibiting quick convergence and high fidelity
of output, and requiring only some minimal assumptions around the underlying
state manifold. Furthermore, we investigate properties of the learned models
that are both useful and interesting, such as non-stationary dynamics and an
inherent robustness to damage. Finally, we make qualitative claims that the
behaviour exhibited by the NCA model is a learned, distributed, local algorithm
to generate a texture, setting our method apart from existing work on texture
generation. We discuss the advantages of such a paradigm.
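To make the mechanism concrete, below is a minimal NumPy sketch of the kind of NCA update rule the abstract describes: each cell carries a small state vector, perceives its neighbourhood through fixed identity/Sobel filters, and applies a tiny per-cell network with stochastic firing. All sizes and names (CHANNELS, perceive, nca_step) and the untrained weights are illustrative assumptions, not the paper's released implementation.

```python
import numpy as np

CHANNELS = 12          # per-cell state size (first 3 channels read out as RGB)
H, W = 64, 64          # grid size; textures tile, so edges wrap

# Fixed perception kernels: identity plus Sobel x/y gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32) / 8.0
SOBEL_Y = SOBEL_X.T
IDENT = np.zeros((3, 3), dtype=np.float32)
IDENT[1, 1] = 1.0

def conv2d_wrap(state, kernel):
    """Depthwise 3x3 convolution with wrap-around (toroidal) padding."""
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)), mode="wrap")
    out = np.zeros_like(state)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + H, dx:dx + W, :]
    return out

def perceive(state):
    """Each cell sees its own state plus local gradients: (H, W, 3*CHANNELS)."""
    return np.concatenate([conv2d_wrap(state, k) for k in (IDENT, SOBEL_X, SOBEL_Y)], axis=-1)

def nca_step(state, w1, b1, w2, rng, fire_rate=0.5):
    """One asynchronous NCA update: tiny per-cell MLP -> residual state change."""
    p = perceive(state)                              # local perception only
    hidden = np.maximum(p @ w1 + b1, 0.0)            # per-cell ReLU layer
    delta = hidden @ w2                              # residual update
    mask = rng.random((H, W, 1)) < fire_rate         # stochastic cell firing
    return state + delta * mask

# Untrained demo rollout; in the paper the per-cell weights are trained so that
# rollouts match a single template texture under an appearance loss.
rng = np.random.default_rng(0)
w1 = rng.normal(0, 0.1, (3 * CHANNELS, 96)).astype(np.float32)
b1 = np.zeros(96, dtype=np.float32)
w2 = np.zeros((96, CHANNELS), dtype=np.float32)      # zero init -> identity dynamics at start
state = rng.normal(0, 0.1, (H, W, CHANNELS)).astype(np.float32)
for _ in range(32):
    state = nca_step(state, w1, b1, w2, rng)
rgb = np.clip(state[..., :3] + 0.5, 0.0, 1.0)        # read out first 3 channels as RGB
```

Because every cell runs the same local rule, the rollout is embarrassingly parallel and tolerant of damage: zeroing a patch of `state` simply regrows under further steps.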
Related papers
- DreamPolish: Domain Score Distillation With Progressive Geometry Generation [66.94803919328815]
We introduce DreamPolish, a text-to-3D generation model that excels in producing refined geometry and high-quality textures.
In the geometry construction phase, our approach leverages multiple neural representations to enhance the stability of the synthesis process.
In the texture generation phase, we introduce a novel score distillation objective, namely domain score distillation (DSD), to guide neural representations toward such a domain.
arXiv Detail & Related papers (2024-11-03T15:15:01Z)
- Multi-Texture Synthesis through Signal Responsive Neural Cellular Automata [44.99833362998488]
We train a single NCA for the evolution of multiple textures, based on individual examples.
Our solution provides texture information in the state of each cell, in the form of an internally coded genomic signal, which enables the NCA to generate the expected texture.
arXiv Detail & Related papers (2024-07-08T14:36:20Z)
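As a rough illustration of the entry above, the sketch below shows one way a per-cell "genomic" signal could be carried in the NCA state: a frozen one-hot block appended to the dynamic channels, which the shared update rule reads but never writes. The names (seed_state, N_TEXTURES) and the stand-in update rule are assumptions, not the paper's implementation.

```python
import numpy as np

N_TEXTURES, CHANNELS, H, W = 4, 12, 64, 64

def seed_state(texture_id, rng):
    """Cell state = dynamic channels + a frozen one-hot 'genome' block.
    The genome tells every cell which of the trained textures to synthesize."""
    dynamic = rng.normal(0.0, 0.1, (H, W, CHANNELS)).astype(np.float32)
    genome = np.zeros((H, W, N_TEXTURES), dtype=np.float32)
    genome[..., texture_id] = 1.0                    # same signal broadcast to all cells
    return np.concatenate([dynamic, genome], axis=-1)

def nca_step(state, update_fn, rng, fire_rate=0.5):
    """The shared update rule may modify dynamic channels but leaves the genome fixed."""
    delta = update_fn(state)                         # trained per-cell rule (not shown)
    delta[..., CHANNELS:] = 0.0                      # genome channels are read-only
    mask = rng.random((H, W, 1)) < fire_rate
    return state + delta * mask

# Usage: one trained update rule, different textures selected per genome.
rng = np.random.default_rng(1)
dummy_update = lambda s: np.tanh(s) * 0.01           # stand-in for the trained rule
state = seed_state(texture_id=2, rng=rng)
for _ in range(16):
    state = nca_step(state, dummy_update, rng)
```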
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Localized Text-to-Image Generation for Free via Cross Attention Control [154.06530917754515]
We show that localized generation can be achieved by simply controlling cross attention maps during inference.
Our proposed cross attention control (CAC) provides new open-vocabulary localization abilities to standard text-to-image models.
arXiv Detail & Related papers (2023-06-26T12:15:06Z)
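A minimal sketch of what "controlling cross attention maps during inference" could look like: text-token-to-pixel attention logits are masked so each token only influences a user-specified region. This is a toy NumPy version under assumed shapes; masked_cross_attention and the masking scheme are illustrative, not the paper's actual API.

```python
import numpy as np

def masked_cross_attention(q, k, v, token_region_mask):
    """Cross-attention in which each text token may only attend to (i.e. influence)
    pixels inside its user-specified region.

    q: (P, d) pixel queries; k, v: (T, d) text-token keys/values;
    token_region_mask: (P, T) boolean, True where token t is allowed at pixel p.
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)                        # (P, T) raw attention scores
    logits = np.where(token_region_mask, logits, -1e9)   # suppress disallowed pairs
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over tokens
    return weights @ v                                   # (P, d) localized output

# Toy usage: token 0 is global, token 1 is restricted to the first half of pixels.
P, T, d = 16, 2, 8
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=s).astype(np.float32) for s in ((P, d), (T, d), (T, d)))
mask = np.ones((P, T), dtype=bool)
mask[P // 2:, 1] = False                                 # localize token 1
out = masked_cross_attention(q, k, v, mask)
```

Because only the attention maps are edited, this works at inference time with a frozen, off-the-shelf text-to-image model.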
- MOGAN: Morphologic-structure-aware Generative Learning from a Single Image [59.59698650663925]
Recently proposed generative models are trained on only a single image.
We introduce a MOrphologic-structure-aware Generative Adversarial Network named MOGAN that produces random samples with diverse appearances.
Our approach focuses on internal features including the maintenance of rational structures and variation on appearance.
arXiv Detail & Related papers (2021-03-04T12:45:23Z)
- Counterfactual Generative Networks [59.080843365828756]
We propose to decompose the image generation process into independent causal mechanisms that we train without direct supervision.
By exploiting appropriate inductive biases, these mechanisms disentangle object shape, object texture, and background.
We show that the counterfactual images can improve out-of-distribution robustness with only a marginal drop in performance on the original classification task.
arXiv Detail & Related papers (2021-01-15T10:23:12Z)
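The composition step behind such disentangled mechanisms can be sketched analytically: a shape mask decides where the object is, a texture fills it, and a background fills the rest, so swapping any one factor yields a counterfactual image. The sketch below uses stand-in arrays for the three generators' outputs; compose and all shapes are illustrative assumptions.

```python
import numpy as np

def compose(shape_mask, foreground_texture, background):
    """Analytic composition of independent mechanisms into one image:
    shape decides *where* the object is, texture decides *how it looks there*,
    and the background fills the rest. Swapping any one input produces a
    counterfactual image (e.g. same shape, different texture)."""
    return shape_mask * foreground_texture + (1.0 - shape_mask) * background

H, W = 64, 64
rng = np.random.default_rng(0)
# Stand-ins for the three mechanisms' outputs (each would be a trained network).
mask = (np.linalg.norm(np.indices((H, W)) - H // 2, axis=0) < 20).astype(np.float32)[..., None]
texture_a = rng.uniform(0.0, 1.0, (H, W, 3)).astype(np.float32)
texture_b = rng.uniform(0.0, 1.0, (H, W, 3)).astype(np.float32)
background = np.full((H, W, 3), 0.2, dtype=np.float32)

factual = compose(mask, texture_a, background)
counterfactual = compose(mask, texture_b, background)    # same shape, swapped texture
```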
- A cellular automata approach to local patterns for texture recognition [3.42658286826597]
We propose a texture descriptor that combines the power of cellular automata to represent complex objects with the known effectiveness of local descriptors in texture analysis.
Our proposal outperforms other classical and state-of-the-art approaches, especially on the real-world recognition task.
arXiv Detail & Related papers (2020-07-15T03:25:51Z)
- Reorganizing local image features with chaotic maps: an application to texture recognition [0.0]
We propose a chaos-based local descriptor for texture recognition.
We map the image into three-dimensional Euclidean space, iterate a chaotic map over this structure, and convert the result back to an image.
The performance of our method was verified on the classification of benchmark databases and in the identification of Brazilian plant species based on the texture of the leaf surface.
arXiv Detail & Related papers (2020-07-15T03:15:01Z)
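A hedged sketch of the pipeline the entry describes, assuming the logistic map as the chaotic map (the paper's exact map and back-conversion may differ): lift pixels to 3-D points, iterate the map, project back to an intensity image, and summarize with a histogram descriptor. All function names are illustrative.

```python
import numpy as np

def logistic(x, r=3.9):
    """Logistic map, a standard chaotic map for r near 4 (an assumed choice here)."""
    return r * x * (1.0 - x)

def chaotic_descriptor(image, n_iter=4, n_bins=32):
    """Sketch: lift the image to 3-D points (row, col, intensity) in [0, 1]^3,
    iterate a chaotic map over the coordinates, project back to a 2-D intensity
    map of reorganized values, and summarize it with a histogram descriptor."""
    h, w = image.shape
    rows, cols = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
    pts = np.stack([rows, cols, image.astype(np.float32)], axis=-1)   # (h, w, 3)
    for _ in range(n_iter):
        pts = logistic(np.clip(pts, 1e-6, 1 - 1e-6))  # keep the orbit inside (0, 1)
    reorganized = pts[..., 2]                          # back to a 2-D intensity map
    hist, _ = np.histogram(reorganized, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()                           # normalized descriptor vector

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, (64, 64))                  # stand-in grayscale texture
feat = chaotic_descriptor(img)
```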
- Co-occurrence Based Texture Synthesis [25.4878061402506]
We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, to generate arbitrarily large images.
We show that our solution offers a stable, intuitive and interpretable latent representation for texture synthesis.
arXiv Detail & Related papers (2020-05-17T08:01:44Z)
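To ground the conditioning signal mentioned above, here is a minimal sketch of a local gray-level co-occurrence matrix (GLCM), the kind of statistic such a generator can be conditioned on; the quantization levels, offset, and function names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def cooccurrence_matrix(patch, levels=8, offset=(0, 1)):
    """Gray-level co-occurrence matrix (GLCM) of a quantized patch: entry (i, j)
    counts how often level i co-occurs with level j at the given pixel offset.
    Local statistics like this serve as the conditioning signal for synthesis."""
    q = np.clip((patch * levels).astype(int), 0, levels - 1)   # quantize intensities
    dy, dx = offset
    a = q[max(0, -dy):q.shape[0] - max(0, dy), max(0, -dx):q.shape[1] - max(0, dx)]
    b = q[max(0, dy):, max(0, dx):][:a.shape[0], :a.shape[1]]  # shifted neighbors
    glcm = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(glcm, (a.ravel(), b.ravel()), 1.0)               # accumulate pair counts
    return glcm / glcm.sum()                                   # normalized joint histogram

rng = np.random.default_rng(0)
patch = rng.uniform(0.0, 1.0, (32, 32))              # stand-in grayscale texture patch
stats = cooccurrence_matrix(patch)                   # (8, 8) conditioning statistics
```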
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.