Self-Similarity Priors: Neural Collages as Differentiable Fractal
Representations
- URL: http://arxiv.org/abs/2204.07673v1
- Date: Fri, 15 Apr 2022 22:54:23 GMT
- Title: Self-Similarity Priors: Neural Collages as Differentiable Fractal
Representations
- Authors: Michael Poli, Winnie Xu, Stefano Massaroli, Chenlin Meng, Kuno Kim,
Stefano Ermon
- Abstract summary: We investigate the role of learning in the automated discovery of self-similarity and in its utilization for downstream tasks.
We design a novel class of implicit operators, Neural Collages, which represent data as the parameters of a self-referential, structured transformation.
We investigate how to leverage the representations produced by Neural Collages in various tasks, including data compression and generation.
- Score: 73.14227103400964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many patterns in nature exhibit self-similarity: they can be compactly
described via self-referential transformations. Said patterns commonly appear
in natural and artificial objects, such as molecules, shorelines, galaxies and
even images. In this work, we investigate the role of learning in the automated
discovery of self-similarity and in its utilization for downstream tasks. To
this end, we design a novel class of implicit operators, Neural Collages, which
(1) represent data as the parameters of a self-referential, structured
transformation, and (2) employ hypernetworks to amortize the cost of finding
these parameters to a single forward pass. We investigate how to leverage the
representations produced by Neural Collages in various tasks, including data
compression and generation. Neural Collages image compressors are orders of
magnitude faster than other self-similarity-based algorithms during encoding
and offer compression rates competitive with implicit methods. Finally, we
showcase applications of Neural Collages for fractal art and as deep generative
models.
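As a rough illustration of the self-referential representation the abstract describes, the sketch below decodes an image by iterating a contractive, patch-wise affine map in the spirit of classical collage/fractal compression, which Neural Collages build on. This is not the authors' implementation: the block sizes, the parameter names (scales, offsets, domain_idx), the plain NumPy setting, and the fixed iteration count are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): decode an image from
# "collage" parameters. Each 4x4 range block of the output is an affine
# function (scale * block + offset) of a downsampled 8x8 "domain" block taken
# from the current image estimate; iterating this self-referential map
# converges to a fixed point when |scale| < 1 (contraction).
import numpy as np

def downsample2x(block):
    """Average-pool a 2D block by a factor of 2."""
    h, w = block.shape
    return block.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def collage_decode(scales, offsets, domain_idx, img_size=32, range_size=4, n_iter=20):
    """Iterate the collage map to its (approximate) fixed point.

    scales, offsets : (n_range,) affine parameters, one pair per range block
    domain_idx      : (n_range, 2) top-left corners of the 8x8 domain blocks
    """
    domain_size = 2 * range_size
    n_side = img_size // range_size
    img = np.zeros((img_size, img_size))          # arbitrary starting image
    for _ in range(n_iter):
        new_img = np.empty_like(img)
        for k in range(n_side * n_side):
            r, c = divmod(k, n_side)
            dr, dc = domain_idx[k]
            domain = img[dr:dr + domain_size, dc:dc + domain_size]
            patch = scales[k] * downsample2x(domain) + offsets[k]
            new_img[r * range_size:(r + 1) * range_size,
                    c * range_size:(c + 1) * range_size] = patch
        img = new_img
    return img

# Example: random contractive collage parameters decode to a fixed image.
rng = np.random.default_rng(0)
n_range = (32 // 4) ** 2
scales = rng.uniform(-0.8, 0.8, size=n_range)     # |scale| < 1 => contraction
offsets = rng.uniform(0.0, 1.0, size=n_range)
domain_idx = rng.integers(0, 32 - 8 + 1, size=(n_range, 2))
decoded = collage_decode(scales, offsets, domain_idx)
print(decoded.shape)  # (32, 32)
```

Per the abstract, Neural Collages would instead predict such parameters with a hypernetwork in a single forward pass and make the decoding differentiable, so the whole pipeline can be trained end to end and reused for compression or generation.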
Related papers
- Neural inverse procedural modeling of knitting yarns from images [6.114281140793954]
We show that the complexity of yarn structures can be better captured by ensembles of networks that focus on individual characteristics.
We demonstrate that combining a carefully designed parametric, procedural yarn model with such network ensembles and suitable loss functions allows robust parameter inference.
arXiv Detail & Related papers (2023-03-01T00:56:39Z)
- DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion [66.21290235237808]
We introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states.
We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs.
Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks.
arXiv Detail & Related papers (2023-01-23T15:18:54Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that combines the detailed spatial information captured by a CNN with the global context provided by a transformer for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- Counterfactual Generative Networks [59.080843365828756]
We propose to decompose the image generation process into independent causal mechanisms that we train without direct supervision.
By exploiting appropriate inductive biases, these mechanisms disentangle object shape, object texture, and background.
We show that the counterfactual images can improve out-of-distribution robustness with only a marginal drop in performance on the original classification task.
arXiv Detail & Related papers (2021-01-15T10:23:12Z)
- Correlator Convolutional Neural Networks: An Interpretable Architecture for Image-like Quantum Matter Data [15.283214387433082]
We develop a network architecture that discovers features in the data which are directly interpretable in terms of physical observables.
Our approach lends itself well to the construction of simple, end-to-end interpretable architectures.
arXiv Detail & Related papers (2020-11-06T17:04:10Z)
- Autoencoder Image Interpolation by Shaping the Latent Space [12.482988592988868]
Autoencoders represent an effective approach for computing the underlying factors characterizing datasets of different types.
We propose a regularization technique that shapes the latent representation to follow a manifold consistent with the training images.
arXiv Detail & Related papers (2020-08-04T12:32:54Z)
- Lossless Compression of Structured Convolutional Models via Lifting [14.63152363481139]
We introduce a simple and efficient technique to detect symmetries and compress neural models without any loss of information.
We demonstrate through experiments that such compression can lead to significant speedups of structured convolutional models.
arXiv Detail & Related papers (2020-07-13T08:02:27Z)
- Network Bending: Expressive Manipulation of Deep Generative Models [0.2062593640149624]
We introduce a new framework for manipulating and interacting with deep generative models that we call network bending.
We show how it allows for the direct manipulation of semantically meaningful aspects of the generative process and enables a broad range of expressive outcomes.
arXiv Detail & Related papers (2020-05-25T21:48:45Z)