Gradient Origin Networks
- URL: http://arxiv.org/abs/2007.02798v5
- Date: Wed, 24 Mar 2021 14:15:29 GMT
- Title: Gradient Origin Networks
- Authors: Sam Bond-Taylor and Chris G. Willcocks
- Abstract summary: This paper proposes a new type of generative model that is able to quickly learn a latent representation without an encoder.
Experiments show that the proposed method converges faster, with significantly lower reconstruction error than autoencoders, while requiring half the parameters.
- Score: 8.952627620898074
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a new type of generative model that is able to quickly
learn a latent representation without an encoder. This is achieved using
empirical Bayes to calculate the expectation of the posterior, which is
implemented by initialising a latent vector with zeros, then using the gradient
of the log-likelihood of the data with respect to this zero vector as new
latent points. The approach has similar characteristics to autoencoders, but
with a simpler architecture, and is demonstrated in a variational autoencoder
equivalent that permits sampling. This also allows implicit representation
networks to learn a space of implicit functions without requiring a
hypernetwork, retaining their representation advantages across datasets. The
experiments show that the proposed method converges faster, with significantly
lower reconstruction error than autoencoders, while requiring half the
parameters.
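The mechanism described above can be made concrete in a few lines. The following is a minimal PyTorch sketch of a single GON training step, assuming a small MLP decoder, a squared-error reconstruction term standing in for the log-likelihood, and arbitrary latent and batch sizes; it illustrates the idea rather than reproducing the authors' implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of one Gradient Origin Network (GON) training step.
# Assumptions (not from the paper's released code): the decoder is a small
# MLP, the log-likelihood is replaced by a Gaussian (squared-error) term,
# and the latent/batch dimensions are arbitrary illustrative choices.

latent_dim, data_dim = 32, 784
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ELU(),
                        nn.Linear(256, data_dim))

def gon_step(x):
    # 1) Initialise the latent vector at the origin (zeros), tracking gradients.
    z0 = torch.zeros(x.size(0), latent_dim, requires_grad=True)
    # 2) Reconstruction loss of the decoded origin against the data.
    inner_loss = ((decoder(z0) - x) ** 2).sum()
    # 3) The negated gradient of that loss w.r.t. the zero vector becomes
    #    the latent point; create_graph keeps it differentiable.
    z = -torch.autograd.grad(inner_loss, z0, create_graph=True)[0]
    # 4) Decode the derived latent; this outer loss trains the decoder.
    return ((decoder(z) - x) ** 2).sum()

x = torch.rand(16, data_dim)  # hypothetical data batch
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
opt.zero_grad()
gon_step(x).backward()
opt.step()
```

Because only the decoder has trainable parameters and the latent is recomputed from zeros for every batch, there is no encoder to store, consistent with the abstract's claim of roughly half the parameters of a comparable autoencoder.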
Related papers
- A Self-Encoder for Learning Nearest Neighbors [5.297261090056809]
The self-encoder learns to distribute the data samples in the embedding space so that they are linearly separable from one another.
Unlike regular nearest neighbors, the predictions resulting from this encoding of data are invariant to any scaling of features.
arXiv Detail & Related papers (2023-06-25T14:30:31Z)
- Autoencoding for the 'Good Dictionary' of eigen pairs of the Koopman Operator [0.0]
This paper proposes using deep autoencoders, a type of deep learning technique, to perform non-linear geometric transformations on raw data before computing Koopman eigenvectors.
To handle high-dimensional time series data, Takens' time-delay embedding is presented as a pre-processing technique (a minimal sketch of this embedding follows the list below).
arXiv Detail & Related papers (2023-06-08T14:21:01Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder with hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
Experimental results on texture and histopathologic image datasets show that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence than equivalent CNNs.
arXiv Detail & Related papers (2022-06-17T04:07:45Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Out-of-distribution Detection and Generation using Soft Brownian Offset Sampling and Autoencoders [1.313418334200599]
Deep neural networks often suffer from overconfidence, which can be partly remedied by improved out-of-distribution detection.
We propose a novel approach that allows for the generation of out-of-distribution datasets based on a given in-distribution dataset.
This new dataset can then be used to improve out-of-distribution detection for the given dataset and machine learning task at hand.
arXiv Detail & Related papers (2021-05-04T06:59:24Z)
- Neural BRDF Representation and Importance Sampling [79.84316447473873]
We present a compact neural network-based representation of reflectance BRDF data.
We encode BRDFs as lightweight networks, and propose a training scheme with adaptive angular sampling.
We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real-world datasets.
arXiv Detail & Related papers (2021-02-11T12:00:24Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study how this behaviour, whereby a VAE need not encode its own reconstructions back to the same latent representation, affects the learned representations, and the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations of the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
arXiv Detail & Related papers (2020-06-17T05:14:53Z)
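To make the gradient-based meta-learning recipe in the MetaSDF entry above concrete, here is a minimal MAML-style sketch in PyTorch: a shared coordinate network is specialised to one shape by a few inner gradient steps, and the post-adaptation loss drives the meta-update. The network size, step counts, learning rates, and random point/distance tensors are all illustrative assumptions, not MetaSDF's actual configuration.

```python
import torch
import torch.nn as nn

# MAML-style inner loop for meta-learning a signed distance function,
# in the spirit of MetaSDF. All names and hyperparameters are assumptions.

sdf_net = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 1))
meta_opt = torch.optim.Adam(sdf_net.parameters(), lr=1e-4)
inner_lr, inner_steps = 1e-2, 5

def inner_adapt(points, sdf_targets):
    # Start from the shared meta-parameters and take a few gradient steps
    # specialised to one shape, keeping the graph for the outer update.
    params = dict(sdf_net.named_parameters())
    for _ in range(inner_steps):
        pred = torch.func.functional_call(sdf_net, params, (points,))
        loss = ((pred - sdf_targets) ** 2).mean()
        grads = torch.autograd.grad(loss, list(params.values()),
                                    create_graph=True)
        params = {name: p - inner_lr * g
                  for (name, p), g in zip(params.items(), grads)}
    return params

# One meta-update on a single (fake) shape; real training batches shapes.
points = torch.rand(256, 3)        # hypothetical 3D sample points
sdf_targets = torch.rand(256, 1)   # hypothetical signed distances
adapted = inner_adapt(points, sdf_targets)
outer_pred = torch.func.functional_call(sdf_net, adapted, (points,))
outer_loss = ((outer_pred - sdf_targets) ** 2).mean()
meta_opt.zero_grad()
outer_loss.backward()
meta_opt.step()
```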
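Separately, the Koopman entry earlier in this list names Takens' time-delay embedding as its pre-processing step; the sketch below shows what that embedding computes for a scalar series. The delay `tau` and embedding dimension `d` are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

# Takens' time-delay embedding: lift a scalar series x(t) to vectors
# [x(t), x(t + tau), ..., x(t + (d - 1) * tau)].

def delay_embed(x, d=3, tau=2):
    n = len(x) - (d - 1) * tau          # number of complete delay vectors
    return np.stack([x[i * tau : i * tau + n] for i in range(d)], axis=1)

x = np.sin(0.1 * np.arange(200))        # hypothetical scalar time series
X = delay_embed(x)                      # shape (196, 3): one state per row
```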
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.