Encoding Binary Concepts in the Latent Space of Generative Models for
Enhancing Data Representation
- URL: http://arxiv.org/abs/2303.12255v1
- Date: Wed, 22 Mar 2023 01:45:35 GMT
- Title: Encoding Binary Concepts in the Latent Space of Generative Models for
Enhancing Data Representation
- Authors: Zizhao Hu, Mohammad Rostami
- Abstract summary: We propose a novel binarized regularization to facilitate learning of binary concepts to improve the quality of data generation in autoencoders.
We demonstrate that this method can boost existing models to learn more transferable representations and generate more representative samples for the input distribution.
- Score: 12.013345715187285
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans empirically use binary concepts to generalize efficiently. Such
concepts follow the Bernoulli distribution, the basic building block of information,
and span both low-level and high-level features such as "large vs. small" and "a
neuron is active or inactive". Because binary concepts are ubiquitous, they can be
used to transfer knowledge and improve model generalization. We propose a novel
binarized regularization that facilitates learning of binary concepts and improves
the quality of data generation in autoencoders. We introduce a binarizing
hyperparameter $r$ into the data generation process to disentangle the latent space
symmetrically. We demonstrate that this method can be applied easily to existing
variational autoencoder (VAE) variants to encourage symmetric disentanglement,
improve reconstruction quality, and prevent posterior collapse without computational
overhead. We also demonstrate that this method boosts existing models to learn more
transferable representations and to generate samples more representative of the
input distribution, which alleviates catastrophic forgetting when used for
generative replay in continual learning settings.
Related papers
- Prioritized Generative Replay [121.83947140497655]
We propose a prioritized, parametric version of an agent's memory, using generative models to capture online experience.
This paradigm enables densification of past experience, with new generations that benefit from the generative model's generalization capacity.
We show this recipe can be instantiated using conditional diffusion models and simple relevance functions.
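A minimal sketch of the general idea under assumed interfaces (the `generator` callable, its `z_dim` attribute, and the `relevance_fn` scoring function are placeholders, not the paper's API):

```python
# Minimal sketch of prioritized generative replay (not the paper's implementation):
# synthetic samples drawn from a generative model are re-weighted by a simple,
# hypothetical relevance function before being replayed to the learner.
import torch

def replay_batch(generator, relevance_fn, batch_size=64, pool_size=512):
    """Draw a pool of generated samples, then sample a replay batch
    in proportion to their relevance scores."""
    with torch.no_grad():
        pool = generator(torch.randn(pool_size, generator.z_dim))  # assumes a z_dim attribute
        scores = relevance_fn(pool).clamp_min(1e-8)  # expects one score per sample; higher = replay more
        idx = torch.multinomial(scores / scores.sum(), batch_size, replacement=True)
    return pool[idx]
```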
arXiv Detail & Related papers (2024-10-23T17:59:52Z)
- Dual Space Training for GANs: A Pathway to Efficient and Creative Generative Models [0.0]
Generative Adversarial Networks (GANs) have demonstrated remarkable advancements in generative modeling.
This paper proposes a novel optimization approach that transforms the training process by operating within a dual space of the initial data.
Training GANs on the encoded representations in this dual space makes the generative process significantly more efficient and may reveal underlying patterns beyond human recognition.
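A rough sketch of such a dual-space setup under assumptions (the network sizes, optimizers, and the `gan_step_on_codes` helper are illustrative; `codes` is assumed to be a batch of outputs from a pre-trained encoder):

```python
# Minimal sketch (assumed setup, not the paper's code): train a GAN on the encoded
# "dual space" by fitting a generator/discriminator pair directly on latent codes
# produced by a pre-trained autoencoder's encoder.
import torch
import torch.nn as nn

z_noise, z_latent = 32, 64  # assumed noise and latent-code dimensions

G = nn.Sequential(nn.Linear(z_noise, 128), nn.ReLU(), nn.Linear(128, z_latent))
D = nn.Sequential(nn.Linear(z_latent, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def gan_step_on_codes(codes):
    """One adversarial update on a batch of encoder outputs (the dual-space data)."""
    noise = torch.randn(codes.size(0), z_noise)
    fake = G(noise)
    # Discriminator: real encoder codes vs. generated codes.
    d_loss = bce(D(codes), torch.ones(codes.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(codes.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator.
    g_loss = bce(D(fake), torch.ones(codes.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```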
arXiv Detail & Related papers (2024-10-22T03:44:13Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
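A simplified sketch of per-dimension latent quantization with a straight-through estimator (an assumed reduction of the idea, not the authors' implementation; `ScalarLatentQuantizer` and its defaults are illustrative):

```python
# Minimal sketch: each latent coordinate is snapped to the nearest entry of a small
# learnable scalar codebook, with a straight-through estimator so gradients still
# flow to the encoder. The codebook sizes below are placeholder choices.
import torch
import torch.nn as nn

class ScalarLatentQuantizer(nn.Module):
    def __init__(self, z_dim=10, codes_per_dim=8):
        super().__init__()
        # One small codebook of scalar values per latent dimension.
        self.codebooks = nn.Parameter(torch.linspace(-1, 1, codes_per_dim).repeat(z_dim, 1))

    def forward(self, z):  # z: (batch, z_dim)
        dist = (z.unsqueeze(-1) - self.codebooks.unsqueeze(0)).abs()   # (batch, z_dim, codes)
        idx = dist.argmin(dim=-1)                                      # nearest code per dimension
        z_q = torch.gather(self.codebooks.expand(z.size(0), -1, -1), -1,
                           idx.unsqueeze(-1)).squeeze(-1)
        return z + (z_q - z).detach()  # straight-through: forward uses z_q, backward uses z
```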
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many of the predictive signals in the data may instead come from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Learning Sparse Latent Representations for Generator Model [7.467412443287767]
We present a new unsupervised learning method to enforce sparsity on the latent space for the generator model.
Our model consists of only one top-down generator network that maps the latent variable to the observed data.
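One generic way to impose such sparsity, sketched under assumptions (an L1 penalty on directly optimized per-sample latents; the paper's actual mechanism may differ, and the network sizes here are placeholders):

```python
# Minimal sketch of encouraging sparse latent codes for a single top-down generator:
# per-sample latents are optimized jointly with the generator under an L1 penalty.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))  # z -> x
z = torch.zeros(128, 64, requires_grad=True)      # per-sample latents, optimized directly
opt = torch.optim.Adam([z] + list(gen.parameters()), lr=1e-3)

def step(x, lam=1e-2):
    recon = (gen(z) - x).pow(2).mean()
    sparsity = z.abs().mean()                      # L1 penalty drives most coordinates toward 0
    loss = recon + lam * sparsity
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```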
arXiv Detail & Related papers (2022-09-20T18:58:24Z)
- The Transitive Information Theory and its Application to Deep Generative Models [0.0]
A Variational Autoencoder (VAE) can be pushed in two opposite directions.
Existing methods narrow the issues to the rate-distortion trade-off between compression and reconstruction.
We develop a system that learns a hierarchy of disentangled representation together with a mechanism for recombining the learned representation for generalization.
arXiv Detail & Related papers (2022-03-09T22:35:02Z)
- IB-DRR: Incremental Learning with Information-Back Discrete Representation Replay [4.8666876477091865]
Incremental learning aims to enable machine learning models to continuously acquire new knowledge given new classes.
Saving a subset of training samples from previously seen classes in memory and replaying them during new training phases has proven to be an efficient and effective way to fulfil this aim.
However, finding a trade-off between the model performance and the number of samples to save for each class is still an open problem for replay-based incremental learning.
arXiv Detail & Related papers (2021-04-21T15:32:11Z)
- Generalizing Variational Autoencoders with Hierarchical Empirical Bayes [6.273154057349038]
We present Hierarchical Empirical Bayes Autoencoder (HEBAE), a computationally stable framework for probabilistic generative models.
Our key contributions are two-fold. First, we make gains by placing a hierarchical prior over the encoding distribution, enabling us to adaptively balance the trade-off between minimizing the reconstruction loss function and avoiding over-regularization.
arXiv Detail & Related papers (2020-07-20T18:18:39Z)
- MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
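A minimal MAML-style sketch of this formulation, under assumptions (the MLP architecture, inner-loop step count, and squared-error loss are illustrative, not the authors' exact training setup; requires a recent PyTorch for `torch.func.functional_call`):

```python
# Minimal sketch: gradient-based meta-learning of a signed distance function MLP.
# The inner loop adapts shared weights to one shape's (point, distance) samples;
# the outer loop updates the shared initialization across shapes.
import torch
import torch.nn as nn

sdf = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 1))
meta_opt = torch.optim.Adam(sdf.parameters(), lr=1e-4)

def inner_adapt(points, dists, steps=3, lr=1e-2):
    """A few gradient steps on one shape, returning adapted (fast) weights."""
    fast = {n: p for n, p in sdf.named_parameters()}
    for _ in range(steps):
        pred = torch.func.functional_call(sdf, fast, (points,))
        loss = (pred.squeeze(-1) - dists).pow(2).mean()
        grads = torch.autograd.grad(loss, list(fast.values()), create_graph=True)
        fast = {n: p - lr * g for (n, p), g in zip(fast.items(), grads)}
    return fast

def meta_step(shape_batch):
    """Outer-loop update: average post-adaptation loss across shapes."""
    meta_loss = 0.0
    for points, dists in shape_batch:          # each shape: (N, 3) points, (N,) signed distances
        fast = inner_adapt(points, dists)
        pred = torch.func.functional_call(sdf, fast, (points,))
        meta_loss = meta_loss + (pred.squeeze(-1) - dists).pow(2).mean()
    meta_opt.zero_grad(); meta_loss.backward(); meta_opt.step()
```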
arXiv Detail & Related papers (2020-06-17T05:14:53Z)
- Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph.
It is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)