BinPlay: A Binary Latent Autoencoder for Generative Replay Continual Learning
- URL: http://arxiv.org/abs/2011.14960v1
- Date: Wed, 25 Nov 2020 08:50:58 GMT
- Title: BinPlay: A Binary Latent Autoencoder for Generative Replay Continual Learning
- Authors: Kamil Deja, Paweł Wawrzyński, Daniel Marczak, Wojciech Masarczyk, Tomasz Trzciński
- Abstract summary: We introduce a binary latent space autoencoder architecture to rehearse training samples for the continual learning of neural networks.
BinPlay is able to compute the binary embeddings of rehearsed samples on the fly without the need to keep them in memory.
- Score: 11.367079056418957
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a binary latent space autoencoder architecture to rehearse
training samples for the continual learning of neural networks. The ability to
extend the knowledge of a model with new data without forgetting previously
learned samples is a fundamental requirement in continual learning. Existing
solutions address it by either replaying past data from memory, which is
unsustainable with growing training data, or by reconstructing past samples
with generative models that are trained to generalize beyond training data and,
hence, miss important details of individual samples. In this paper, we take the
best of both worlds and introduce a novel generative rehearsal approach called
BinPlay. Its main objective is to find a quality-preserving encoding of past
samples into precomputed binary codes living in the autoencoder's binary latent
space. Since we parametrize the formula for precomputing the codes only on the
chronological indices of the training samples, the autoencoder is able to
compute the binary embeddings of rehearsed samples on the fly without the need
to keep them in memory. Evaluation on three benchmark datasets shows up to a
twofold accuracy improvement of BinPlay versus competing generative replay
methods.
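
To illustrate the core idea from the abstract, the sketch below shows how a decoder can rehearse past samples from binary codes that are recomputed from chronological sample indices instead of being stored in memory. The abstract does not give the exact index-to-code formula, so a seeded pseudo-random binary pattern stands in for it here; the code length, network shapes, and helper names (binary_code, Decoder, rehearse) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of index-conditioned binary-code rehearsal.
# Assumptions: 256-bit codes, flattened 28x28 inputs, and a seeded
# pseudo-random index-to-code mapping standing in for BinPlay's formula.
import torch
import torch.nn as nn

CODE_BITS = 256
IMG_DIM = 28 * 28  # assumed flattened MNIST-like inputs


def binary_code(index: int, bits: int = CODE_BITS) -> torch.Tensor:
    """Deterministically map a chronological sample index to a binary latent code.

    Because the mapping depends only on the index, rehearsal codes can be
    regenerated on the fly and never need to be kept in memory.
    """
    gen = torch.Generator().manual_seed(index)  # the index acts as the seed
    return torch.randint(0, 2, (bits,), generator=gen).float()


class Decoder(nn.Module):
    """Decodes a binary latent code back into an (approximate) training sample."""

    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CODE_BITS, 512), nn.ReLU(),
            nn.Linear(512, IMG_DIM), nn.Sigmoid(),
        )

    def forward(self, code: torch.Tensor) -> torch.Tensor:
        return self.net(code)


def rehearse(decoder: Decoder, past_indices: list[int]) -> torch.Tensor:
    """Regenerate past samples from their index-derived binary codes."""
    codes = torch.stack([binary_code(i) for i in past_indices])
    with torch.no_grad():
        return decoder(codes)
```

In a continual-learning loop under these assumptions, the decoder would be fitted so that decoding binary_code(i) reconstructs sample x_i for the current task, while rehearsed decodes of earlier indices are mixed into training to counteract forgetting, mirroring the generative-replay objective described above.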
Related papers
- A Fresh Take on Stale Embeddings: Improving Dense Retriever Training with Corrector Networks [81.2624272756733]
In dense retrieval, deep encoders provide embeddings for both inputs and targets.
We train a small parametric corrector network that adjusts stale cached target embeddings.
Our approach matches state-of-the-art results even when no target embedding updates are made during training.
arXiv Detail & Related papers (2024-09-03T13:29:13Z)
- Dual-Stream Knowledge-Preserving Hashing for Unsupervised Video Retrieval [67.52910255064762]
We design a simple dual-stream structure, including a temporal layer and a hash layer.
With the help of semantic similarity knowledge obtained from self-supervision, the hash layer learns to capture information for semantic retrieval.
In this way, the model naturally preserves the disentangled semantics into binary codes.
arXiv Detail & Related papers (2023-10-12T03:21:12Z)
- Are We Using Autoencoders in a Wrong Way? [3.110260251019273]
Autoencoders are used for dimensionality reduction, anomaly detection and feature extraction.
We revisited the standard training of the undercomplete autoencoder, modifying the shape of the latent space.
We also explored the behaviour of the latent space in the case of reconstruction of a random sample from the whole dataset.
arXiv Detail & Related papers (2023-09-04T11:22:43Z)
- A manifold learning perspective on representation learning: Learning decoder and representations without an encoder [0.0]
Autoencoders are commonly used in representation learning.
Inspired by manifold learning, we show that the decoder can be trained on its own by learning the representations of the training samples.
Our approach of training the decoder alone facilitates representation learning even on small data sets.
arXiv Detail & Related papers (2021-08-31T15:08:50Z)
- IB-DRR: Incremental Learning with Information-Back Discrete Representation Replay [4.8666876477091865]
Incremental learning aims to enable machine learning models to continuously acquire new knowledge given new classes.
Saving a subset of training samples of previously seen classes in the memory and replaying them during new training phases is proven to be an efficient and effective way to fulfil this aim.
However, finding a trade-off between the model performance and the number of samples to save for each class is still an open problem for replay-based incremental learning.
arXiv Detail & Related papers (2021-04-21T15:32:11Z)
- Representation Learning for Sequence Data with Deep Autoencoding Predictive Components [96.42805872177067]
We propose a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space.
We encourage this latent structure by maximizing an estimate of predictive information of latent feature sequences, which is the mutual information between past and future windows at each time step.
We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
arXiv Detail & Related papers (2020-10-07T03:34:01Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph.
It is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)
- Encoding-based Memory Modules for Recurrent Neural Networks [79.42778415729475]
We study the memorization subtask from the point of view of the design and training of recurrent neural networks.
We propose a new model, the Linear Memory Network, which features an encoding-based memorization component built with a linear autoencoder for sequences.
arXiv Detail & Related papers (2020-01-31T11:14:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences of its use.