CULT: Continual Unsupervised Learning with Typicality-Based Environment Detection
- URL: http://arxiv.org/abs/2207.08309v1
- Date: Sun, 17 Jul 2022 22:08:10 GMT
- Title: CULT: Continual Unsupervised Learning with Typicality-Based Environment Detection
- Authors: Oliver Daniels-Koch
- Abstract summary: CULT (Continual Unsupervised Representation Learning with Typicality-Based Environment Detection) is a new algorithm for continual unsupervised learning with variational auto-encoders.
CULT significantly outperforms baseline continual unsupervised learning approaches.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce CULT (Continual Unsupervised Representation Learning with
Typicality-Based Environment Detection), a new algorithm for continual
unsupervised learning with variational auto-encoders. CULT uses a simple
typicality metric in the latent space of a VAE to detect distributional shifts
in the environment, which is used in conjunction with generative replay and an
auxiliary environmental classifier to limit catastrophic forgetting in
unsupervised representation learning. In our experiments, CULT significantly
outperforms baseline continual unsupervised learning approaches. Code for this
paper can be found here: https://github.com/oliveradk/cult
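The abstract does not spell out the typicality metric, so the following is only a minimal sketch of one plausible instantiation: flag a batch of VAE latent codes as atypical when their mean squared norm drifts away from what the standard-normal prior predicts. The class name `TypicalitySketch`, the threshold, and the chi-squared-based score are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np


class TypicalitySketch:
    """Illustrative typicality-based shift detector (assumed, not CULT's exact metric).

    Under a standard-normal VAE prior, a d-dimensional latent code z satisfies
    E[||z||^2] = d with Var[||z||^2] = 2d, so a batch whose mean squared norm
    drifts far from d is flagged as atypical, i.e. a possible environment change.
    """

    def __init__(self, latent_dim, threshold=3.0):
        self.latent_dim = latent_dim  # dimensionality d of the latent space
        self.threshold = threshold    # allowed deviation in standard-error units (hypothetical)

    def score(self, z_batch):
        # z_batch: (batch_size, latent_dim) array of encoder means
        sq_norms = np.sum(z_batch ** 2, axis=1)
        mean = self.latent_dim
        std_of_batch_mean = np.sqrt(2.0 * self.latent_dim / len(z_batch))
        return abs(sq_norms.mean() - mean) / std_of_batch_mean

    def shift_detected(self, z_batch):
        return self.score(z_batch) > self.threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    detector = TypicalitySketch(latent_dim=32)
    in_dist = rng.normal(size=(64, 32))           # codes consistent with the prior
    shifted = rng.normal(loc=1.5, size=(64, 32))  # codes after a distribution shift
    print(detector.shift_detected(in_dist))       # typically False
    print(detector.shift_detected(shifted))       # True
```

In CULT the detected shift is then combined with generative replay and the auxiliary environment classifier to protect previously learned representations; the sketch above covers only the detection step.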
Related papers
- No Regrets: Investigating and Improving Regret Approximations for Curriculum Discovery [53.08822154199948]
Unsupervised Environment Design (UED) methods have gained recent attention as their adaptive curricula promise to enable agents to be robust to in- and out-of-distribution tasks.
This work investigates how existing UED methods select training environments, focusing on task prioritisation metrics.
We develop a method that directly trains on scenarios with high learnability.
arXiv Detail & Related papers (2024-08-27T14:31:54Z)
- Unsupervised Continual Anomaly Detection with Contrastively-learned Prompt [80.43623986759691]
We introduce a novel Unsupervised Continual Anomaly Detection framework called UCAD.
The framework equips unsupervised anomaly detection (UAD) with continual learning capability through contrastively-learned prompts.
We conduct comprehensive experiments and set the benchmark on unsupervised continual anomaly detection and segmentation.
arXiv Detail & Related papers (2024-01-02T03:37:11Z)
- A Distinct Unsupervised Reference Model From The Environment Helps Continual Learning [5.332329421663282]
Open-Set Semi-Supervised Continual Learning (OSSCL) is a more realistic semi-supervised continual learning setting.
We present a model with two distinct parts: (i) the reference network captures general-purpose and task-agnostic knowledge in the environment by using a broad spectrum of unlabeled samples, and (ii) the learner network is designed to learn task-specific representations by exploiting supervised samples.
arXiv Detail & Related papers (2023-01-11T15:05:36Z)
- Latent Spectral Regularization for Continual Learning [21.445600749028923]
We study catastrophic forgetting by investigating the geometric characteristics of the learner's latent space.
We propose a geometric regularizer that enforces weak requirements on the Laplacian spectrum of the latent space.
arXiv Detail & Related papers (2023-01-09T13:56:59Z)
- Hyperspherical Consistency Regularization [45.00073340936437]
We explore the relationship between self-supervised learning and supervised learning, and study how self-supervised learning helps robust data-efficient deep learning.
We propose hyperspherical consistency regularization (HCR), a simple yet effective plug-and-play method, to regularize the classifier using feature-dependent information and thus avoid bias from labels.
arXiv Detail & Related papers (2022-06-02T02:41:13Z)
- Task-agnostic Continual Learning with Hybrid Probabilistic Models [75.01205414507243]
We propose HCL, a Hybrid generative-discriminative approach to Continual Learning for classification.
A normalizing flow is used to learn the data distribution, perform classification, identify task changes, and avoid forgetting.
We demonstrate the strong performance of HCL on a range of continual learning benchmarks such as split-MNIST, split-CIFAR, and SVHN-MNIST.
arXiv Detail & Related papers (2021-06-24T05:19:26Z)
- Supervised Anomaly Detection via Conditional Generative Adversarial Network and Ensemble Active Learning [24.112455929818484]
Anomaly detection has wide applications in machine intelligence but is still a difficult unsolved problem.
Traditional unsupervised anomaly detectors are suboptimal while supervised models can easily make biased predictions.
We present a new supervised anomaly detector by introducing the novel Ensemble Active Learning Generative Adversarial Network (EAL-GAN).
arXiv Detail & Related papers (2021-04-24T13:47:50Z)
- ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning [52.831894583501395]
Continual learning usually assumes the incoming data are fully labeled, which is often not the case in real applications.
We propose deep Online Replay with Discriminator Consistency (ORDisCo) to interdependently learn a classifier with a conditional generative adversarial network (GAN); a minimal sketch of the generative-replay idea shared with CULT appears after this list.
We show ORDisCo achieves significant performance improvements on various semi-supervised learning benchmark datasets for SSCL.
arXiv Detail & Related papers (2021-01-02T09:04:14Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- Unsupervised Domain Adaptation in Person re-ID via k-Reciprocal Clustering and Large-Scale Heterogeneous Environment Synthesis [76.46004354572956]
We introduce an unsupervised domain adaptation approach for person re-identification.
Experimental results show that the proposed ktCUDA and SHRED approach achieves an average improvement of +5.7 mAP in re-identification performance.
arXiv Detail & Related papers (2020-01-14T17:43:52Z)
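Several entries above, like the CULT abstract itself, HCL, and ORDisCo, limit forgetting by replaying samples from a generative model of past data. The sketch below shows that shared idea in its simplest form: keep a frozen snapshot of the decoder trained on earlier environments, sample pseudo-data from it, and mix it into the current training batch. The function names, the `frozen_decoder` interface, and the 50/50 mixing ratio are illustrative assumptions, not any of these papers' exact procedures.

```python
import numpy as np


def replay_batch(frozen_decoder, latent_dim, n, rng):
    """Draw n pseudo-samples from a frozen generator of past environments.

    frozen_decoder: callable mapping latent codes of shape (n, latent_dim) to data,
    e.g. a decoder snapshot taken when a shift detector last flagged a new environment.
    (Hypothetical interface, for illustration only.)
    """
    z = rng.normal(size=(n, latent_dim))
    return frozen_decoder(z)


def mixed_batch(real_batch, frozen_decoder, latent_dim, rng, replay_frac=0.5):
    """Combine current-environment data with replayed pseudo-data.

    Training on the mixture keeps the model fitting earlier environments,
    which is what limits catastrophic forgetting in replay-based methods.
    """
    n_replay = int(len(real_batch) * replay_frac)
    fake = replay_batch(frozen_decoder, latent_dim, n_replay, rng)
    return np.concatenate([real_batch, fake], axis=0)
```

The mechanics are the same across these methods; they differ mainly in the choice of generator: CULT replays from its VAE, ORDisCo from a conditional GAN, and HCL from a normalizing flow.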
This list is automatically generated from the titles and abstracts of the papers on this site.