Out-Of-Distribution Detection In Unsupervised Continual Learning
- URL: http://arxiv.org/abs/2204.05462v1
- Date: Tue, 12 Apr 2022 01:24:54 GMT
- Title: Out-Of-Distribution Detection In Unsupervised Continual Learning
- Authors: Jiangpeng He and Fengqing Zhu
- Abstract summary: Unsupervised continual learning aims to learn new tasks incrementally without requiring human annotations.
An out-of-distribution detector is required at the outset to identify whether each incoming sample corresponds to a new task or an already-learned one.
We propose a novel OOD detection method that first corrects the output bias and then enhances the output confidence for in-distribution data.
- Score: 7.800379384628357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised continual learning aims to learn new tasks incrementally without
requiring human annotations. However, most existing methods, especially those
targeted at image classification, only work in a simplified scenario that
assumes all new data belong to new tasks, which is not realistic when class
labels are not provided. Therefore, to perform unsupervised continual learning
in real-life applications, an out-of-distribution detector is required at the
outset to identify whether each incoming sample corresponds to a new task or to
already-learned tasks, a problem that remains under-explored. In this work, we
formulate the problem of Out-of-distribution Detection in Unsupervised
Continual Learning (OOD-UCL) together with a corresponding evaluation protocol. In
addition, we propose a novel OOD detection method that first corrects the output
bias and then enhances the output confidence for in-distribution data
based on task discriminativeness; it can be applied directly without
modifying the learning procedures and objectives of continual learning. Our
method is evaluated on the CIFAR-100 dataset following the proposed evaluation
protocol, and we show improved performance compared with existing OOD detection
methods under the unsupervised continual learning scenario.
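The scoring idea described above (correcting a per-class output bias, then using a confidence score to decide whether a sample is in-distribution) can be illustrated with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the bias estimate, the temperature-scaled maximum-softmax-probability score, the helper names, and the threshold are all hypothetical choices.

```python
import numpy as np

def estimate_class_bias(logits_id: np.ndarray) -> np.ndarray:
    """Estimate a per-class output bias from logits of data belonging to
    already-learned tasks (e.g. replay exemplars). Illustrative only."""
    return logits_id.mean(axis=0) - logits_id.mean()

def id_confidence(logits: np.ndarray, class_bias: np.ndarray,
                  temperature: float = 2.0) -> np.ndarray:
    """Bias-corrected, temperature-scaled maximum softmax probability.
    Higher values indicate the sample looks in-distribution."""
    corrected = (logits - class_bias) / temperature
    corrected -= corrected.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(corrected)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def flag_new_task(logits_new: np.ndarray, class_bias: np.ndarray,
                  threshold: float = 0.5) -> np.ndarray:
    """Return True for samples treated as OOD, i.e. routed to a new task."""
    return id_confidence(logits_new, class_bias) < threshold
```

In the OOD-UCL setting, samples flagged as OOD would be routed to the continual learner as a new task, while the remaining samples are treated as data from already-learned tasks; the paper's actual bias-correction and confidence-enhancement terms are based on task discriminativeness and differ from this toy score.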
Related papers
- Unsupervised Transfer Learning via Adversarial Contrastive Training [3.227277661633986]
We propose a novel unsupervised transfer learning approach using adversarial contrastive training (ACT).
Our experimental results demonstrate outstanding classification accuracy with both fine-tuned linear probe and K-NN protocol across various datasets.
arXiv Detail & Related papers (2024-08-16T05:11:52Z) - Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC)
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z) - XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z) - Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce BAdam, a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z) - SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning [168.89470249446023]
We present SURF, a semi-supervised reward learning framework that utilizes a large amount of unlabeled samples with data augmentation.
In order to leverage unlabeled samples for reward learning, we infer pseudo-labels of the unlabeled samples based on the confidence of the preference predictor.
Our experiments demonstrate that our approach significantly improves the feedback-efficiency of the preference-based method on a variety of locomotion and robotic manipulation tasks.
arXiv Detail & Related papers (2022-03-18T16:50:38Z) - Agree to Disagree: Diversity through Disagreement for Better Transferability [54.308327969778155]
We propose D-BAT (Diversity-By-disAgreement Training), which enforces agreement among the models on the training data but disagreement on out-of-distribution data.
We show how D-BAT naturally emerges from the notion of generalized discrepancy.
arXiv Detail & Related papers (2022-02-09T12:03:02Z) - Continually Learning Self-Supervised Representations with Projected Functional Regularization [39.92600544186844]
Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised methods.
However, these methods are unable to acquire new knowledge incrementally; they are, in fact, mostly used only as a pre-training phase with IID data.
To prevent forgetting of previous knowledge, we propose the usage of functional regularization.
arXiv Detail & Related papers (2021-12-30T11:59:23Z) - Gradient-based Novelty Detection Boosted by Self-supervised Binary Classification [20.715158729811755]
Novelty detection aims to automatically identify out-of-distribution (OOD) data, without any prior knowledge of them.
We propose a novel, self-supervised approach that does not rely on any pre-defined OOD data.
In the evaluation with multiple datasets, the proposed approach consistently outperforms state-of-the-art supervised and unsupervised methods.
arXiv Detail & Related papers (2021-12-18T01:17:15Z) - Lifelong Intent Detection via Multi-Strategy Rebalancing [18.424132535727217]
In this paper, we propose the Lifelong Intent Detection (LID) task, which continually trains an intent detection model on new data to learn newly emerging intents.
Existing lifelong learning methods usually suffer from a serious imbalance between old and new data in the LID task.
We propose a novel lifelong learning method, Multi-Strategy Rebalancing (MSR), which consists of cosine normalization, hierarchical knowledge distillation, and inter-class margin loss.
arXiv Detail & Related papers (2021-08-10T04:35:13Z) - On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance in detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z) - Continual Learning From Unlabeled Data Via Deep Clustering [7.704949298975352]
Continual learning aims to learn new tasks incrementally using fewer computation and memory resources, instead of retraining the model from scratch whenever a new task arrives.
We introduce a new framework that makes continual learning feasible in the unsupervised setting by using pseudo-labels obtained from cluster assignments to update the model (a minimal sketch of this pseudo-labeling step follows this list).
arXiv Detail & Related papers (2021-04-14T23:46:17Z)
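As a rough illustration of the pseudo-labeling step referenced in the last entry above, the sketch below clusters unlabeled feature embeddings and reuses the cluster assignments as training targets. It is an illustrative example rather than the cited paper's code; the use of scikit-learn's KMeans, the embedding dimensionality, and the assumed number of novel classes are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_labels_from_clusters(features: np.ndarray, n_clusters: int,
                                seed: int = 0) -> np.ndarray:
    """Cluster unlabeled embeddings and treat the cluster assignments
    as pseudo-labels for a supervised-style model update."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(features)

# Example: 1000 unlabeled 128-d embeddings from an incoming task,
# assumed to contain 10 novel classes.
feats = np.random.randn(1000, 128).astype(np.float32)
labels = pseudo_labels_from_clusters(feats, n_clusters=10)
# `labels` can now drive a cross-entropy update of the classifier head,
# just as ground-truth labels would in the supervised case.
```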