Continual Improvement of Threshold-Based Novelty Detection
- URL: http://arxiv.org/abs/2309.02551v1
- Date: Tue, 5 Sep 2023 19:37:45 GMT
- Title: Continual Improvement of Threshold-Based Novelty Detection
- Authors: Abe Ejilemele and Jorge Mendez-Mendez
- Abstract summary: A family of techniques for detecting novelty relies on thresholds of similarity between observed data points and the data used for training.
We propose a new method for automatically selecting these thresholds utilizing a linear search and leave-one-out cross-validation on the in-distribution (ID) classes.
We demonstrate that this novel method for selecting thresholds results in improved total accuracy on MNIST, Fashion MNIST, and CIFAR-10.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When evaluated in dynamic, open-world situations, neural networks struggle to
detect unseen classes. This issue complicates the deployment of continual
learners in realistic environments where agents are not explicitly informed
when novel categories are encountered. A common family of techniques for
detecting novelty relies on thresholds of similarity between observed data
points and the data used for training. However, these methods often require
manually specifying (ahead of time) the value of these thresholds, and are
therefore incapable of adapting to the nature of the data. We propose a new
method for automatically selecting these thresholds utilizing a linear search
and leave-one-out cross-validation on the ID classes. We demonstrate that this
novel method for selecting thresholds results in improved total accuracy on
MNIST, Fashion MNIST, and CIFAR-10.
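The thresholding procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes nearest-neighbor Euclidean distance in feature space as the similarity measure, a single global threshold, and a user-supplied candidate grid for the linear search; all function names are hypothetical.

```python
import numpy as np

def novelty_scores(train_feats, query_feats):
    """Distance of each query point to its nearest training point."""
    # (n_query, n_train) pairwise Euclidean distances
    d = np.linalg.norm(query_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    return d.min(axis=1)

def select_threshold(feats, candidates):
    """Linear search over candidate thresholds, scored by leave-one-out
    cross-validation on the in-distribution data: a good threshold keeps
    held-out ID points below it (i.e., does not flag them as novel)."""
    best_t, best_acc = candidates[0], -1.0
    for t in candidates:
        correct = 0
        for i in range(len(feats)):
            rest = np.delete(feats, i, axis=0)          # leave point i out
            score = novelty_scores(rest, feats[i:i + 1])[0]
            correct += score <= t                        # held-out ID point accepted
        acc = correct / len(feats)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def is_novel(train_feats, query_feats, threshold):
    """Flag query points farther than the threshold from all training data."""
    return novelty_scores(train_feats, query_feats) > threshold
```

In practice one would run this per class (as the abstract's "on the ID classes" suggests) and on learned features rather than raw inputs; the sketch only shows the search-and-validate loop.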
Related papers
- CONCLAD: COntinuous Novel CLAss Detector [5.857367484128867]
We present a comprehensive solution to the problem of continual novel class detection in post-deployment data.
We employ an iterative uncertainty estimation algorithm to differentiate between samples of known and novel classes.
We will release our code upon acceptance.
arXiv Detail & Related papers (2024-12-13T01:41:28Z)
- Activate and Reject: Towards Safe Domain Generalization under Category Shift [71.95548187205736]
We study a practical problem of Domain Generalization under Category Shift (DGCS)
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z)
- A Robust Likelihood Model for Novelty Detection [8.766411351797883]
Current approaches to novelty or anomaly detection are based on deep neural networks.
We propose a new prior that aims at learning a robust likelihood for the novelty test, as a defense against attacks.
We also integrate the same prior with a state-of-the-art novelty detection approach.
arXiv Detail & Related papers (2023-06-06T01:02:31Z)
- Improving novelty detection with generative adversarial networks on hand gesture data [1.3750624267664153]
We propose a novel way of solving the issue of classification of out-of-vocabulary gestures using Artificial Neural Networks (ANNs) trained in the Generative Adversarial Network (GAN) framework.
A generative model augments the data set in an online fashion with new samples and target vectors, while a discriminative model determines the class of the samples.
arXiv Detail & Related papers (2023-04-13T17:50:15Z)
- Automatic Change-Point Detection in Time Series via Deep Learning [8.43086628139493]
We show how to automatically generate new offline detection methods based on training a neural network.
We present theory that quantifies the error rate for such an approach, and how it depends on the amount of training data.
Our method also shows strong results in detecting and localising changes in activity based on accelerometer data.
arXiv Detail & Related papers (2022-11-07T20:59:14Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- Bridging Non Co-occurrence with Unlabeled In-the-wild Data for Incremental Object Detection [56.22467011292147]
Several incremental learning methods are proposed to mitigate catastrophic forgetting for object detection.
Despite the effectiveness, these methods require co-occurrence of the unlabeled base classes in the training data of the novel classes.
We propose the use of unlabeled in-the-wild data to bridge the non-co-occurrence caused by the missing base classes during the training of additional novel classes.
arXiv Detail & Related papers (2021-10-28T10:57:25Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Open Set Recognition with Conditional Probabilistic Generative Models [51.40872765917125]
We propose Conditional Probabilistic Generative Models (CPGM) for open set recognition.
CPGM can not only detect unknown samples but also classify known classes by forcing different latent features to approximate conditional Gaussian distributions.
Experiment results on multiple benchmark datasets reveal that the proposed method significantly outperforms the baselines.
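The idea of classifying known classes while rejecting unknowns via class-conditional Gaussians can be illustrated with a toy sketch. This is not the CPGM model (which learns latent features with a deep generative model); here we simply fit a diagonal Gaussian per class in a fixed feature space, with hypothetical function names and a user-chosen rejection threshold.

```python
import numpy as np

def fit_class_gaussians(feats, labels):
    """Per-class mean and diagonal variance of the features."""
    params = {}
    for c in np.unique(labels):
        x = feats[labels == c]
        params[c] = (x.mean(axis=0), x.var(axis=0) + 1e-6)  # jitter avoids zero variance
    return params

def log_likelihood(x, mean, var):
    """Log density of x under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def classify_open_set(x, params, threshold):
    """Return the best-scoring known class, or None when the sample
    looks unlike every known class (open-set rejection)."""
    scores = {c: log_likelihood(x, m, v) for c, (m, v) in params.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

A sample near a class mean is assigned to that class; a sample far from all class Gaussians falls below the threshold and is rejected as unknown.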
arXiv Detail & Related papers (2020-08-12T06:23:49Z) - AdaS: Adaptive Scheduling of Stochastic Gradients [50.80697760166045]
We introduce the notions of "knowledge gain" and "mapping condition" and propose a new algorithm called Adaptive Scheduling (AdaS).
Experimentation reveals that, using the derived metrics, AdaS exhibits: (a) faster convergence and superior generalization over existing adaptive learning methods; and (b) lack of dependence on a validation set to determine when to stop training.
arXiv Detail & Related papers (2020-06-11T16:36:31Z) - Uncertainty-Aware Deep Classifiers using Generative Models [7.486679152591502]
Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions.
Some recent approaches quantify uncertainty directly by training the model to output high uncertainty for the data samples close to class boundaries or from the outside of the training distribution.
We develop a novel neural network model that is able to express both aleatoric and epistemic uncertainty to distinguish decision boundary and out-of-distribution regions.
arXiv Detail & Related papers (2020-06-07T15:38:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.