SHELS: Exclusive Feature Sets for Novelty Detection and Continual
Learning Without Class Boundaries
- URL: http://arxiv.org/abs/2206.13720v1
- Date: Tue, 28 Jun 2022 03:09:55 GMT
- Title: SHELS: Exclusive Feature Sets for Novelty Detection and Continual
Learning Without Class Boundaries
- Authors: Meghna Gummadi, David Kent, Jorge A. Mendez and Eric Eaton
- Abstract summary: We introduce a Sparse High-level-Exclusive, Low-level-Shared feature representation (SHELS).
SHELS encourages learning exclusive sets of high-level features and essential, shared low-level features.
We show that using SHELS for novelty detection results in statistically significant improvements over state-of-the-art OOD detection approaches.
- Score: 22.04165296584446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep neural networks (DNNs) have achieved impressive classification
performance in closed-world learning scenarios, they typically fail to
generalize to unseen categories in dynamic open-world environments, in which
the number of concepts is unbounded. In contrast, human and animal learners
have the ability to incrementally update their knowledge by recognizing and
adapting to novel observations. In particular, humans characterize concepts via
exclusive (unique) sets of essential features, which are used for both
recognizing known classes and identifying novelty. Inspired by natural
learners, we introduce a Sparse High-level-Exclusive, Low-level-Shared feature
representation (SHELS) that simultaneously encourages learning exclusive sets
of high-level features and essential, shared low-level features. The
exclusivity of the high-level features enables the DNN to automatically detect
out-of-distribution (OOD) data, while the efficient use of capacity via sparse
low-level features permits accommodating new knowledge. The resulting approach
uses OOD detection to perform class-incremental continual learning without
known class boundaries. We show that using SHELS for novelty detection results
in statistically significant improvements over state-of-the-art OOD detection
approaches over a variety of benchmark datasets. Further, we demonstrate that
the SHELS model mitigates catastrophic forgetting in a class-incremental
learning setting, enabling a combined novelty detection and accommodation
framework that supports learning in open-world settings.
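The abstract describes two coupled ingredients: exclusive sets of high-level features per class and a compact set of shared, sparse low-level features. Purely as an illustration (not the authors' code), a minimal PyTorch sketch of such an objective might pair a group-sparsity penalty on the low-level weights with a penalty on the overlap between per-class high-level activation signatures; the layer sizes, penalty weights, and names below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Toy network: shared low-level layers feeding a linear classification head."""
    def __init__(self, in_dim=784, hidden=256, feat=64, n_classes=10):
        super().__init__()
        self.low = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, feat), nn.ReLU())
        self.head = nn.Linear(feat, n_classes)

    def forward(self, x):
        h = self.low(x)          # shared (ideally sparse) low-level features
        return self.head(h), h   # logits and high-level activations

def shels_style_loss(model, x, y, n_classes, lam_sparse=1e-4, lam_excl=1e-2):
    logits, h = model(x)
    ce = F.cross_entropy(logits, y)

    # (1) Group sparsity on low-level weights, so only a compact set of
    #     shared low-level features stays active (spare capacity for new classes).
    sparse = sum(m.weight.norm(dim=1).sum()
                 for m in model.low if isinstance(m, nn.Linear))

    # (2) Exclusivity of high-level features: per-class mean activation
    #     "signatures" are pushed toward mutual (near-)orthogonality.
    sigs = [F.normalize(h[y == c].mean(0), dim=0)
            for c in range(n_classes) if (y == c).any()]
    if len(sigs) > 1:
        S = torch.stack(sigs)                      # (classes in batch, feat)
        sim = S @ S.t()                            # pairwise cosine similarity
        excl = (sim.sum() - sim.diagonal().sum()) / (len(sigs) * (len(sigs) - 1))
    else:
        excl = h.new_zeros(())

    # An input whose activation signature is far from every stored class
    # signature can then be flagged as out-of-distribution at test time.
    return ce + lam_sparse * sparse + lam_excl * excl
```

The exclusivity term is what would make a simple signature-matching test usable as an OOD detector in this sketch.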
Related papers
- Deep Active Learning in the Open World [13.2318584850986]
Machine learning models deployed in open-world scenarios often encounter unfamiliar conditions and perform poorly in unanticipated situations.
We introduce ALOE, a novel active learning algorithm for open-world environments designed to enhance model adaptation by incorporating new OOD classes.
Our findings reveal a crucial tradeoff between enhancing known-class performance and discovering new classes, setting the stage for future advancements in open-world machine learning.
arXiv Detail & Related papers (2024-11-10T04:04:20Z)
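The ALOE summary above is high level; purely as an illustration of the kind of acquisition rule an open-world active learner could use (not ALOE's actual criterion), one labeling round might split the budget between uncertain known-class samples and likely-OOD samples:

```python
import numpy as np

def select_queries(probs, ood_scores, budget, ood_fraction=0.5):
    """Pick a labeling batch from an unlabeled pool (illustrative rule only).

    probs:      (N, K) softmax probabilities over the known classes
    ood_scores: (N,)   higher = more likely out-of-distribution
    Half the budget (by default) goes to likely-new-class samples, the rest
    to the most uncertain known-class samples.
    """
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    n_ood = int(budget * ood_fraction)
    ood_idx = np.argsort(-ood_scores)[:n_ood]        # most OOD-looking samples
    ranked = np.argsort(-entropy)                    # most uncertain first
    ranked = ranked[~np.isin(ranked, ood_idx)]       # avoid duplicate picks
    return np.concatenate([ood_idx, ranked[:budget - n_ood]])
```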
- High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning [54.86882315023791]
We propose an innovative approach called High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning (HDAFL).
HDAFL utilizes multiple convolutional kernels to automatically learn discriminative regions highly correlated with attributes in images.
We also introduce a Transformer-based attribute discrimination encoder to enhance the discriminative capability among attributes.
arXiv Detail & Related papers (2024-04-07T13:17:47Z)
- Learning Prompt with Distribution-Based Feature Replay for Few-Shot Class-Incremental Learning [56.29097276129473]
We propose a simple yet effective framework, named Learning Prompt with Distribution-based Feature Replay (LP-DiF).
To prevent the learnable prompt from forgetting old knowledge in the new session, we propose a pseudo-feature replay approach.
When progressing to a new session, pseudo-features are sampled from old-class distributions combined with training images of the current session to optimize the prompt.
arXiv Detail & Related papers (2024-01-03T07:59:17Z)
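The distribution-based replay idea in LP-DiF lends itself to a compact illustration: keep a simple Gaussian per old class in feature space and sample pseudo-features from it alongside the current session's data. The estimator, sampling scheme, and interface below are assumptions for the sketch, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

class ClassGaussianBank:
    """Diagonal Gaussian per old class in feature space, sampled later as
    pseudo-features (a rough sketch of distribution-based feature replay)."""
    def __init__(self):
        self.stats = {}                              # class id -> (mean, std)

    def update(self, feats, class_id):
        self.stats[class_id] = (feats.mean(0), feats.std(0, unbiased=False) + 1e-4)

    def sample(self, n_per_class):
        feats, labels = [], []
        for c, (mu, std) in self.stats.items():
            feats.append(mu + std * torch.randn(n_per_class, mu.numel()))
            labels.extend([c] * n_per_class)
        return torch.cat(feats), torch.tensor(labels)

def session_loss(classifier, cur_feats, cur_labels, bank, n_replay=16):
    """Mix real current-session features with replayed pseudo-features of old
    classes when computing the loss used to tune the prompt parameters."""
    old_feats, old_labels = bank.sample(n_replay)
    feats = torch.cat([cur_feats, old_feats])
    labels = torch.cat([cur_labels, old_labels])
    return F.cross_entropy(classifier(feats), labels)
```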
- Incremental Object Detection with CLIP [36.478530086163744]
We propose using a visual-language model, such as CLIP, to generate text feature embeddings for different class sets.
We then employ super-classes to replace the unavailable novel classes in the early learning stage to simulate the incremental scenario.
We incorporate the finely recognized detection boxes as pseudo-annotations into the training process, thereby further improving the detection performance.
arXiv Detail & Related papers (2023-10-13T01:59:39Z)
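The first step the summary mentions, generating text embeddings for a class set with CLIP, can be sketched with an off-the-shelf CLIP implementation; the checkpoint name and prompt template below are illustrative choices, not necessarily the paper's.

```python
# Sketch only: CLIP text embeddings for a class set via Hugging Face transformers.
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

def class_text_embeddings(class_names, template="a photo of a {}"):
    prompts = [template.format(c) for c in class_names]
    tokens = tokenizer(prompts, padding=True, return_tensors="pt")
    with torch.no_grad():
        emb = model.get_text_features(**tokens)        # (num_classes, 512)
    return torch.nn.functional.normalize(emb, dim=-1)

# e.g. coarse "super-class" embeddings for the early stage vs. later fine classes
coarse = class_text_embeddings(["vehicle", "animal"])
fine = class_text_embeddings(["car", "bus", "dog", "cat"])
```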
- Evolving Knowledge Mining for Class Incremental Segmentation [113.59611699693092]
Class Incremental Semantic Segmentation (CISS) has recently attracted growing attention due to its significance in real-world applications.
We propose a novel method, Evolving kNowleDge minING, employing a frozen backbone.
We evaluate our method on two widely used benchmarks and consistently demonstrate new state-of-the-art performance.
arXiv Detail & Related papers (2023-06-03T07:03:15Z)
- Class-Specific Semantic Reconstruction for Open Set Recognition [101.24781422480406]
Open set recognition enables deep neural networks (DNNs) to identify samples of unknown classes.
We propose a novel method, called Class-Specific Semantic Reconstruction (CSSR), that integrates the power of auto-encoder (AE) and prototype learning.
Results of experiments conducted on multiple datasets show that the proposed method achieves outstanding performance in both close and open set recognition.
arXiv Detail & Related papers (2022-07-05T16:25:34Z)
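CSSR combines auto-encoding and prototype learning; a loose sketch of the underlying intuition, class-specific reconstruction error as an open-set score, is shown below. The architecture, threshold, and scoring rule are simplifications assumed for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ClassAE(nn.Module):
    """One tiny auto-encoder per known class over penultimate features."""
    def __init__(self, dim=512, bottleneck=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU())
        self.dec = nn.Linear(bottleneck, dim)

    def forward(self, z):
        return self.dec(self.enc(z))

def open_set_decision(feat, class_aes, threshold):
    """Classify by the best-reconstructing class AE; reject as unknown (-1)
    if even the best reconstruction error exceeds a validation-chosen
    threshold (the threshold here is an assumption)."""
    errors = torch.stack([((ae(feat) - feat) ** 2).mean(-1) for ae in class_aes])
    best_err, best_cls = errors.min(dim=0)
    unknown = torch.full_like(best_cls, -1)
    return torch.where(best_err < threshold, best_cls, unknown)
```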
- Spatio-temporal Relation Modeling for Few-shot Action Recognition [100.3999454780478]
We propose STRM, a few-shot action recognition framework that enhances class-specific feature discriminability while simultaneously learning higher-order temporal representations.
Our approach achieves an absolute gain of 3.5% in classification accuracy, as compared to the best existing method in the literature.
arXiv Detail & Related papers (2021-12-09T18:59:14Z)
- SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
arXiv Detail & Related papers (2021-06-29T08:08:33Z)
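SCARF's view construction is simple enough to sketch directly: for each example, a random subset of features is replaced with values drawn from those features' empirical marginals (here implemented by borrowing values from other rows of the batch), and the corrupted view is paired with the original for a standard contrastive loss. The corruption rate and implementation details below are illustrative assumptions.

```python
import numpy as np

def scarf_view(x_batch, corruption_rate=0.6, rng=None):
    """Corrupt a random subset of features in each row by replacing them with
    values drawn from those features' empirical marginals (taken from other
    rows of the batch)."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = x_batch.shape
    mask = rng.random((n, d)) < corruption_rate      # which cells to corrupt
    donors = rng.integers(0, n, size=(n, d))         # donor row per cell
    resampled = x_batch[donors, np.arange(d)]        # feature-wise marginal draw
    return np.where(mask, resampled, x_batch)

# Each original row and its corrupted view form a positive pair for an
# InfoNCE-style contrastive loss on top of an encoder network.
```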
- ConCAD: Contrastive Learning-based Cross Attention for Sleep Apnea Detection [16.938983046369263]
We propose ConCAD, a contrastive learning-based cross attention framework for sleep apnea detection.
Our proposed framework can be easily integrated into standard deep learning models to utilize expert knowledge and contrastive learning to boost performance.
arXiv Detail & Related papers (2021-05-07T02:38:56Z)
- Collective Decision of One-vs-Rest Networks for Open Set Recognition [0.0]
We propose a simple open set recognition (OSR) method based on the intuition that OSR performance can be maximized by setting strict and sophisticated decision boundaries.
The proposed method performed significantly better than the state-of-the-art methods by effectively reducing overgeneralization.
arXiv Detail & Related papers (2021-03-18T13:06:46Z)
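The collective-decision idea can be illustrated in a few lines: each known class gets a one-vs-rest classifier with a sigmoid output, and a sample that no classifier accepts is rejected as unknown. The threshold and combination rule below are the simplest possible choices, assumed for illustration rather than taken from the paper.

```python
import torch

def collective_decision(sigmoid_scores, threshold=0.5):
    """sigmoid_scores: (B, K) outputs of K one-vs-rest binary classifiers.
    Predict the most confident class; if no classifier accepts the sample
    (all scores below the threshold), label it as unknown (-1)."""
    best_score, best_cls = sigmoid_scores.max(dim=1)
    unknown = torch.full_like(best_cls, -1)
    return torch.where(best_score >= threshold, best_cls, unknown)
```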
- SEKD: Self-Evolving Keypoint Detection and Description [42.114065439674036]
We propose a self-supervised framework to learn an advanced local feature model from unlabeled natural images.
We benchmark the proposed method on homography estimation, relative pose estimation, and structure-from-motion tasks.
We will release our code along with the trained model publicly.
arXiv Detail & Related papers (2020-06-09T06:56:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.