Non-Exhaustive Learning Using Gaussian Mixture Generative Adversarial
Networks
- URL: http://arxiv.org/abs/2106.14344v1
- Date: Mon, 28 Jun 2021 00:20:22 GMT
- Title: Non-Exhaustive Learning Using Gaussian Mixture Generative Adversarial
Networks
- Authors: Jun Zhuang, Mohammad Al Hasan
- Abstract summary: We propose a new online non-exhaustive learning model, namely, Non-Exhaustive Gaussian Mixture Generative Adversarial Networks (NE-GM-GAN).
Our proposed model synthesizes a Gaussian-mixture-based latent representation over a deep generative model, such as a GAN, for incremental detection of instances of emerging classes in the test data.
- Score: 3.040775019394542
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Supervised learning, while deployed in real-life scenarios, often encounters
instances of unknown classes. Conventional algorithms for training a supervised
learning model do not provide an option to detect such instances, so they
misclassify such instances with 100% probability. Open Set Recognition (OSR)
and Non-Exhaustive Learning (NEL) are potential solutions to overcome this
problem. Most existing methods of OSR first classify members of existing
classes and then identify instances of new classes. However, many existing
OSR methods only make a binary decision, i.e., they only detect the existence
of an unknown class. Hence, such methods cannot distinguish among test
instances belonging to incremental unseen classes. On the other hand, the
majority of NEL methods make a parametric assumption about the data
distribution, which often fails to yield good results because real-life
complex datasets rarely follow a well-known distribution. In
this paper, we propose a new online non-exhaustive learning model, namely,
Non-Exhaustive Gaussian Mixture Generative Adversarial Networks (NE-GM-GAN) to
address these issues. Our proposed model synthesizes a Gaussian-mixture-based
latent representation over a deep generative model, such as a GAN, for
incremental detection of instances of emerging classes in the test data.
Extensive experimental results on several benchmark datasets show that
NE-GM-GAN significantly outperforms the state-of-the-art methods in detecting
instances of novel classes in streaming data.
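As a rough, non-authoritative illustration of the core idea, the sketch below fits a Gaussian mixture over latent codes of known classes and flags low-likelihood test points as emerging-class candidates, using scikit-learn. The paper's GAN-based synthesis and online update scheme are not reproduced, and the `encode` function is a hypothetical stand-in for a learned encoder.

```python
# Sketch: model latent codes of known classes with a Gaussian mixture and
# flag low-likelihood test points as candidate emerging-class instances.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def encode(x):
    # Hypothetical stand-in for a learned encoder; inputs here are
    # already treated as latent vectors.
    return x

# Latent codes of three known classes (toy, well-separated blobs).
z_known = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 2))
                     for c in ([0, 0], [4, 0], [0, 4])])

gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(encode(z_known))

# Novelty threshold taken from the known data
# (1st percentile of the training log-likelihoods).
tau = np.percentile(gmm.score_samples(z_known), 1)

z_test = np.vstack([rng.normal([4, 0], 0.3, (5, 2)),   # known class
                    rng.normal([8, 8], 0.3, (5, 2))])  # emerging class
is_novel = gmm.score_samples(encode(z_test)) < tau
print(is_novel)  # typically: first five False (known), last five True (novel)
```

In NE-GM-GAN itself the latent representation is synthesized by the deep generative model rather than assumed, so this sketch only captures the scoring side of the approach.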
Related papers
- Exploring Diverse Representations for Open Set Recognition [51.39557024591446]
Open set recognition (OSR) requires the model to classify samples of known (closed-set) classes while rejecting unknown samples at test time.
Currently, generative models often perform better than discriminative models in OSR.
We propose a new model, namely Multi-Expert Diverse Attention Fusion (MEDAF), that learns diverse representations in a discriminative way.
arXiv Detail & Related papers (2024-01-12T11:40:22Z)
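The summary above does not spell out MEDAF's architecture. Purely as a generic sketch of attention-weighted fusion of several expert heads (all shapes and weight matrices below are hypothetical; the paper's gating, losses, and diversity constraints are not reproduced):

```python
# Generic attention-weighted fusion of expert heads, loosely in the
# spirit of multi-expert fusion; not MEDAF's actual architecture.
import numpy as np

rng = np.random.default_rng(1)

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

n, d, k, n_experts = 4, 16, 5, 3                # batch, feat dim, classes, experts
features = rng.normal(size=(n, d))

W_experts = rng.normal(size=(n_experts, d, k))  # one linear head per expert
W_gate = rng.normal(size=(d, n_experts))        # gating head

expert_logits = np.einsum("nd,edk->enk", features, W_experts)  # (E, N, K)
attn = softmax(features @ W_gate)                              # (N, E)
fused = np.einsum("ne,enk->nk", attn, expert_logits)           # (N, K)
print(fused.shape)  # (4, 5)
```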
- FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning [21.088762527081883]
Exemplar-free class-incremental learning (CIL) poses several challenges since it prohibits the rehearsal of data from previous tasks.
Recent approaches that incrementally learn the classifier while keeping the feature extractor frozen after the first task have gained much attention.
We explore prototypical networks for CIL, which generate new class prototypes using the frozen feature extractor and classify the features based on the Euclidean distance to the prototypes.
arXiv Detail & Related papers (2023-09-25T11:54:33Z)
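The baseline prototype rule described in the FeCAM entry above is easy to sketch: prototypes are class means of (here synthetic) frozen features, and prediction picks the nearest prototype in Euclidean distance. FeCAM itself goes further by exploiting heterogeneous class distributions; that refinement is not shown.

```python
# Toy nearest-class-prototype classifier over frozen features.
import numpy as np

def fit_prototypes(feats, labels):
    classes = np.unique(labels)
    return classes, np.stack([feats[labels == c].mean(axis=0) for c in classes])

def predict(feats, classes, protos):
    # Euclidean distance from every feature to every prototype.
    d = np.linalg.norm(feats[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(5, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
classes, protos = fit_prototypes(X, y)
print((predict(X, classes, protos) == y).mean())  # ~1.0 on this toy data
```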
- Parametric Classification for Generalized Category Discovery: A Baseline Study [70.73212959385387]
Generalized Category Discovery (GCD) aims to discover novel categories in unlabelled datasets using knowledge learned from labelled samples.
We investigate the failure of parametric classifiers, verify the effectiveness of previous design choices when high-quality supervision is available, and identify unreliable pseudo-labels as a key problem.
We propose a simple yet effective parametric classification method that benefits from entropy regularisation, achieves state-of-the-art performance on multiple GCD benchmarks and shows strong robustness to unknown class numbers.
arXiv Detail & Related papers (2022-11-21T18:47:11Z)
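One common form of entropy regularisation for a parametric classifier, shown only as an assumption-laden sketch (the paper's exact objective and full training recipe may differ), is to maximise the entropy of the batch-mean prediction so the classifier does not collapse onto a few categories:

```python
# Generic mean-prediction entropy regulariser for a parametric classifier.
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mean_entropy_reg(logits):
    p_mean = softmax(logits).mean(axis=0)            # average prediction over the batch
    return -(p_mean * np.log(p_mean + 1e-12)).sum()  # entropy of the mean

rng = np.random.default_rng(3)
logits = rng.normal(size=(32, 10))
# A total loss would look like: supervised CE - lambda * mean_entropy_reg(logits),
# i.e. the regulariser is maximised to keep category usage balanced.
print(mean_entropy_reg(logits))
```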
- Large-Scale Open-Set Classification Protocols for ImageNet [0.0]
Open-Set Classification (OSC) intends to adapt closed-set classification models to real-world scenarios.
We propose three open-set protocols that provide rich datasets of natural images with different levels of similarity between known and unknown classes.
We propose a new validation metric that can be employed to assess whether the training of deep learning models addresses both the classification of known samples and the rejection of unknown samples.
arXiv Detail & Related papers (2022-10-13T07:01:34Z)
- Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that the resulting Gaussian Mixture Replay (GMR) model achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
arXiv Detail & Related papers (2021-04-19T12:26:26Z)
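The pseudo-rehearsal idea summarised above can be illustrated with scikit-learn's GaussianMixture: fit it on the current task's data, then sample synthetic rehearsal data when the next task arrives. This is a data-level sketch only; GMR's use of one model for both generator and classifier is not reproduced.

```python
# Sketch of GMM-based pseudo-rehearsal: replay generated samples
# instead of stored exemplars from earlier tasks.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
X_task1 = rng.normal(loc=0.0, scale=1.0, size=(500, 8))

gmm = GaussianMixture(n_components=5, random_state=0).fit(X_task1)

# Later, for task 2: draw pseudo-samples in place of real task-1 data.
X_replay, _ = gmm.sample(200)                          # generated rehearsal batch
X_task2 = rng.normal(loc=3.0, scale=1.0, size=(500, 8))
X_train = np.vstack([X_replay, X_task2])
print(X_train.shape)  # (700, 8)
```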
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set, which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)
- Open Set Recognition with Conditional Probabilistic Generative Models [51.40872765917125]
We propose Conditional Probabilistic Generative Models (CPGM) for open set recognition.
CPGM can not only detect unknown samples but also classify known classes by forcing different latent features to approximate conditional Gaussian distributions.
Experiment results on multiple benchmark datasets reveal that the proposed method significantly outperforms the baselines.
arXiv Detail & Related papers (2020-08-12T06:23:49Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Conditional Gaussian Distribution Learning for Open Set Recognition [10.90687687505665]
We propose Conditional Gaussian Distribution Learning (CGDL) for open set recognition.
In addition to detecting unknown samples, this method can also classify known samples by forcing different latent features to approximate different Gaussian models.
Experiments on several standard image datasets reveal that the proposed method significantly outperforms the baseline method and achieves new state-of-the-art results.
arXiv Detail & Related papers (2020-03-19T14:32:08Z)
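CPGM and CGDL above share one mechanism: per-class Gaussians over learned latent features, with unknowns rejected by a likelihood threshold. A minimal sketch, with plain synthetic features standing in for learned latent codes and an arbitrary, hypothetical threshold:

```python
# Per-class Gaussian open-set classifier: predict the most likely known
# class, or reject as "unknown" if even the best log-likelihood is low.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)
Z = {0: rng.normal(0, 0.5, (200, 2)),   # latent codes of class 0
     1: rng.normal(4, 0.5, (200, 2))}   # latent codes of class 1

models = {c: multivariate_normal(z.mean(axis=0), np.cov(z.T)) for c, z in Z.items()}
tau = -8.0  # hypothetical rejection threshold; would be tuned on validation data

def predict(z):
    scores = {c: m.logpdf(z) for c, m in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > tau else "unknown"

print(predict(np.array([0.1, -0.2])))   # near class 0 -> 0
print(predict(np.array([10.0, 10.0])))  # far from both -> "unknown"
```

Both papers learn these latent features with deep generative models; this sketch only shows the Gaussian scoring and rejection step.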