Large-Scale Open-Set Classification Protocols for ImageNet
- URL: http://arxiv.org/abs/2210.06789v1
- Date: Thu, 13 Oct 2022 07:01:34 GMT
- Title: Large-Scale Open-Set Classification Protocols for ImageNet
- Authors: Jesus Andres Palechor Anacona, Annesha Bhoumik, Manuel Günther
- Abstract summary: Open-Set Classification (OSC) intends to adapt closed-set classification models to real-world scenarios.
We propose three open-set protocols that provide rich datasets of natural images with different levels of similarity between known and unknown classes.
We propose a new validation metric that can be employed to assess whether the training of deep learning models addresses both the classification of known samples and the rejection of unknown samples.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open-Set Classification (OSC) intends to adapt closed-set classification
models to real-world scenarios, where the classifier must correctly label
samples of known classes while rejecting previously unseen unknown samples.
Only recently has research started to investigate algorithms that are able to
handle these unknown samples correctly. Some of these approaches address OSC by
including negative samples in the training set that the classifier learns to
reject, expecting that these data increase the robustness of the classifier on
unknown classes. Most of these approaches are evaluated on small-scale and
low-resolution image datasets like MNIST, SVHN or CIFAR, which makes it
difficult to assess their applicability to the real world, and to compare them
among each other. We propose three open-set protocols that provide rich
datasets of natural images with different levels of similarity between known
and unknown classes. The protocols consist of subsets of ImageNet classes
selected to provide training and testing data closer to real-world scenarios.
Additionally, we propose a new validation metric that can be employed to assess
whether the training of deep learning models addresses both the classification
of known samples and the rejection of unknown samples. We use the protocols to
compare the performance of two baseline open-set algorithms to the standard
SoftMax baseline and find that the algorithms work well on negative samples
that have been seen during training, and partially on out-of-distribution
detection tasks, but that their performance drops in the presence of samples from
previously unseen unknown classes.
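The standard SoftMax baseline referenced in the abstract rejects unknown samples by thresholding the maximum class probability. The snippet below is a minimal sketch of that generic rejection rule, assuming a trained PyTorch classifier over the known classes; the model and the threshold value are illustrative placeholders, and the rule is not the validation metric proposed in the paper.

```python
# Minimal sketch of thresholded-SoftMax open-set rejection (illustrative only).
# Assumes a trained classifier over the known classes; the threshold tau is a
# placeholder and is NOT the validation metric proposed in the paper.
import torch
import torch.nn.functional as F

def predict_open_set(model, images, tau=0.5):
    """Return predicted known-class indices, or -1 for rejected (unknown) samples."""
    with torch.no_grad():
        logits = model(images)             # shape: (batch, num_known_classes)
        probs = F.softmax(logits, dim=1)
        conf, pred = probs.max(dim=1)      # maximum SoftMax probability per sample
        pred[conf < tau] = -1              # reject low-confidence samples as "unknown"
    return pred
```

Approaches that include negative samples in the training set typically keep a rejection rule of this form but train the network to assign low confidence to such samples.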
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z) - A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification [51.35500308126506]
Self-supervised learning (SSL) is a machine learning approach where the data itself provides supervision, eliminating the need for external labels.
We study how classification-based evaluation protocols for SSL correlate and how well they predict downstream performance on different dataset types.
arXiv Detail & Related papers (2024-07-16T23:17:36Z) - Large-Scale Evaluation of Open-Set Image Classification Techniques [1.1249583407496218]
Open-Set Classification (OSC) algorithms aim to maximize both closed and open-set recognition capabilities.
Recent studies have shown the utility of such algorithms on small-scale datasets, but limited experimentation makes it difficult to assess their performance in real-world problems.
arXiv Detail & Related papers (2024-06-13T13:43:01Z) - Generalized Category Discovery with Clustering Assignment Consistency [56.92546133591019]
Generalized category discovery (GCD) is a recently proposed open-world task.
We propose a co-training-based framework that encourages clustering consistency.
Our method achieves state-of-the-art performance on three generic benchmarks and three fine-grained visual recognition datasets.
arXiv Detail & Related papers (2023-10-30T00:32:47Z) - Open-set Recognition via Augmentation-based Similarity Learning [11.706887820422002]
We propose to detect unknowns (or unseen class samples) through learning pairwise similarities.
We call our method OPG (Open set recognition based on Pseudo unseen data Generation).
arXiv Detail & Related papers (2022-03-24T17:49:38Z) - Reconstruction guided Meta-learning for Few Shot Open Set Recognition [31.49168444631114]
We propose Reconstructing Exemplar-based Few-shot Open-set ClaSsifier (ReFOCS)
By using a novel exemplar reconstruction-based meta-learning strategy, ReFOCS streamlines FSOSR.
We show ReFOCS to outperform multiple state-of-the-art methods.
arXiv Detail & Related papers (2021-07-31T23:23:35Z) - Non-Exhaustive Learning Using Gaussian Mixture Generative Adversarial
Networks [3.040775019394542]
We propose a new online non-exhaustive learning model, namely, Non-Exhaustive Gaussian Mixture Generative Adversarial Networks (NE-GM-GAN).
Our proposed model synthesizes latent representation over a deep generative model, such as GAN, for incremental detection of instances of emerging classes in the test data.
arXiv Detail & Related papers (2021-06-28T00:20:22Z) - No Fear of Heterogeneity: Classifier Calibration for Federated Learning
with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z) - Conditional Variational Capsule Network for Open Set Recognition [64.18600886936557]
In open set recognition, a classifier has to detect unknown classes that are not known at training time.
Recently proposed Capsule Networks have been shown to outperform alternatives in many fields, particularly in image recognition.
In our proposal, during training, capsule features of the same known class are encouraged to match a pre-defined Gaussian, one for each class.
arXiv Detail & Related papers (2021-04-19T09:39:30Z) - Open Set Recognition with Conditional Probabilistic Generative Models [51.40872765917125]
We propose Conditional Probabilistic Generative Models (CPGM) for open set recognition.
CPGM can not only detect unknown samples but also classify known classes by forcing different latent features to approximate conditional Gaussian distributions.
Experiment results on multiple benchmark datasets reveal that the proposed method significantly outperforms the baselines.
arXiv Detail & Related papers (2020-08-12T06:23:49Z) - Conditional Gaussian Distribution Learning for Open Set Recognition [10.90687687505665]
We propose Conditional Gaussian Distribution Learning (CGDL) for open set recognition.
In addition to detecting unknown samples, this method can also classify known samples by forcing different latent features to approximate different Gaussian models (a generic sketch of this class-conditional Gaussian idea follows the list).
Experiments on several standard image datasets reveal that the proposed method significantly outperforms the baseline method and achieves new state-of-the-art results.
arXiv Detail & Related papers (2020-03-19T14:32:08Z)
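Several entries above (CPGM, CGDL, and the capsule-based model) share the idea of tying each known class to a Gaussian in a latent feature space and rejecting samples that no class Gaussian explains well. The sketch below illustrates that shared idea in a generic form; the feature representation, diagonal-covariance choice, and log-likelihood threshold are assumptions for illustration, and the code does not reproduce any of the listed methods.

```python
# Generic sketch: class-conditional Gaussians in a latent space (illustrative only).
# Not the implementation of CPGM, CGDL, or any other listed method; the feature
# extractor and the rejection threshold are placeholders.
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_gaussians(features, labels):
    """Fit one Gaussian (mean, diagonal covariance) per known class."""
    gaussians = {}
    for c in np.unique(labels):
        feats_c = features[labels == c]
        mean = feats_c.mean(axis=0)
        var = feats_c.var(axis=0) + 1e-6   # diagonal covariance, regularized
        gaussians[c] = multivariate_normal(mean=mean, cov=np.diag(var))
    return gaussians

def classify_or_reject(feature, gaussians, log_lik_threshold=-50.0):
    """Assign the best-fitting known class, or -1 if no class Gaussian explains the sample."""
    scores = {c: g.logpdf(feature) for c, g in gaussians.items()}
    best_class = max(scores, key=scores.get)
    return best_class if scores[best_class] > log_lik_threshold else -1
```

The log-likelihood threshold plays the same role as the SoftMax threshold in the earlier sketch: samples that fit no known-class Gaussian are rejected as unknown.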