Hyperspherical embedding for novel class classification
- URL: http://arxiv.org/abs/2102.03243v1
- Date: Fri, 5 Feb 2021 15:42:13 GMT
- Title: Hyperspherical embedding for novel class classification
- Authors: Rafael S. Pereira, Alexis Joly, Patrick Valduriez, Fabio Porto
- Abstract summary: We present a constraint-based approach applied to representations in the latent space under the normalized softmax loss.
We experimentally validate the proposed approach for the classification of unseen classes on different datasets using both metric learning and the normalized softmax loss.
Our results show that our proposed strategy not only can be trained efficiently on larger sets of classes, since it does not require pairwise learning, but also delivers better classification results than the metric learning strategies.
- Score: 1.5952956981784217
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models have become increasingly useful in many different industries. In the domain of image classification, convolutional neural networks have proven their ability to learn robust features for the closed set problem, as shown on many different datasets, such as MNIST, FASHIONMNIST, CIFAR10, CIFAR100, and IMAGENET. These approaches use deep neural networks with dense layers with softmax activation functions in order to learn features that can separate classes in a latent space. However, this traditional approach is not useful for identifying classes unseen in the training set, known as the open set problem. A similar problem occurs in scenarios involving learning on small data. To tackle both problems, few-shot learning has been proposed. In particular, metric learning learns features that obey constraints of a metric distance in the latent space in order to perform classification. However, while this approach proves to be useful for the open set problem, current implementations require pair-wise training, where both positive and negative examples of similar images are presented during the training phase, which limits the applicability of these approaches in large-data or large-class scenarios given the combinatorial nature of the possible inputs. In this paper, we present a constraint-based approach applied to the representations in the latent space under the normalized softmax loss, proposed by [18]. We experimentally validate the proposed approach for the classification of unseen classes on different datasets using both metric learning and the normalized softmax loss, in disjoint and joint scenarios. Our results show that our proposed strategy not only can be trained efficiently on larger sets of classes, since it does not require pairwise learning, but also yields better classification results than the metric learning strategies, surpassing their accuracy by a significant margin.
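As a rough illustration of the loss the abstract builds on, here is a minimal sketch of a normalized softmax (cosine softmax) classification head in PyTorch. It assumes a generic setup: the class name, feature dimension, and scale value are illustrative choices, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineSoftmaxHead(nn.Module):
    """Classification head for the normalized softmax loss.

    Both the feature vectors and the class-weight vectors are
    L2-normalized, so the logits are cosine similarities and every
    embedding lies on the unit hypersphere.
    """

    def __init__(self, feat_dim: int, num_classes: int, scale: float = 16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale  # temperature; a common choice, not taken from the paper

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        f = F.normalize(features, dim=1)     # project features onto the hypersphere
        w = F.normalize(self.weight, dim=1)  # project class weights onto the hypersphere
        return self.scale * (f @ w.t())      # cosine-similarity logits

# Training uses plain cross-entropy on the cosine logits, so there is no
# positive/negative pair mining and the cost grows linearly with the data,
# unlike pairwise metric learning.
head = CosineSoftmaxHead(feat_dim=128, num_classes=100)
feats = torch.randn(32, 128)  # stand-in for CNN backbone output
labels = torch.randint(0, 100, (32,))
loss = F.cross_entropy(head(feats), labels)
```

At test time, hyperspherical embeddings of this kind are typically applied to unseen classes by representing each novel class with the normalized mean of a few support embeddings and assigning queries to the nearest prototype by cosine similarity.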
Related papers
- Complementary Learning Subnetworks for Parameter-Efficient Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in terms of accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
- Maximally Compact and Separated Features with Regular Polytope Networks [22.376196701232388]
We show how to extract from CNN features the properties of maximum inter-class separability and maximum intra-class compactness.
We obtain features similar to what can be obtained with the well-known center loss [wen2016discriminative] and other similar approaches.
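As a hedged sketch of the fixed-classifier idea summarized above, the snippet below builds class anchors at the vertices of a regular simplex (one instance of a regular polytope) and keeps them frozen, so only the backbone is trained. The construction and the temperature are standard illustrative choices, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def simplex_anchors(num_classes: int) -> torch.Tensor:
    """Unit-norm vertices of a regular simplex, centered at the origin.

    Centering the standard basis of R^C gives C points whose pairwise
    cosine is -1/(C-1), the maximally separated equiangular arrangement.
    """
    eye = torch.eye(num_classes)
    centered = eye - eye.mean(dim=0, keepdim=True)
    return F.normalize(centered, dim=1)

class FixedSimplexHead(nn.Module):
    """Classifier whose weights are frozen at simplex vertices.

    Training the backbone against fixed, maximally separated anchors
    encourages inter-class separability and intra-class compactness.
    """

    def __init__(self, feat_dim: int, num_classes: int, scale: float = 10.0):
        super().__init__()
        self.proj = nn.Linear(feat_dim, num_classes, bias=False)  # trained with the backbone
        self.register_buffer("anchors", simplex_anchors(num_classes))  # never updated
        self.scale = scale  # illustrative temperature

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        z = F.normalize(self.proj(features), dim=1)
        return self.scale * (z @ self.anchors.t())
```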
arXiv Detail & Related papers (2023-01-15T15:20:57Z)
- Dominant Set-based Active Learning for Text Classification and its Application to Online Social Media [0.0]
We present a novel pool-based active learning method for training on a large unlabeled corpus with minimum annotation cost.
Our proposed method does not have any parameters to be tuned, making it dataset-independent.
Our method achieves higher performance than the state-of-the-art active learning strategies.
arXiv Detail & Related papers (2022-01-28T19:19:03Z)
- MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
We propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them.
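A rough sketch of the acquisition rule suggested by the summary above, assuming a PyTorch-style model split into a backbone and two auxiliary heads; all names here are hypothetical, and this omits the training objective that maximizes the heads' discrepancy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def discrepancy_scores(backbone: nn.Module,
                       head_a: nn.Module,
                       head_b: nn.Module,
                       unlabeled: torch.Tensor) -> torch.Tensor:
    """Score unlabeled samples by how much two classifier heads disagree.

    Samples lying near the tightened decision boundaries make the heads
    diverge, so a large gap between their softmax outputs marks points
    worth labeling.
    """
    feats = backbone(unlabeled)
    p_a = F.softmax(head_a(feats), dim=1)
    p_b = F.softmax(head_b(feats), dim=1)
    return (p_a - p_b).abs().sum(dim=1)  # per-sample L1 disagreement

# Acquisition step: query labels for the k most ambiguous samples, e.g.
# query_idx = discrepancy_scores(backbone, head_a, head_b, pool).topk(64).indices
```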
arXiv Detail & Related papers (2021-07-23T06:57:08Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first modern, precise analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
- Beyond cross-entropy: learning highly separable feature distributions for robust and accurate classification [22.806324361016863]
We propose a novel approach for training deep robust multiclass classifiers that provides adversarial robustness.
We show that the regularization of the latent space based on our approach yields excellent classification accuracy.
arXiv Detail & Related papers (2020-10-29T11:15:17Z)
- ReMarNet: Conjoint Relation and Margin Learning for Small-Sample Image Classification [49.87503122462432]
We introduce a novel neural network termed Relation-and-Margin learning Network (ReMarNet).
Our method assembles two networks with different backbones so as to learn features that perform well under both of the aforementioned classification mechanisms.
Experiments on four image datasets demonstrate that our approach is effective in learning discriminative features from a small set of labeled samples.
arXiv Detail & Related papers (2020-06-27T13:50:20Z)
- Few-Shot Open-Set Recognition using Meta-Learning [72.15940446408824]
The problem of open-set recognition is considered.
A new oPen sEt mEta LEaRning (PEELER) algorithm is introduced.
arXiv Detail & Related papers (2020-05-27T23:49:26Z)