Deep Simplex Classifier for Maximizing the Margin in Both Euclidean and
Angular Spaces
- URL: http://arxiv.org/abs/2212.11747v1
- Date: Thu, 22 Dec 2022 14:37:47 GMT
- Title: Deep Simplex Classifier for Maximizing the Margin in Both Euclidean and
Angular Spaces
- Authors: Hakan Cevikalp, Hasan Saribas
- Abstract summary: This paper introduces a novel classification loss that maximizes the margin in both the Euclidean and angular spaces at the same time.
The proposed method achieves state-of-the-art accuracy on open set recognition despite its simplicity.
- Score: 18.309720578916146
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The classification loss functions used in deep neural network classifiers can
be grouped into two categories based on whether they maximize the margin in
Euclidean or angular spaces. Methods that maximize the margin in Euclidean
spaces use Euclidean distances between sample vectors during classification,
whereas methods that maximize the margin in angular spaces use the cosine
similarity at the testing stage. This paper introduces
a novel classification loss that maximizes the margin in both the Euclidean and
angular spaces at the same time. This way, the Euclidean and cosine distances
produce similar, consistent results and complement each other, which
in turn improves accuracy. The proposed loss function forces the
samples of each class to cluster around the center that represents it. The
centers approximating the classes are chosen from the boundary of a hypersphere,
and the pairwise distances between class centers are all equal. This
restriction corresponds to choosing the centers from the vertices of a regular
simplex. The proposed loss function has no hyperparameters that must be set by
the user, so the method is very easy to apply to classical classification
problems. Moreover, since the class samples
are compactly clustered around their corresponding means, the proposed
classifier is also well suited to open set recognition problems, where test
samples can come from unknown classes not seen during training.
Experimental studies show that the proposed method achieves
state-of-the-art accuracy on open set recognition despite its simplicity.
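The equal-norm, equal-pairwise-distance center construction described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration (our sketch, not the authors' code): it builds the C vertices of a regular simplex on the unit hypersphere by centering and rescaling the standard basis, plus a toy squared-distance loss that pulls features toward their class centers.

```python
import numpy as np
from itertools import combinations

def simplex_vertices(num_classes: int) -> np.ndarray:
    """Vertices of a regular simplex, embedded in R^num_classes.

    Every vertex lies on the unit hypersphere, and all pairwise
    distances between vertices are equal.
    """
    c = num_classes
    centered = np.eye(c) - 1.0 / c            # subtract the mean of the basis vectors
    return np.sqrt(c / (c - 1.0)) * centered  # rescale each vertex to unit norm

def simplex_loss(features: np.ndarray, labels: np.ndarray,
                 centers: np.ndarray) -> float:
    """Mean squared Euclidean distance of each feature to its class center."""
    return float(np.mean(np.sum((features - centers[labels]) ** 2, axis=1)))

centers = simplex_vertices(4)
norms = np.linalg.norm(centers, axis=1)               # all equal to 1.0
pair_d = [np.linalg.norm(centers[i] - centers[j])
          for i, j in combinations(range(4), 2)]      # all pairwise distances equal
```

Because the centers are unit-norm and equidistant, shrinking the Euclidean distance of a feature to its center also shrinks the angle between them, which is the intuition behind the joint Euclidean/angular margin.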
Related papers
- A Generic Method for Fine-grained Category Discovery in Natural Language Texts [38.297873969795546]
We introduce a method that successfully detects fine-grained clusters of semantically similar texts guided by a novel objective function.
The method uses semantic similarities in a logarithmic space to guide sample distributions in the Euclidean space.
We also propose a centroid inference mechanism to support real-time applications.
arXiv Detail & Related papers (2024-06-18T23:27:46Z)
- Canonical Variates in Wasserstein Metric Space [16.668946904062032]
We employ the Wasserstein metric to measure distances between distributions, which are then used by distance-based classification algorithms.
Central to our investigation is dimension reduction within the Wasserstein metric space to enhance classification accuracy.
We introduce a novel approach grounded in the principle of maximizing Fisher's ratio, defined as the ratio of between-class variation to within-class variation.
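Fisher's ratio itself is easy to illustrate. The sketch below is hypothetical and uses plain Euclidean vectors rather than the Wasserstein-space objects studied in that paper; it computes the between-class variation divided by the within-class variation for a labeled sample.

```python
import numpy as np

def fisher_ratio(X: np.ndarray, y: np.ndarray) -> float:
    """Fisher's ratio: between-class variation over within-class variation."""
    overall_mean = X.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        between += len(Xc) * np.sum((mu_c - overall_mean) ** 2)
        within += np.sum((Xc - mu_c) ** 2)
    return between / within

rng = np.random.default_rng(0)
# two tight, well-separated 2-D classes -> large ratio
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
               rng.normal(5.0, 0.1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
r = fisher_ratio(X, y)
```

A projection that maximizes this quotient pushes class means apart while keeping each class compact, which is why it is a natural objective for distance-based classification.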
arXiv Detail & Related papers (2024-05-24T17:59:21Z)
- Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning [59.44422468242455]
We propose a novel method dubbed ShrinkMatch to learn uncertain samples.
For each uncertain sample, it adaptively seeks a shrunk class space, which merely contains the original top-1 class.
We then impose a consistency regularization between a pair of strongly and weakly augmented samples in the shrunk space to strive for discriminative representations.
arXiv Detail & Related papers (2023-08-13T14:05:24Z)
- Feature Selection using Sparse Adaptive Bottleneck Centroid-Encoder [1.2487990897680423]
We introduce a novel nonlinear model, Sparse Adaptive Bottleneck Centroid-Encoder (SABCE), for determining the features that discriminate between two or more classes.
The algorithm is applied to various real-world data sets, including high-dimensional biological, image, speech, and accelerometer sensor data.
arXiv Detail & Related papers (2023-06-07T21:37:21Z)
- Gradient Based Clustering [72.15857783681658]
We propose a general approach for distance based clustering, using the gradient of the cost function that measures clustering quality.
The approach is an iterative two step procedure (alternating between cluster assignment and cluster center updates) and is applicable to a wide range of functions.
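The alternating two-step procedure can be illustrated for the squared-distance cost. The sketch below is our illustration under that assumption, not the paper's implementation: step 1 assigns points to their nearest center, step 2 takes a gradient step on the clustering cost with respect to each center.

```python
import numpy as np

def gradient_clustering(X: np.ndarray, init_centers: np.ndarray,
                        lr: float = 0.1, iters: int = 100):
    """Alternate (1) nearest-center assignment with (2) a gradient step
    on the cost sum_i ||x_i - c_{a(i)}||^2 (gradient averaged per point)."""
    centers = init_centers.astype(float).copy()
    assign = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # step 1: assign each point to its nearest center
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(axis=1)
        # step 2: averaged gradient for center j is 2 * (c_j - mean of its points)
        for j in range(len(centers)):
            pts = X[assign == j]
            if len(pts):
                centers[j] -= lr * 2.0 * (centers[j] - pts.mean(axis=0))
    return centers, assign

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, (40, 2)),   # blob around (0, 0)
               rng.normal(5.0, 0.2, (40, 2))])  # blob around (5, 5)
centers, assign = gradient_clustering(X, X[[0, -1]])
```

With the squared-distance cost the gradient step contracts each center toward the mean of its assigned points, so for a small enough step size the procedure behaves like a damped k-means.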
arXiv Detail & Related papers (2022-02-01T19:31:15Z)
- Anomaly Clustering: Grouping Images into Coherent Clusters of Anomaly Types [60.45942774425782]
We introduce anomaly clustering, whose goal is to group data into coherent clusters of anomaly types.
This is different from anomaly detection, whose goal is to divide anomalies from normal data.
We present a simple yet effective clustering framework using patch-based pretrained deep embeddings and off-the-shelf clustering methods.
arXiv Detail & Related papers (2021-12-21T23:11:33Z)
- Hyperdimensional Computing for Efficient Distributed Classification with Randomized Neural Networks [5.942847925681103]
We study distributed classification, which can be employed in situations where data can neither be stored at a central location nor shared.
We propose a more efficient solution for distributed classification by making use of a lossy compression approach applied when sharing the local classifiers with other agents.
arXiv Detail & Related papers (2021-06-02T01:33:56Z)
- Deep Compact Polyhedral Conic Classifier for Open and Closed Set Recognition [17.86376652494798]
The proposed method has a nice interpretation using polyhedral conic function geometry.
The experimental results show that the proposed method typically outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2021-02-24T21:38:31Z)
- A Boundary Based Out-of-Distribution Classifier for Generalized Zero-Shot Learning [83.1490247844899]
Generalized Zero-Shot Learning (GZSL) is a challenging topic that has promising prospects in many realistic scenarios.
We propose a boundary based Out-of-Distribution (OOD) classifier which classifies the unseen and seen domains by only using seen samples for training.
We extensively validate our approach on five popular benchmark datasets including AWA1, AWA2, CUB, FLO and SUN.
arXiv Detail & Related papers (2020-08-09T11:27:19Z)
- Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z)
- Rethinking preventing class-collapsing in metric learning with margin-based losses [81.22825616879936]
Metric learning seeks embeddings where visually similar instances are close and dissimilar instances are apart.
Margin-based losses tend to project all samples of a class onto a single point in the embedding space.
We propose a simple modification to the embedding losses such that each sample selects its nearest same-class counterpart in a batch.
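A hypothetical sketch of that target-selection step: for each sample in a batch, pick the nearest embedding carrying the same label (rather than a single shared class point), assuming every class contributes at least two samples to the batch.

```python
import numpy as np

def nearest_same_class_targets(embeddings: np.ndarray,
                               labels: np.ndarray) -> np.ndarray:
    """For each embedding, return the index of its nearest same-class
    neighbour in the batch (assumes each class has >= 2 batch samples)."""
    d = ((embeddings[:, None] - embeddings[None, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)                # never pick yourself
    same = labels[:, None] == labels[None, :]  # same-class mask
    d[~same] = np.inf                          # restrict to same-class candidates
    return d.argmin(axis=1)

emb = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
labels = np.array([0, 0, 1, 1])
targets = nearest_same_class_targets(emb, labels)
# → [1, 0, 3, 2]: each sample pairs with the other member of its class
```

Pulling each sample toward a nearby same-class neighbour, instead of a single class point, is what prevents the embedding of a class from collapsing.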
arXiv Detail & Related papers (2020-06-09T09:59:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.