Minimum Variance Embedded Auto-associative Kernel Extreme Learning
Machine for One-class Classification
- URL: http://arxiv.org/abs/2011.12208v1
- Date: Tue, 24 Nov 2020 17:00:30 GMT
- Title: Minimum Variance Embedded Auto-associative Kernel Extreme Learning
Machine for One-class Classification
- Authors: Pratik K. Mishra, Chandan Gautam, Aruna Tiwari
- Abstract summary: VAAKELM is a novel extension of an auto-associative kernel extreme learning machine.
It embeds minimum variance information within its architecture and reduces the intra-class variance.
It follows a reconstruction-based approach to one-class classification and minimizes the reconstruction error.
- Score: 1.4146420810689422
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One-class classification (OCC) needs samples from only a single class to
train the classifier. Recently, an auto-associative kernel extreme learning
machine was developed for the OCC task. This paper introduces a novel extension
of this classifier by embedding minimum variance information within its
architecture and is referred to as VAAKELM. The minimum variance embedding
forces the network output weights to focus on regions of low variance and
reduces the intra-class variance. This leads to a better separation of target
samples and outliers, resulting in an improvement in the generalization
performance of the classifier. The proposed classifier follows a
reconstruction-based approach to OCC and minimizes the reconstruction error by
using the kernel extreme learning machine as the base classifier. It uses the
deviation in reconstruction error to identify the outliers. We perform
experiments on 15 small-size and 10 medium-size one-class benchmark datasets to
demonstrate the efficiency of the proposed classifier. We compare the results
with 13 existing one-class classifiers by considering the mean F1 score as the
comparison metric. The experimental results show that VAAKELM consistently
performs better than the existing classifiers, making it a viable alternative
for the OCC task.
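To make the reconstruction-based decision rule concrete, here is a minimal Python sketch of an auto-associative kernel ELM with a variance-style penalty. The RBF kernel choice, the centering-based variance term weighted by tau, and the percentile threshold are our assumptions for illustration; the paper's exact VAAKELM formulation may differ.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF (Gaussian) kernel between the rows of A and B.
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

class OneClassAAKELM:
    """Sketch of an auto-associative kernel ELM with a minimum-variance
    penalty; the paper's exact VAAKELM optimization may differ."""

    def __init__(self, C=1.0, tau=0.1, gamma=1.0, percentile=95):
        self.C, self.tau, self.gamma, self.percentile = C, tau, gamma, percentile

    def fit(self, X):
        self.X_train = X
        n = len(X)
        K = rbf_kernel(X, X, self.gamma)
        # Centering matrix: a simple stand-in for the intra-class
        # variance (scatter) term that the minimum-variance embedding adds.
        H = np.eye(n) - np.ones((n, n)) / n
        # Auto-association: regress the kernel expansion back onto X itself.
        self.beta = np.linalg.solve(np.eye(n) / self.C + self.tau * H + K, X)
        # Threshold on the training reconstruction errors.
        errors = np.linalg.norm(K @ self.beta - X, axis=1)
        self.threshold = np.percentile(errors, self.percentile)
        return self

    def predict(self, X_new):
        K_new = rbf_kernel(X_new, self.X_train, self.gamma)
        errors = np.linalg.norm(K_new @ self.beta - X_new, axis=1)
        return np.where(errors <= self.threshold, 1, -1)  # 1 = target, -1 = outlier

X = np.random.rand(50, 5)                       # target-class samples only
clf = OneClassAAKELM().fit(X)
print(clf.predict(np.vstack([X[:3], np.random.rand(3, 5) + 5])))
```

Samples whose reconstruction error deviates beyond the threshold learned on target data are flagged as outliers, mirroring the deviation-based rule described in the abstract.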
Related papers
- Adaptive Margin Global Classifier for Exemplar-Free Class-Incremental Learning [3.4069627091757178]
Existing methods mainly focus on handling biased learning.
We introduce a Distribution-Based Global Classifier (DBGC) to avoid bias factors in existing methods, such as data imbalance and sampling.
More importantly, the compromised distributions of old classes are simulated via a simple operation, Variance Enlarging (VE).
This loss is proven equivalent to an Adaptive Margin Softmax Cross Entropy (AMarX) loss.
arXiv Detail & Related papers (2024-09-20T07:07:23Z)
- Parametric Classification for Generalized Category Discovery: A Baseline Study [70.73212959385387]
Generalized Category Discovery (GCD) aims to discover novel categories in unlabelled datasets using knowledge learned from labelled samples.
We investigate the failure of parametric classifiers, verify the effectiveness of previous design choices when high-quality supervision is available, and identify unreliable pseudo-labels as a key problem.
We propose a simple yet effective parametric classification method that benefits from entropy regularisation, achieves state-of-the-art performance on multiple GCD benchmarks and shows strong robustness to unknown class numbers.
arXiv Detail & Related papers (2022-11-21T18:47:11Z)
- Prediction Calibration for Generalized Few-shot Semantic Segmentation [101.69940565204816]
Generalized Few-shot Semantic Segmentation (GFSS) aims to segment each image pixel into either base classes with abundant training examples or novel classes with only a handful of (e.g., 1-5) training images per class.
We build a cross-attention module that guides the classifier's final prediction using the fused multi-level features.
Our PCN outperforms the state-of-the-art alternatives by large margins.
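As a rough illustration of the cross-attention step mentioned above, the sketch below computes scaled dot-product attention in which query features attend to fused multi-level features; all shapes and names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cross_attention(Q, K, V):
    # Scaled dot-product attention: queries from one stream attend
    # to keys/values from another (here, fused multi-level features).
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (nq, nk) similarities
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)      # row-wise softmax
    return weights @ V                             # attended features

query = np.random.rand(4, 64)    # e.g., per-pixel query embeddings
fused = np.random.rand(16, 64)   # e.g., fused multi-level features
out = cross_attention(query, fused, fused)
```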
arXiv Detail & Related papers (2022-10-15T13:30:12Z)
- Exemplar-free Class Incremental Learning via Discriminative and Comparable One-class Classifiers [12.121885324463388]
We propose a new framework, named Discriminative and Comparable One-class classifiers for Incremental Learning (DisCOIL).
DisCOIL follows the basic principle of POC, but it adopts variational auto-encoders (VAE) instead of other well-established one-class classifiers (e.g., deep SVDD).
With this advantage, DisCOIL trains a new-class VAE in contrast with the old-class VAEs, which forces the new-class VAE to reconstruct better for new-class samples but worse for the old-class pseudo samples.
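The contrast between the new-class VAE and old-class pseudo samples can be sketched as a hinge-style objective; the exact loss used by DisCOIL is not given here, so the margin form below is an assumption.

```python
import numpy as np

def contrastive_recon_loss(vae, x_new, x_old_pseudo, margin=1.0):
    """Hinge-style sketch: keep the new-class VAE's reconstruction error
    low on new-class samples and at least `margin` higher on pseudo
    samples replayed from old-class VAEs (loss form is an assumption)."""
    err_new = np.mean((vae(x_new) - x_new) ** 2)
    err_old = np.mean((vae(x_old_pseudo) - x_old_pseudo) ** 2)
    return err_new + max(0.0, margin - (err_old - err_new))

identity_vae = lambda x: x * 0.9          # toy stand-in for a trained VAE
x_new, x_old = np.random.rand(8, 4), np.random.rand(8, 4)
print(contrastive_recon_loss(identity_vae, x_new, x_old))
```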
arXiv Detail & Related papers (2022-01-05T07:16:34Z)
- Does Adversarial Oversampling Help us? [10.210871872870737]
We propose a three-player adversarial game-based end-to-end method to handle class imbalance in datasets.
Rather than adversarial minority oversampling, we propose an adversarial oversampling (AO) and a data-space oversampling (DO) approach.
The effectiveness of our proposed method has been validated with high-dimensional, highly imbalanced and large-scale multi-class datasets.
arXiv Detail & Related papers (2021-08-20T05:43:17Z)
- Multiple Classifiers Based Maximum Classifier Discrepancy for Unsupervised Domain Adaptation [25.114533037440896]
We propose to extend the structure of two classifiers to multiple classifiers to further boost its performance.
We demonstrate that, on average, adopting the structure of three classifiers normally yields the best performance as a trade-off between accuracy and efficiency.
arXiv Detail & Related papers (2021-08-02T03:00:13Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
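A hedged sketch of the calibration idea: estimate per-class Gaussians over features, sample virtual representations, and refit only the classifier head. The calibrate_head helper below is hypothetical and uses a logistic-regression head as a stand-in.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate_head(feats, labels, n_virtual=200, seed=0):
    """Fit per-class Gaussians over features, sample virtual
    representations, and refit only the classifier head."""
    rng = np.random.default_rng(seed)
    Xv, yv = [], []
    for c in np.unique(labels):
        Z = feats[labels == c]
        mu = Z.mean(axis=0)
        cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(Z.shape[1])
        Xv.append(rng.multivariate_normal(mu, cov, n_virtual))
        yv.append(np.full(n_virtual, c))
    return LogisticRegression(max_iter=1000).fit(np.vstack(Xv), np.concatenate(yv))

feats = np.random.rand(300, 16)            # stand-in for penultimate features
labels = np.random.randint(0, 3, size=300)
head = calibrate_head(feats, labels)
```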
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
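The two-stage recipe can be sketched in a few lines: a frozen feature extractor followed by a shallow one-class classifier. The encode function below is a hypothetical placeholder for a self-supervised encoder.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def encode(images):
    # Placeholder for a frozen self-supervised encoder (hypothetical);
    # in practice this would be a network trained on the one-class data.
    return images.reshape(len(images), -1)

X_train = np.random.rand(100, 32, 32)         # one-class (target) data only
X_test = np.random.rand(20, 32, 32)

# Stage 1: representations; Stage 2: a shallow one-class classifier on top.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(encode(X_train))
print(clf.predict(encode(X_test)))            # +1 = inlier, -1 = outlier
```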
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- Predicting Classification Accuracy When Adding New Unobserved Classes [8.325327265120283]
We study how a classifier's performance can be used to extrapolate its expected accuracy on a larger, unobserved set of classes.
We formulate a robust neural-network-based algorithm, "CleaneX", which learns to estimate the accuracy of such classifiers on arbitrarily large sets of classes.
arXiv Detail & Related papers (2020-10-28T14:37:25Z)
- Learning by Minimizing the Sum of Ranked Range [58.24935359348289]
We introduce the sum of ranked range (SoRR) as a general approach to form learning objectives.
A ranked range is a consecutive sequence of sorted values of a set of real numbers.
We explore two applications in machine learning of the minimization of the SoRR framework, namely the AoRR aggregate loss for binary classification and the TKML individual loss for multi-label/multi-class classification.
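By that definition, a sum of ranked range can be computed by sorting and slicing; the sketch below follows the stated definition, though its exact indexing convention is an assumption.

```python
import numpy as np

def sorr(values, k, m):
    """Sum of ranked range: sum of the (k+1)-th through m-th largest
    values (a sketch of the stated definition; the paper's indexing
    convention may differ)."""
    s = np.sort(values)[::-1]       # sort descending
    return s[k:m].sum()

losses = np.array([0.9, 0.1, 0.4, 0.7, 0.3])
print(sorr(losses, 1, 4))           # drops the single largest loss and
                                    # sums the next three: 0.7+0.4+0.3
```

Used as an aggregate loss, this kind of slicing can discard the few largest per-sample losses (often outliers) while still emphasizing hard examples.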
arXiv Detail & Related papers (2020-10-05T01:58:32Z)
- Open-Set Recognition with Gaussian Mixture Variational Autoencoders [91.3247063132127]
At inference time, open-set classification either classifies a sample into one of the known classes from training or rejects it as an unknown class.
We train our model to cooperatively learn reconstruction and perform class-based clustering in the latent space.
Our model achieves more accurate and robust open-set classification results, with an average F1 improvement of 29.5%.
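The classify-or-reject rule at inference can be sketched as a confidence threshold over known-class scores; the threshold value and score type below are our assumptions, not the paper's exact criterion.

```python
import numpy as np

def open_set_decide(class_scores, threshold=0.5, unknown=-1):
    """class_scores: (n_samples, n_known_classes) likelihoods/posteriors.
    Accept the best known class only if its score clears the threshold;
    otherwise reject the sample as unknown."""
    best = class_scores.argmax(axis=1)
    conf = class_scores.max(axis=1)
    return np.where(conf >= threshold, best, unknown)

scores = np.array([[0.9, 0.05, 0.05],   # confident -> class 0
                   [0.4, 0.35, 0.25]])  # ambiguous -> reject (-1)
print(open_set_decide(scores))          # [ 0 -1]
```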
arXiv Detail & Related papers (2020-06-03T01:15:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.