Branched Variational Autoencoder Classifiers
- URL: http://arxiv.org/abs/2401.02526v1
- Date: Thu, 4 Jan 2024 20:29:44 GMT
- Title: Branched Variational Autoencoder Classifiers
- Authors: Ahmed Salah and David Yevick
- Abstract summary: This paper introduces a modified variational autoencoder (VAE) that contains an additional neural network branch.
As a result, the latent space distributions of the input classes are separated and ordered, thereby enhancing the classification accuracy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a modified variational autoencoder (VAE) that contains
an additional neural network branch. The resulting branched VAE (BVAE)
contributes a classification component based on the class labels to the total
loss and therefore imparts categorical information to the latent
representation. As a result, the latent space distributions of the input
classes are separated and ordered, thereby enhancing the classification
accuracy. The degree of improvement is quantified by numerical calculations
employing the benchmark MNIST dataset for both unrotated and rotated digits.
The proposed technique is then compared with, and subsequently incorporated into, a VAE
with fixed output distributions. This procedure is found to yield improved
performance for a wide range of output distributions.
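The total loss described in the abstract — reconstruction plus KL divergence plus a classification term contributed by the added branch — can be sketched as follows. This is an illustrative NumPy sketch, not the authors' code: the function name, the Bernoulli reconstruction term, and the weighting factors `beta` and `gamma` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def branched_vae_loss(x, x_recon, mu, logvar, class_logits, labels,
                      beta=1.0, gamma=1.0):
    """Total loss of a branched VAE (BVAE): reconstruction + KL + classification.

    The extra cross-entropy term from the classifier branch is what injects
    label information into the latent representation.
    """
    eps = 1e-7
    # Bernoulli reconstruction negative log-likelihood (e.g. binarized MNIST pixels)
    recon = -np.sum(x * np.log(x_recon + eps) + (1 - x) * np.log(1 - x_recon + eps))
    # KL divergence between q(z|x) = N(mu, diag(exp(logvar))) and N(0, I)
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
    # Cross-entropy of the classification branch (stable log-softmax over logits)
    z = class_logits - class_logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -np.sum(log_probs[np.arange(len(labels)), labels])
    return recon + beta * kl + gamma * ce

# Toy batch: 4 "images" of 16 binary pixels, 2-D latent, 3 classes
x = rng.random((4, 16)).round()
x_recon = np.clip(rng.random((4, 16)), 1e-3, 1 - 1e-3)
mu, logvar = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))
class_logits = rng.normal(size=(4, 3))
labels = np.array([0, 2, 1, 0])
loss = branched_vae_loss(x, x_recon, mu, logvar, class_logits, labels)
```

Setting `gamma=0` recovers an ordinary VAE loss; the classification term is non-negative, so the branch only adds pressure to separate classes in the latent space.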
Related papers
- Explaining Cross-Domain Recognition with Interpretable Deep Classifier [100.63114424262234]
Interpretable Deep Classifier (IDC) learns the nearest source samples of a target sample as evidence upon which the classifier makes the decision.
Our IDC leads to a more explainable model with almost no accuracy degradation and effectively calibrates classification for optimum reject options.
arXiv Detail & Related papers (2022-11-15T15:58:56Z)
- Ensemble Classifier Design Tuned to Dataset Characteristics for Network Intrusion Detection [0.0]
Two new algorithms are proposed to address the class overlap issue in the dataset.
The proposed design is evaluated for both binary and multi-category classification.
arXiv Detail & Related papers (2022-05-08T21:06:42Z)
- On the rate of convergence of a classifier based on a Transformer encoder [55.41148606254641]
The rate of convergence of the misclassification probability of the classifier towards the optimal misclassification probability is analyzed.
It is shown that this classifier is able to circumvent the curse of dimensionality provided the aposteriori probability satisfies a suitable hierarchical composition model.
arXiv Detail & Related papers (2021-11-29T14:58:29Z)
- Optimising for Interpretability: Convolutional Dynamic Alignment Networks [108.83345790813445]
We introduce a new family of neural network models called Convolutional Dynamic Alignment Networks (CoDA Nets).
Their core building blocks are Dynamic Alignment Units (DAUs), which are optimised to transform their inputs with dynamically computed weight vectors that align with task-relevant patterns.
CoDA Nets model the classification prediction through a series of input-dependent linear transformations, allowing for linear decomposition of the output into individual input contributions.
arXiv Detail & Related papers (2021-09-27T12:39:46Z)
- Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization [89.73665256847858]
We show that out-of-distribution performance is strongly correlated with in-distribution performance for a wide range of models and distribution shifts.
Specifically, we demonstrate strong correlations between in-distribution and out-of-distribution performance on variants of CIFAR-10 & ImageNet.
We also investigate cases where the correlation is weaker, for instance some synthetic distribution shifts from CIFAR-10-C and the tissue classification dataset Camelyon17-WILDS.
arXiv Detail & Related papers (2021-07-09T19:48:23Z)
- Consistency Regularization for Variational Auto-Encoders [14.423556966548544]
Variational auto-encoders (VAEs) are a powerful approach to unsupervised learning.
We propose a regularization method to enforce consistency in VAEs.
arXiv Detail & Related papers (2021-05-31T10:26:32Z)
- Convolutional Dynamic Alignment Networks for Interpretable Classifications [108.83345790813445]
We introduce a new family of neural network models called Convolutional Dynamic Alignment Networks (CoDA-Nets).
Their core building blocks are Dynamic Alignment Units (DAUs), which linearly transform their input with weight vectors that dynamically align with task-relevant patterns.
CoDA-Nets model the classification prediction through a series of input-dependent linear transformations, allowing for linear decomposition of the output into individual input contributions.
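The linear-decomposition property mentioned above can be illustrated with a toy sketch. This is not the CoDA-Net architecture itself (which builds the weights from Dynamic Alignment Units); it only shows the generic principle that any model computing `W(x) @ x` with an input-dependent weight matrix decomposes exactly into per-feature contributions.

```python
import numpy as np

def input_dependent_linear(x, weight_fn):
    """Sketch of an input-dependent linear model: the effective weight matrix
    W(x) depends on the input, so the output W(x) @ x decomposes exactly into
    per-input-feature contributions W(x) * x (one column per feature)."""
    W = weight_fn(x)                    # shape (num_classes, num_features)
    contributions = W * x               # per-feature contribution to each logit
    logits = contributions.sum(axis=1)  # identical to W @ x
    return logits, contributions

# Toy input and a made-up dynamic weight function (illustrative only)
x = np.array([1.0, -2.0, 0.5])
weight_fn = lambda v: np.tanh(np.outer(np.arange(1.0, 4.0), v))
logits, contribs = input_dependent_linear(x, weight_fn)
```

Because the contributions sum exactly to the logits, each input feature's share of the prediction can be read off directly, which is the interpretability property the paper exploits.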
arXiv Detail & Related papers (2021-03-31T18:03:53Z)
- Pairwise Supervised Hashing with Bernoulli Variational Auto-Encoder and Self-Control Gradient Estimator [62.26981903551382]
Variational auto-encoders (VAEs) with binary latent variables provide state-of-the-art performance in terms of precision for document retrieval.
We propose a pairwise loss function with discrete latent VAE to reward within-class similarity and between-class dissimilarity for supervised hashing.
This new semantic hashing framework achieves superior performance compared to the state-of-the-arts.
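One plausible form of such a pairwise objective over binary codes can be sketched as below. This is an illustration of the general idea (rewarding within-class similarity and between-class dissimilarity in Hamming distance), not the paper's exact discrete-VAE loss, which is optimized with a gradient estimator; the `margin` parameter is an assumption.

```python
import numpy as np

def pairwise_hash_loss(codes, labels, margin=2.0):
    """Illustrative pairwise loss for supervised hashing: pull binary codes of
    same-class pairs together and push different-class pairs at least
    `margin` apart, both measured in Hamming distance."""
    n = len(codes)
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.sum(codes[i] != codes[j])  # Hamming distance between codes
            if labels[i] == labels[j]:
                loss += d                      # penalize within-class distance
            else:
                loss += max(0.0, margin - d)   # penalize close different-class pairs
    return loss

# Toy 4-bit codes: first two documents share a class and a code
codes = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [1, 0, 0, 1]])
labels = [0, 0, 1]
loss = pairwise_hash_loss(codes, labels)
```

With identical codes for the same class and well-separated codes across classes, the loss reaches zero, which is the configuration the supervised objective drives the latent codes toward.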
arXiv Detail & Related papers (2020-05-21T06:11:33Z)
- Diversity-Aware Weighted Majority Vote Classifier for Imbalanced Data [1.2944868613449219]
We propose a diversity-aware ensemble learning based algorithm, DAMVI, to deal with imbalanced binary classification tasks.
We show efficiency of the proposed approach with respect to state-of-art models on predictive maintenance task, credit card fraud detection, webpage classification and medical applications.
arXiv Detail & Related papers (2020-04-16T11:27:50Z)
- GIM: Gaussian Isolation Machines [40.7916016364212]
In many cases, neural network classifiers are exposed to input data that lies outside their training distribution.
We present a novel hybrid (generative-discriminative) classifier aimed at solving the problem arising when OOD data is encountered.
The proposed GIM's novelty lies in its discriminative performance and generative capabilities, a combination of characteristics not usually seen in a single classifier.
arXiv Detail & Related papers (2020-02-06T09:51:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.