Latent Vector Expansion using Autoencoder for Anomaly Detection
- URL: http://arxiv.org/abs/2201.01416v1
- Date: Wed, 5 Jan 2022 02:28:38 GMT
- Title: Latent Vector Expansion using Autoencoder for Anomaly Detection
- Authors: UJu Gim, YeongHyeon Park
- Abstract summary: We use the features of the autoencoder to train latent vectors from low to high dimensionality.
We propose a latent vector expansion autoencoder model that improves classification performance on imbalanced data.
- Score: 1.370633147306388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning methods can classify various kinds of unstructured
data, such as images, language, and voice, given as input. As the task of
classifying anomalies becomes more important in the real world, various
methods exist for classifying data collected in the real world with deep
learning. Among them, two representative approaches are extracting and
learning the main features by transferring from pre-trained models, and
learning an autoencoder-based structure only with normal data and classifying
inputs as abnormal through a threshold value. However, if the dataset is
imbalanced, even state-of-the-art models do not achieve good performance.
This can be addressed by augmenting the normal and abnormal features of
imbalanced data into features with a strong distinction. We use the features
of the autoencoder to train latent vectors from low to high dimensionality,
and we train normal and abnormal data as features with a strong distinction
among the features of the imbalanced data. We propose a latent vector
expansion autoencoder model that improves classification performance on
imbalanced data. The proposed method shows a performance improvement over
the basic autoencoder on an imbalanced anomaly dataset.
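To make the abstract's two ingredients concrete, here is a minimal PyTorch sketch (not from the paper): a baseline autoencoder trained only on normal data that flags samples whose reconstruction error exceeds a threshold, plus one plausible reading of "latent vector expansion", in which encoders with latent sizes growing from low to high dimensionality are trained and their latent vectors concatenated into a more distinguishable feature. The architecture, latent sizes, synthetic data, and 99th-percentile threshold are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    """Small fully connected autoencoder with a configurable latent size."""
    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, in_dim))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def train_ae(model, x, epochs=200, lr=1e-3):
    """Fit the autoencoder to reconstruct x (full-batch, for brevity)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon, _ = model(x)
        loss = nn.functional.mse_loss(recon, x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

torch.manual_seed(0)
x_normal = torch.randn(512, 20)        # stand-in for normal training data
x_test = torch.randn(64, 20) + 3.0     # stand-in for shifted (anomalous) data

# Baseline: train on normal data only; threshold the reconstruction error.
ae = train_ae(AE(20, 8), x_normal)
with torch.no_grad():
    err_train = ((ae(x_normal)[0] - x_normal) ** 2).mean(dim=1)
    threshold = err_train.quantile(0.99)   # assumed 99th-percentile cut-off
    err_test = ((ae(x_test)[0] - x_test) ** 2).mean(dim=1)
print("fraction flagged anomalous:", (err_test > threshold).float().mean().item())

# Assumed "latent vector expansion": latent sizes grow from low to high
# dimensionality, and the latent vectors are concatenated as an expanded feature.
latent_dims = [2, 8, 32]
encoders = [train_ae(AE(20, d), x_normal) for d in latent_dims]
with torch.no_grad():
    expanded = torch.cat([enc(x_test)[1] for enc in encoders], dim=1)
print("expanded feature shape:", tuple(expanded.shape))  # (64, 42)
```

Under this reading, the expanded latent vector would feed a downstream classifier on the imbalanced data; the paper's exact expansion mechanism may differ.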
Related papers
- Approaching Metaheuristic Deep Learning Combos for Automated Data Mining [0.5419570023862531]
This work proposes a means of combining meta-heuristic methods with conventional classifiers and neural networks in order to perform automated data mining.
Experiments on the MNIST dataset for handwritten digit recognition were performed.
It was empirically observed that validation accuracy on a ground-truth-labeled dataset is inadequate for correcting the labels of other, previously unseen data instances.
arXiv Detail & Related papers (2024-10-16T10:28:22Z)
- Anomalous Sound Detection Using a Binary Classification Model and Class Centroids [47.856367556856554]
We propose a binary classification model that is developed using not only normal data but also outlier data from other domains as pseudo-anomalous sound data (a hedged sketch of this outlier-exposure idea follows the list below).
We also investigate the effectiveness of additionally using anomalous sound data for further improving the binary classification model.
arXiv Detail & Related papers (2021-06-11T03:35:06Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Meta-learning One-class Classifiers with Eigenvalue Solvers for Supervised Anomaly Detection [55.888835686183995]
We propose a neural network-based meta-learning method for supervised anomaly detection.
We experimentally demonstrate that the proposed method achieves better performance than existing anomaly detection and few-shot learning methods.
arXiv Detail & Related papers (2021-03-01T01:43:04Z)
- Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human-readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
arXiv Detail & Related papers (2021-01-21T01:46:36Z)
- Learning from Incomplete Features by Simultaneous Training of Neural Networks and Sparse Coding [24.3769047873156]
This paper addresses the problem of training a classifier on a dataset with incomplete features.
We assume that different subsets of features (random or structured) are available at each data instance.
A new supervised learning method is developed to train a general classifier, using only a subset of features per sample.
arXiv Detail & Related papers (2020-11-28T02:20:39Z)
- Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
- Initial Classifier Weights Replay for Memoryless Class Incremental Learning [11.230170401360633]
Incremental Learning (IL) is useful when artificial systems need to deal with streams of data and do not have access to all data at all times.
We propose a different approach based on a vanilla fine-tuning backbone.
We conduct a thorough evaluation with four public datasets in a memoryless incremental learning setting.
arXiv Detail & Related papers (2020-08-31T16:18:12Z)
- Self-Attentive Classification-Based Anomaly Detection in Unstructured Logs [59.04636530383049]
We propose Logsy, a classification-based method to learn log representations.
We show an average improvement of 0.25 in the F1 score, compared to the previous methods.
arXiv Detail & Related papers (2020-08-21T07:26:55Z)
- Establishing strong imputation performance of a denoising autoencoder in a wide range of missing data problems [0.0]
We develop a consistent framework for both training and imputation.
We benchmarked the results against state-of-the-art imputation methods.
The developed autoencoder obtained the smallest error for all ranges of initial data corruption.
arXiv Detail & Related papers (2020-04-06T12:00:30Z)
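For the binary-classification entry above, here is a hedged sketch of the outlier-exposure idea: in-domain normal data is labeled class 0 and outlier data from other domains is relabeled as pseudo-anomalous class 1. The network, shapes, and synthetic data are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Illustrative stand-ins: in-domain normal data vs. other-domain outliers
# that are relabeled as pseudo-anomalies (the outlier-exposure idea).
x_normal = torch.randn(256, 20)
x_outlier = torch.randn(256, 20) + 2.0
x = torch.cat([x_normal, x_outlier])
y = torch.cat([torch.zeros(256, 1), torch.ones(256, 1)])

# Small binary classifier trained with logistic loss on both classes.
clf = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
for _ in range(200):
    loss = nn.functional.binary_cross_entropy_with_logits(clf(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time the sigmoid output serves directly as an anomaly score.
anomaly_score = torch.sigmoid(clf(torch.randn(4, 20)))
```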