Anomaly Detection With Partitioning Overfitting Autoencoder Ensembles
- URL: http://arxiv.org/abs/2009.02755v8
- Date: Tue, 28 Sep 2021 09:14:49 GMT
- Title: Anomaly Detection With Partitioning Overfitting Autoencoder Ensembles
- Authors: Boris Lorbeer, Max Botler
- Abstract summary: We propose POTATOES (Partitioning OverfiTting AuTOencoder EnSemble), a new method for unsupervised outlier detection (UOD).
The idea is to not regularize at all, but to rather randomly partition the data into sufficiently many equally sized parts, overfit each part with its own autoencoder, and to use the maximum over all autoencoder reconstruction errors as the anomaly score.
For reproducibility, the code is made available on GitHub so the reader can recreate the results in this paper and apply the method to other autoencoders and datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose POTATOES (Partitioning OverfiTting AuTOencoder
EnSemble), a new method for unsupervised outlier detection (UOD). More
precisely, given any autoencoder for UOD, this technique can be used to improve
its accuracy while at the same time removing the burden of tuning its
regularization. The idea is to not regularize at all, but to rather randomly
partition the data into sufficiently many equally sized parts, overfit each
part with its own autoencoder, and to use the maximum over all autoencoder
reconstruction errors as the anomaly score. We apply our model to various
realistic datasets and show that if the set of inliers is dense enough, our
method indeed improves the UOD performance of a given autoencoder
significantly. For reproducibility, the code is made available on github so the
reader can recreate the results in this paper as well as apply the method to
other autoencoders and datasets.
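The core algorithm is compact enough to sketch in code. The following is a minimal illustration of the idea, not the authors' released implementation: the partition count k, the network shape, and the use of scikit-learn's MLPRegressor as a stand-in autoencoder are assumptions made for brevity.

```python
# Minimal sketch of the POTATOES idea (assumptions: k, network shape,
# MLPRegressor as autoencoder). Randomly partition the data into k
# equally sized parts, overfit one autoencoder per part, and score each
# point by its maximum reconstruction error over the ensemble.
import numpy as np
from sklearn.neural_network import MLPRegressor

def potatoes_scores(X, k=10, seed=0):
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(len(X)), k)  # k equal parts
    errors = np.empty((k, len(X)))
    for i, part in enumerate(parts):
        # Deliberately overfit: no regularization (alpha=0), long training.
        ae = MLPRegressor(hidden_layer_sizes=(2,), alpha=0.0,
                          max_iter=5000, random_state=seed)
        ae.fit(X[part], X[part])  # autoencoder: reconstruct the input
        recon = ae.predict(X)
        errors[i] = np.mean((X - recon) ** 2, axis=1)
    # Anomaly score = maximum reconstruction error over all autoencoders.
    return errors.max(axis=0)

# Toy usage: dense inliers plus a few off-distribution points.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (500, 5)), rng.normal(6, 1, (10, 5))])
scores = potatoes_scores(X, k=10)
print("mean inlier score:", scores[:500].mean())
print("mean outlier score:", scores[500:].mean())
```

The intuition behind the max: an outlier lands in only one partition, so the other k-1 overfitted autoencoders never see it and reconstruct it poorly, driving its maximum error up, while a point in a sufficiently dense inlier region is reconstructed well by every autoencoder in the ensemble.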
Related papers
- Are We Using Autoencoders in a Wrong Way? [3.110260251019273]
Autoencoders are used for dimensionality reduction, anomaly detection and feature extraction.
We revisited the standard training of the undercomplete Autoencoder, modifying the shape of the latent space.
We also explored the behaviour of the latent space in the case of reconstruction of a random sample from the whole dataset.
arXiv Detail & Related papers (2023-09-04T11:22:43Z) - UNFUSED: UNsupervised Finetuning Using SElf supervised Distillation [53.06337011259031]
We introduce UnFuSeD, a novel approach to leverage self-supervised learning for audio classification.
We use the encoder to generate pseudo-labels for unsupervised fine-tuning before the actual fine-tuning step.
UnFuSeD achieves state-of-the-art results on the LAPE Benchmark, significantly outperforming all our baselines.
arXiv Detail & Related papers (2023-03-10T02:43:36Z) - String-based Molecule Generation via Multi-decoder VAE [56.465033997245776]
We investigate the problem of string-based molecular generation via variational autoencoders (VAEs).
We propose a simple, yet effective idea to improve the performance of VAE for the task.
In our experiments, the proposed VAE model particularly performs well for generating a sample from out-of-domain distribution.
arXiv Detail & Related papers (2022-08-23T03:56:30Z) - When Counting Meets HMER: Counting-Aware Network for Handwritten Mathematical Expression Recognition [57.51793420986745]
We propose an unconventional network for handwritten mathematical expression recognition (HMER) named Counting-Aware Network (CAN).
We design a weakly-supervised counting module that can predict the number of each symbol class without the symbol-level position annotations.
Experiments on the benchmark datasets for HMER validate that both joint optimization and counting results are beneficial for correcting the prediction errors of encoder-decoder models.
arXiv Detail & Related papers (2022-07-23T08:39:32Z) - Hyperdecoders: Instance-specific decoders for multi-task NLP [9.244884318445413]
We investigate input-conditioned hypernetworks for multi-tasking in NLP.
We generate parameter-efficient adaptations for a decoder using a hypernetwork conditioned on the output of an encoder.
arXiv Detail & Related papers (2022-03-15T22:39:53Z) - Neural Distributed Source Coding [59.630059301226474]
We present a framework for lossy DSC that is agnostic to the correlation structure and can scale to high dimensions.
We evaluate our method on multiple datasets and show that it can handle complex correlations and achieves state-of-the-art PSNR.
arXiv Detail & Related papers (2021-06-05T04:50:43Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - End-to-end Sinkhorn Autoencoder with Noise Generator [10.008055997630304]
We propose a novel end-to-end sinkhorn autoencoder with noise generator for efficient data collection simulation.
Our method outperforms competing approaches on a challenging dataset of simulation data from the Zero Degree Calorimeters of the ALICE experiment at the LHC.
arXiv Detail & Related papers (2020-06-11T18:04:10Z) - Anomaly Detection with SDAE [2.9447568514391067]
Simple, Deep, and Supervised Deep Autoencoders were trained and compared for anomaly detection on the ASHRAE building energy dataset.
The Deep Autoencoder performs best overall; however, the Supervised Deep Autoencoder detects the most anomalies in total.
arXiv Detail & Related papers (2020-04-09T07:22:08Z) - Learning Autoencoders with Relational Regularization [89.53065887608088]
A new framework is proposed for learning autoencoders of data distributions.
We minimize the discrepancy between the model and target distributions with a relational regularization.
We implement the framework with two scalable algorithms, making it applicable for both probabilistic and deterministic autoencoders.
arXiv Detail & Related papers (2020-02-07T17:27:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.