Rethinking Autoencoders for Medical Anomaly Detection from A Theoretical Perspective
- URL: http://arxiv.org/abs/2403.09303v3
- Date: Tue, 9 Jul 2024 01:14:41 GMT
- Title: Rethinking Autoencoders for Medical Anomaly Detection from A Theoretical Perspective
- Authors: Yu Cai, Hao Chen, Kwang-Ting Cheng
- Abstract summary: This study provides a theoretical foundation for AE-based reconstruction methods in anomaly detection.
By leveraging information theory, we reveal that the key to improving AE in anomaly detection lies in minimizing the information entropy of latent vectors.
This is the first effort to theoretically clarify the principles and design philosophy of AE for anomaly detection.
- Score: 27.6598870874816
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical anomaly detection aims to identify abnormal findings using only normal training data, playing a crucial role in health screening and recognizing rare diseases. Reconstruction-based methods, particularly those utilizing autoencoders (AEs), are dominant in this field. They work under the assumption that AEs trained on only normal data cannot reconstruct unseen abnormal regions well, thereby enabling the anomaly detection based on reconstruction errors. However, this assumption does not always hold due to the mismatch between the reconstruction training objective and the anomaly detection task objective, rendering these methods theoretically unsound. This study focuses on providing a theoretical foundation for AE-based reconstruction methods in anomaly detection. By leveraging information theory, we elucidate the principles of these methods and reveal that the key to improving AE in anomaly detection lies in minimizing the information entropy of latent vectors. Experiments on four datasets with two image modalities validate the effectiveness of our theory. To the best of our knowledge, this is the first effort to theoretically clarify the principles and design philosophy of AE for anomaly detection. The code is available at \url{https://github.com/caiyu6666/AE4AD}.
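For readers unfamiliar with the reconstruction paradigm described in the abstract, the following is a minimal PyTorch sketch of AE-based anomaly scoring: an autoencoder is trained only on normal images, and anomalies are flagged by high reconstruction error. This is not the authors' implementation (see their repository linked above); in particular, the small latent penalty is only an illustrative stand-in for the entropy-minimization idea, and the layer sizes, loss weight, and image size are assumptions.

```python
# Minimal sketch of reconstruction-based anomaly detection with an autoencoder.
# NOT the authors' implementation: the latent penalty below is only a crude
# illustrative proxy (assumption) for "minimizing the information entropy of
# latent vectors" described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAE(nn.Module):
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),                   # small bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def training_step(model, x_normal, optimizer, lam=1e-3):
    """One step on normal images only: reconstruction loss plus a crude
    latent penalty (an assumption, standing in for entropy minimization)."""
    recon, z = model(x_normal)
    loss = F.mse_loss(recon, x_normal) + lam * z.pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def anomaly_score(model, x):
    """Per-image anomaly score = mean pixel-wise reconstruction error."""
    recon, _ = model(x)
    return (recon - x).pow(2).flatten(1).mean(dim=1)

if __name__ == "__main__":
    model = ConvAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    normal_batch = torch.rand(8, 1, 64, 64)   # stand-in for normal medical images
    print("loss:", training_step(model, normal_batch, opt))
    print("scores:", anomaly_score(model, torch.rand(2, 1, 64, 64)))
```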
Related papers
- Fine-grained Abnormality Prompt Learning for Zero-shot Anomaly Detection [88.34095233600719]
FAPrompt is a novel framework designed to learn Fine-grained Abnormality Prompts for more accurate zero-shot anomaly detection (ZSAD).
It substantially outperforms state-of-the-art methods by at least 3%-5% AUC/AP in both image- and pixel-level ZSAD tasks.
arXiv Detail & Related papers (2024-10-14T08:41:31Z)
- Exploiting Autoencoder's Weakness to Generate Pseudo Anomalies [17.342474659784823]
A typical approach to anomaly detection is to train an autoencoder (AE) with normal data only so that it learns the patterns or representations of the normal data.
We propose creating pseudo anomalies from learned adaptive noise by exploiting the weakness of the AE, namely that it reconstructs anomalies too well.
arXiv Detail & Related papers (2024-05-09T16:22:24Z)
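As a rough illustration of the adaptive-noise idea in the entry above, the sketch below perturbs normal inputs with learned, input-conditioned noise and trains the AE to reconstruct the clean image from both views. The tiny networks, the noise scale, and the adversarial objective for the noise generator are assumptions made for illustration, not the paper's actual design.

```python
# Hedged sketch: pseudo anomalies from learned adaptive noise (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

ae = nn.Sequential(                                   # stand-in autoencoder
    nn.Conv2d(1, 8, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), nn.Sigmoid(),
)
noise_gen = nn.Sequential(                            # learns input-conditioned noise
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1), nn.Tanh(),
)
opt_ae = torch.optim.Adam(ae.parameters(), lr=1e-3)
opt_noise = torch.optim.Adam(noise_gen.parameters(), lr=1e-3)

def step(x_normal):
    # Pseudo-anomalous view: normal frame plus small learned noise.
    x_pseudo = (x_normal + 0.2 * noise_gen(x_normal)).clamp(0, 1)

    # AE update: reconstruct the clean image from both views, so anomalous
    # content is mapped back toward normal appearance.
    ae_loss = (F.mse_loss(ae(x_normal), x_normal)
               + F.mse_loss(ae(x_pseudo.detach()), x_normal))
    opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()

    # Noise update (assumed adversarial objective): keep the pseudo anomaly hard
    # to reconstruct so that the learned noise does not simply vanish.
    noise_loss = -F.mse_loss(ae(x_pseudo), x_normal)
    opt_noise.zero_grad(); noise_loss.backward(); opt_noise.step()
    return ae_loss.item()

print(step(torch.rand(4, 1, 64, 64)))                 # stand-in batch of normal frames
```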
- Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation: A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal pseudo anomalies (PAs) by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the one-class classification (OCC) setting.
Our method performs on par with existing state-of-the-art PA generation and reconstruction-based methods under the OCC setting.
arXiv Detail & Related papers (2023-11-27T13:14:06Z)
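A crude illustration of the masking idea from the entry above: carve out a random patch of a normal image and fill it with foreign content to obtain a pseudo anomaly. The actual paper inpaints the masked-out region; pasting a patch from another image in the batch, as done below, is a simplifying assumption.

```python
# Hedged sketch: build a pseudo anomaly by overwriting a random patch of a
# normal image with content from another image (a stand-in for inpainting).
import torch

def make_pseudo_anomaly(batch: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """batch: (B, C, H, W) normal images; returns copies in which one random
    patch has been overwritten with content from a different image."""
    b, _, h, w = batch.shape
    out = batch.clone()
    donors = batch[torch.randperm(b)]                      # shuffled source images
    y = torch.randint(0, h - patch + 1, (1,)).item()
    x = torch.randint(0, w - patch + 1, (1,)).item()
    out[:, :, y:y + patch, x:x + patch] = donors[:, :, y:y + patch, x:x + patch]
    return out

# Usage: mix these pseudo anomalies into training/evaluation of a reconstruction
# model so that their reconstruction error stays visibly higher than on normals.
pas = make_pseudo_anomaly(torch.rand(4, 1, 64, 64))
```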
- Reconstruction Error-based Anomaly Detection with Few Outlying Examples [1.011824113969195]
This work investigates approaches that allow reconstruction error-based architectures to place known anomalies outside the domain description of the normal data.
Specifically, our strategy exploits a limited number of anomalous examples to increase the contrast between the reconstruction errors associated with normal examples and those associated with both known and unknown anomalies.
arXiv Detail & Related papers (2023-05-17T08:20:29Z)
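To make the contrast idea in the entry above concrete, here is a hedged sketch of a margin-style objective that keeps reconstruction error low on normal samples while pushing it above a threshold on the few labelled anomalies. The hinge form and the margin value are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: widen the gap between normal and (few) known-anomalous
# reconstruction errors using a margin/hinge term (assumed form).
import torch
import torch.nn.functional as F

def contrastive_recon_loss(recon_n, x_n, recon_a, x_a, margin: float = 0.1):
    """Low error on normals; error on labelled anomalies pushed above `margin`."""
    err_n = (recon_n - x_n).pow(2).flatten(1).mean(dim=1)
    err_a = (recon_a - x_a).pow(2).flatten(1).mean(dim=1)
    return err_n.mean() + F.relu(margin - err_a).mean()

# Usage with stand-in tensors (replace with AE reconstructions of real data):
x_n, x_a = torch.rand(8, 1, 64, 64), torch.rand(2, 1, 64, 64)
loss = contrastive_recon_loss(x_n * 0.9, x_n, x_a * 0.5, x_a)
```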
- Synthetic Pseudo Anomalies for Unsupervised Video Anomaly Detection: A Simple yet Efficient Framework based on Masked Autoencoder [1.9511777443446219]
We propose a simple yet efficient framework for video anomaly detection.
The pseudo anomaly samples are synthesized from only normal data by embedding random mask tokens without extra data processing.
We also propose a normalcy consistency training strategy that encourages the AEs to better learn normal patterns from normal data and the corresponding pseudo-anomaly data.
arXiv Detail & Related papers (2023-03-09T08:33:38Z)
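The sketch below loosely illustrates the mask-token recipe from the entry above: a pseudo-anomalous view is created by swapping random patch embeddings for a learned mask token, the autoencoder is asked to produce the normal reconstruction from both views, and a consistency term ties the two outputs together. The tiny token autoencoder, the masking ratio, and the loss weights are all assumptions for illustration.

```python
# Hedged sketch of mask-token pseudo anomalies with a normalcy consistency term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenAE(nn.Module):
    def __init__(self, dim: int = 32, bottleneck: int = 16):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(dim))    # learned [MASK] embedding
        self.enc = nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU())
        self.dec = nn.Linear(bottleneck, dim)

    def corrupt(self, tokens: torch.Tensor, ratio: float = 0.3) -> torch.Tensor:
        """Replace a random subset of patch tokens with the mask token."""
        replace = torch.rand(tokens.shape[:2], device=tokens.device) < ratio
        out = tokens.clone()
        out[replace] = self.mask_token
        return out

    def forward(self, tokens):
        return self.dec(self.enc(tokens))

def train_step(model, tokens, opt, beta: float = 0.5):
    pseudo = model.corrupt(tokens)                      # pseudo-anomalous view
    rec_n, rec_p = model(tokens), model(pseudo)
    loss = (F.mse_loss(rec_n, tokens)                   # reconstruct the normal view
            + F.mse_loss(rec_p, tokens)                 # map the pseudo anomaly back to normal
            + beta * F.mse_loss(rec_p, rec_n))          # normalcy consistency (assumed form)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

model = TokenAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
print(train_step(model, torch.rand(2, 64, 32), opt))    # (batch, patches, embed_dim)
```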
- Diversity-Measurable Anomaly Detection [106.07413438216416]
We propose the Diversity-Measurable Anomaly Detection (DMAD) framework to enhance reconstruction diversity.
The Pyramid Deformation Module (PDM) essentially decouples deformation from embedding and makes the final anomaly score more reliable.
arXiv Detail & Related papers (2023-03-09T05:52:42Z)
- Are we certain it's anomalous? [57.729669157989235]
Anomaly detection in time series is a complex task because anomalies are rare and temporal correlations are highly non-linear.
Here we propose the novel use of Hyperbolic uncertainty for Anomaly Detection (HypAD).
HypAD learns to reconstruct the input signal in a self-supervised manner.
arXiv Detail & Related papers (2022-11-16T21:31:39Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- What do we learn? Debunking the Myth of Unsupervised Outlier Detection [9.599183039166284]
We investigate what auto-encoders actually learn when they are posed to solve two different tasks.
We show that state-of-the-art (SOTA) AEs are either unable to constrain the latent manifold, allowing reconstruction of abnormal patterns, or fail to accurately restore the inputs from their latent distribution.
We propose novel deformable auto-encoders (AEMorphus) to learn perceptually aware global image priors and locally adapt their morphometry.
arXiv Detail & Related papers (2022-06-08T06:36:16Z)
- Learning Not to Reconstruct Anomalies [14.632592282260363]
An autoencoder (AE) is trained to reconstruct the input with a training set consisting only of normal data.
The AE is then expected to reconstruct the normal data well while reconstructing the anomalous data poorly.
We propose a novel methodology to train AEs with the objective of reconstructing only normal data, regardless of the input.
arXiv Detail & Related papers (2021-10-19T05:22:38Z)
- Exploring and Distilling Posterior and Prior Knowledge for Radiology Report Generation [55.00308939833555]
The PPKED includes three modules: the Posterior Knowledge Explorer (PoKE), the Prior Knowledge Explorer (PrKE), and the Multi-domain Knowledge Distiller (MKD).
PoKE explores the posterior knowledge, which provides explicit abnormal visual regions to alleviate visual data bias.
PrKE explores the prior knowledge from the prior medical knowledge graph (medical knowledge) and prior radiology reports (working experience) to alleviate textual data bias.
arXiv Detail & Related papers (2021-06-13T11:10:02Z)