Transformation Based Deep Anomaly Detection in Astronomical Images
- URL: http://arxiv.org/abs/2005.07779v1
- Date: Fri, 15 May 2020 21:02:12 GMT
- Title: Transformation Based Deep Anomaly Detection in Astronomical Images
- Authors: Esteban Reyes, Pablo A. Estévez
- Abstract summary: We introduce new filter-based transformations useful for detecting anomalies in astronomical images.
We also propose a transformation selection strategy that allows us to find indistinguishable pairs of transformations.
The models were tested on astronomical images from the High Cadence Transient Survey (HiTS) and Zwicky Transient Facility (ZTF) datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose several enhancements to a geometric transformation
based model for anomaly detection in images (GeoTransform). The model assumes
that the anomaly class is unknown and that only inlier samples are available
for training. We introduce new filter based transformations useful for
detecting anomalies in astronomical images, that highlight artifact properties
to make them more easily distinguishable from real objects. In addition, we
propose a transformation selection strategy that allows us to find
indistinguishable pairs of transformations. This results in an improvement of
the area under the Receiver Operating Characteristic curve (AUROC) and accuracy
performance, as well as in a dimensionality reduction. The models were tested
on astronomical images from the High Cadence Transient Survey (HiTS) and Zwicky
Transient Facility (ZTF) datasets. The best models obtained an average AUROC of
99.20% for HiTS and 91.39% for ZTF. The improvement over the original
GeoTransform algorithm and over baseline methods, such as the One-Class Support
Vector Machine and deep learning-based methods, is significant both
statistically and in practice.
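To make the mechanism concrete, here is a minimal sketch of transformation-based anomaly scoring in the spirit of GeoTransform; it is not the authors' implementation. The transformation set (rotations, a flip, plus a Laplacian and a Gaussian blur as stand-ins for the paper's filter-based transformations), the logistic-regression classifier, and the synthetic 21x21 stamps are all assumptions chosen to keep the example self-contained. A classifier is trained on inliers to predict which transformation was applied; at test time, the mean probability assigned to the correct transformation serves as the normality score, and AUROC is computed over inliers versus anomalies.

```python
# Toy sketch of transformation-based anomaly detection (GeoTransform-style).
# NOT the authors' code: filters, classifier, and synthetic data are assumptions.
import numpy as np
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Candidate transformations: geometric ones plus simple filter-based ones
# (Laplacian, Gaussian blur) that emphasise artifact-like structure.
TRANSFORMS = [
    lambda x: x,
    lambda x: np.rot90(x, 1),
    lambda x: np.rot90(x, 2),
    lambda x: np.fliplr(x),
    lambda x: ndimage.laplace(x),
    lambda x: ndimage.gaussian_filter(x, sigma=1.0),
]

def build_dataset(images):
    """Apply every transformation to every inlier image; the label is the
    index of the transformation that was applied."""
    feats, labels = [], []
    for img in images:
        for t_idx, t in enumerate(TRANSFORMS):
            feats.append(t(img).ravel())
            labels.append(t_idx)
    return np.array(feats), np.array(labels)

def anomaly_scores(clf, images):
    """Normality score = mean probability assigned to the correct
    transformation; anomalies yield low confidence for it."""
    scores = []
    for img in images:
        probs = [clf.predict_proba(t(img).ravel()[None, :])[0, t_idx]
                 for t_idx, t in enumerate(TRANSFORMS)]
        scores.append(np.mean(probs))
    return np.array(scores)

def make_blob(n=21):
    """Toy stand-in for a 21x21 astronomical stamp: a smooth blob plus noise."""
    yy, xx = np.mgrid[:n, :n]
    blob = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / 8.0)
    return blob + 0.05 * rng.standard_normal((n, n))

inliers_train = [make_blob() for _ in range(200)]
inliers_test = [make_blob() for _ in range(50)]
anomalies_test = [rng.standard_normal((21, 21)) for _ in range(50)]  # noise "artifacts"

X, y = build_dataset(inliers_train)
clf = LogisticRegression(max_iter=2000).fit(X, y)

s_in = anomaly_scores(clf, inliers_test)
s_out = anomaly_scores(clf, anomalies_test)
y_true = np.concatenate([np.ones_like(s_in), np.zeros_like(s_out)])
auroc = roc_auc_score(y_true, np.concatenate([s_in, s_out]))
print(f"toy AUROC (inliers vs anomalies): {auroc:.3f}")
```

The paper's transformation selection step, which identifies pairs of transformations the network cannot tell apart and thereby reduces dimensionality, is omitted from this sketch.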
Related papers
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- ISSTAD: Incremental Self-Supervised Learning Based on Transformer for Anomaly Detection and Localization [12.975540251326683]
We introduce a novel approach based on the Transformer backbone network.
We train a Masked Autoencoder (MAE) model solely on normal images.
In the subsequent stage, we apply pixel-level data augmentation techniques to generate corrupted normal images.
This process allows the model to learn how to repair corrupted regions and classify the status of each pixel.
arXiv Detail & Related papers (2023-03-30T13:11:26Z)
- GradViT: Gradient Inversion of Vision Transformers [83.54779732309653]
We demonstrate the vulnerability of vision transformers (ViTs) to gradient-based inversion attacks.
We introduce a method, named GradViT, that optimizes random noise into natural-looking images.
We observe unprecedentedly high fidelity and closeness to the original (hidden) data.
arXiv Detail & Related papers (2022-03-22T17:06:07Z)
- Coarse-to-Fine Sparse Transformer for Hyperspectral Image Reconstruction [138.04956118993934]
We propose a novel Transformer-based method, coarse-to-fine sparse Transformer (CST), which embeds HSI sparsity into deep learning for HSI reconstruction.
In particular, CST uses our proposed spectra-aware screening mechanism (SASM) for coarse patch selection. The selected patches are then fed into our customized spectra-aggregation hashing multi-head self-attention (SAH-MSA) for fine pixel clustering and self-similarity capturing.
arXiv Detail & Related papers (2022-03-09T16:17:47Z)
- Extracting Deformation-Aware Local Features by Learning to Deform [3.364554138758565]
We present a new approach to compute features from still images that are robust to non-rigid deformations.
We train the model architecture end-to-end by applying non-rigid deformations to objects in a simulated environment.
Experiments show that our method outperforms state-of-the-art handcrafted, learning-based image, and RGB-D descriptors in different datasets.
arXiv Detail & Related papers (2021-11-20T15:46:33Z)
- A Hierarchical Transformation-Discriminating Generative Model for Few Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z)
- CutPaste: Self-Supervised Learning for Anomaly Detection and Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm is general enough to detect various types of real-world defects (a minimal sketch of this recipe appears after this list).
arXiv Detail & Related papers (2021-04-08T19:04:55Z)
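As a closing illustration of the self-supervised recipe summarized in the CutPaste entry above, here is a minimal sketch, not the paper's implementation: the random embeddings stand in for representations learned by some encoder, and both the cut-and-paste augmentation and the Gaussian (Mahalanobis) scoring are simplifying assumptions.

```python
# Minimal CutPaste-style sketch: a synthetic-defect augmentation plus a
# one-class Gaussian density scorer on embeddings. The embeddings below are
# random stand-ins, NOT outputs of a trained encoder.
import numpy as np

rng = np.random.default_rng(0)

def cutpaste(image, patch_size=8):
    """Cut a random patch and paste it at another random location,
    turning a normal image into a synthetic 'defect'."""
    h, w = image.shape
    out = image.copy()
    y1, x1 = rng.integers(0, h - patch_size), rng.integers(0, w - patch_size)
    y2, x2 = rng.integers(0, h - patch_size), rng.integers(0, w - patch_size)
    out[y2:y2 + patch_size, x2:x2 + patch_size] = \
        image[y1:y1 + patch_size, x1:x1 + patch_size]
    return out

class GaussianDensityScorer:
    """One-class scorer: fit a multivariate Gaussian on normal embeddings,
    score test embeddings by squared Mahalanobis distance (higher = more anomalous)."""
    def fit(self, emb):
        self.mean = emb.mean(axis=0)
        cov = np.cov(emb, rowvar=False) + 1e-6 * np.eye(emb.shape[1])
        self.prec = np.linalg.inv(cov)
        return self

    def score(self, emb):
        d = emb - self.mean
        return np.einsum("ij,jk,ik->i", d, self.prec, d)

# Stand-in embeddings of normal training images and of (shifted) test images.
train_emb = rng.standard_normal((500, 32))
test_emb = rng.standard_normal((10, 32)) + 0.5

scorer = GaussianDensityScorer().fit(train_emb)
print(scorer.score(test_emb))

# cutpaste() would be used upstream, to create the "normal vs. augmented"
# classification task that trains the representation encoder.
example = rng.standard_normal((32, 32))
augmented = cutpaste(example)
```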