Projected Sliced Wasserstein Autoencoder-based Hyperspectral Images Anomaly Detection
- URL: http://arxiv.org/abs/2112.11243v2
- Date: Wed, 22 Dec 2021 06:37:44 GMT
- Title: Projected Sliced Wasserstein Autoencoder-based Hyperspectral Images Anomaly Detection
- Authors: Yurong Chen, Hui Zhang, Yaonan Wang, Q. M. Jonathan Wu, Yimin Yang
- Abstract summary: We propose the Projected Sliced Wasserstein (PSW) autoencoder-based anomaly detection method.
In particular, the computation-friendly eigen-decomposition method is leveraged to find the principal component for slicing the high-dimensional data.
Comprehensive experiments conducted on various real-world hyperspectral anomaly detection benchmarks demonstrate the superior performance of the proposed method.
- Score: 42.585075865267946
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Anomaly detection (AD) has been an active research area in various domains.
Yet, the increasing scale, complexity, and dimension of data make traditional
methods inadequate. Recently, deep generative models, such as the variational
autoencoder (VAE), have sparked renewed interest in the AD problem. However, the
probability distribution divergence used as the regularization is too strong,
which prevents the model from capturing the manifold of the true data. In this
paper, we propose the Projected Sliced Wasserstein (PSW) autoencoder-based
anomaly detection method. Rooted in optimal transport, the PSW distance is a
weaker distribution measure than the $f$-divergence. In particular, a
computation-friendly eigen-decomposition method is leveraged to find the
principal components for slicing the high-dimensional data. In this case, the
Wasserstein distance can be calculated in closed form, even when the prior
distribution is not Gaussian. Comprehensive experiments conducted on various
real-world hyperspectral anomaly detection benchmarks demonstrate the superior
performance of the proposed method.
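Below is a minimal, illustrative sketch (not the authors' code) of the two ingredients the abstract highlights: an eigen-decomposition of the encoded data's covariance to pick slicing directions, and the closed-form one-dimensional Wasserstein distance between projected samples, which for equal-size empirical distributions reduces to comparing sorted projections. The encoder outputs and the non-Gaussian prior here are stand-ins.

```python
import numpy as np

def principal_slicing_directions(z, k):
    """Eigen-decompose the sample covariance of encoded data z (n, d)
    and return the top-k eigenvectors as slicing directions (d, k)."""
    zc = z - z.mean(axis=0)
    cov = zc.T @ zc / (len(z) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
    return eigvecs[:, np.argsort(eigvals)[::-1][:k]]

def projected_sliced_w2(z, prior_samples, directions):
    """Closed-form squared 2-Wasserstein distance on each 1-D slice:
    for empirical 1-D distributions of equal size it is the mean squared
    difference of the sorted (quantile-matched) projections."""
    total = 0.0
    for i in range(directions.shape[1]):
        p = np.sort(z @ directions[:, i])
        q = np.sort(prior_samples @ directions[:, i])
        total += np.mean((p - q) ** 2)
    return total / directions.shape[1]

# Illustrative usage: latent codes vs. samples from an arbitrary
# (not necessarily Gaussian) prior -- the 1-D closed form still applies.
rng = np.random.default_rng(0)
z = rng.normal(size=(256, 32))        # stand-in for encoder outputs
prior = rng.laplace(size=(256, 32))   # example of a non-Gaussian prior
dirs = principal_slicing_directions(z, k=8)
print(projected_sliced_w2(z, prior, dirs))
```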
Related papers
- Stochastic Functional Analysis and Multilevel Vector Field Anomaly Detection [0.0]
We develop a novel analysis approach for detecting anomalies in massive vector field datasets.
An optimal vector field Karhunen-Loeve (KL) expansion is applied to such random field data.
The method is applied to the problem of deforestation and degradation in the Amazon forest.
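As a generic illustration of the KL-expansion step (a sketch under simplifying assumptions, not the paper's multilevel construction): the KL basis of discretized field snapshots comes from an eigendecomposition of the empirical covariance, and the residual energy outside the leading modes can serve as a simple anomaly score.

```python
import numpy as np

def kl_expansion_scores(fields, n_modes):
    """fields: (n_samples, n_points) discretized field snapshots.
    Returns per-sample residual energy outside the leading KL modes,
    usable as a simple anomaly score."""
    x = fields - fields.mean(axis=0)
    # SVD of centered data = eigendecomposition of the empirical
    # covariance; rows of vt are the (discrete) KL modes.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    coeffs = x @ vt[:n_modes].T            # KL coefficients
    recon = coeffs @ vt[:n_modes]          # truncated reconstruction
    return np.sum((x - recon) ** 2, axis=1)
```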
arXiv Detail & Related papers (2022-07-11T13:11:16Z)
- Unsupervised Anomaly and Change Detection with Multivariate Gaussianization [8.508880949780893]
Anomaly detection is a challenging problem given the high-dimensionality of the data.
We propose an unsupervised method for detecting anomalies and changes in remote sensing images.
We show the efficiency of the method in experiments involving both anomaly detection and change detection.
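A minimal sketch of the general Gaussianization idea (marginal quantile transforms plus a Gaussian log-density score; the paper's specific multivariate scheme differs):

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussianize(x):
    """Marginal Gaussianization via the empirical CDF (per feature)."""
    n = len(x)
    u = np.apply_along_axis(rankdata, 0, x) / (n + 1)  # ranks -> (0, 1)
    return norm.ppf(u)

def anomaly_scores(x):
    """Negative Gaussian log-density (up to a constant) of the
    Gaussianized data, i.e., a Mahalanobis distance."""
    g = gaussianize(x)
    diff = g - g.mean(axis=0)
    prec = np.linalg.inv(np.cov(g, rowvar=False))
    return np.einsum('ij,jk,ik->i', diff, prec, diff)
```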
arXiv Detail & Related papers (2022-04-12T10:52:33Z)
- The KFIoU Loss for Rotated Object Detection [115.334070064346]
In this paper, we argue that one effective alternative is to devise an approximate loss that can achieve trend-level alignment with the SkewIoU loss.
Specifically, we model the objects as Gaussian distributions and adopt a Kalman filter to inherently mimic the mechanism of SkewIoU.
The resulting new loss, called KFIoU, is easier to implement and works better than the exact SkewIoU.
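A rough, heavily hedged sketch of the described mechanism (the box-to-Gaussian conversion is the common choice in this line of work; the volume normalization here is illustrative, and aligning the two centers is handled by a separate loss term in practice):

```python
import numpy as np

def box_to_gaussian(cx, cy, w, h, angle):
    """Rotated box -> 2-D Gaussian: mean at the center, covariance from
    the rotated half-extents (a common modeling choice in this literature)."""
    r = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    sigma = r @ np.diag([w ** 2 / 4.0, h ** 2 / 4.0]) @ r.T
    return np.array([cx, cy]), sigma

def kf_overlap_ratio(s1, s2):
    """Kalman-filter product of two (center-aligned) Gaussians yields an
    'overlap' covariance; an IoU-like ratio is formed from sqrt-det volumes."""
    gain = s1 @ np.linalg.inv(s1 + s2)
    s3 = s1 - gain @ s1                    # posterior (overlap) covariance
    v1, v2, v3 = (np.sqrt(np.linalg.det(s)) for s in (s1, s2, s3))
    return v3 / (v1 + v2 - v3)
```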
arXiv Detail & Related papers (2022-01-29T10:54:57Z)
- DASVDD: Deep Autoencoding Support Vector Data Descriptor for Anomaly Detection [9.19194451963411]
Semi-supervised anomaly detection aims to detect anomalous samples using a model trained only on normal data.
We propose a method, DASVDD, that jointly learns the parameters of an autoencoder while minimizing the volume of an enclosing hyper-sphere on its latent representation.
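A minimal sketch of this kind of joint objective, with an assumed fixed hyper-sphere center and weighting (the actual method handles the center and the anomaly score more carefully):

```python
import torch
import torch.nn as nn

class AESVDD(nn.Module):
    """Autoencoder whose latent codes are also pulled toward a hyper-sphere
    center c; anomalies score high on reconstruction + distance-to-center."""
    def __init__(self, in_dim=32, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(),
                                 nn.Linear(16, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(),
                                 nn.Linear(16, in_dim))
        self.register_buffer("c", torch.zeros(latent_dim))  # assumed fixed center

    def loss(self, x, gamma=1.0):
        z = self.enc(x)
        rec_err = ((x - self.dec(z)) ** 2).sum(dim=1)
        sphere = ((z - self.c) ** 2).sum(dim=1)  # enclosing-volume surrogate
        return (rec_err + gamma * sphere).mean()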
arXiv Detail & Related papers (2021-06-09T21:57:41Z)
- Augmented Sliced Wasserstein Distances [55.028065567756066]
We propose a new family of distance metrics, called augmented sliced Wasserstein distances (ASWDs).
ASWDs are constructed by first mapping samples to higher-dimensional hypersurfaces parameterized by neural networks.
Numerical results demonstrate that the ASWD significantly outperforms other Wasserstein variants for both synthetic and real-world problems.
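A toy sketch of the construction as summarized, with an untrained network standing in for the learned hypersurface map: each sample is lifted injectively to a higher-dimensional space by concatenating it with a neural-network feature, and the ordinary sliced Wasserstein distance is computed on the augmented samples.

```python
import numpy as np
import torch
import torch.nn as nn

g = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 8))  # stand-in map

def augment(x):
    """Injective lift x -> (x, g(x)): samples land on a hypersurface
    in a higher-dimensional space."""
    with torch.no_grad():
        return torch.cat([x, g(x)], dim=1).numpy()

def sliced_w2(a, b, n_proj=64):
    """Monte-Carlo sliced W2 over random projection directions."""
    rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=a.shape[1])
        theta /= np.linalg.norm(theta)
        total += np.mean((np.sort(a @ theta) - np.sort(b @ theta)) ** 2)
    return total / n_proj

x = torch.randn(256, 2)
y = torch.randn(256, 2) + 1.0
print(sliced_w2(augment(x), augment(y)))
```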
arXiv Detail & Related papers (2020-06-15T23:00:08Z)
- Anomaly Detection in Trajectory Data with Normalizing Flows [0.0]
We propose an approach based on normalizing flows that enables complex density estimation from data with neural networks.
Our proposal computes exact model likelihood values, an important feature of normalizing flows, for each segment of the trajectory.
We evaluate our methodology, named aggregated anomaly detection with normalizing flows (GRADINGS), on real-world trajectory data and compare it with more traditional anomaly detection techniques.
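A toy sketch of the scoring idea, with a single affine layer standing in for a full normalizing flow (parameters below are assumed, not fitted): the exact log-likelihood follows from the change-of-variables formula, and per-segment likelihoods are aggregated into a trajectory score.

```python
import numpy as np

# Toy invertible affine flow z = (x - b) / a (elementwise, a > 0);
# real flows stack many invertible layers but the formula is the same.
a, b = np.array([1.5, 0.7]), np.array([0.2, -0.1])   # assumed fitted params

def log_likelihood(x):
    z = (x - b) / a
    log_base = -0.5 * np.sum(z ** 2 + np.log(2 * np.pi), axis=-1)
    log_det = -np.sum(np.log(a))   # log |det dz/dx| of the affine map
    return log_base + log_det      # exact, via change of variables

def trajectory_score(segments):
    """Aggregate exact per-segment log-likelihoods; low = anomalous."""
    return np.mean([log_likelihood(s).mean() for s in segments])
```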
arXiv Detail & Related papers (2020-04-13T14:16:40Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of the Adversarial Autoencoder that uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
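A heavily simplified sketch of the mirrored-critic idea (the pairing and architecture are illustrative, and the Lipschitz constraint required by Wasserstein critics, e.g. a gradient penalty, is omitted):

```python
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(2 * 32, 64), nn.ReLU(), nn.Linear(64, 1))

def mirrored_wasserstein_loss(x, x_hat):
    """The critic scores mirrored pairs: (x, x) should score higher than
    (x, x_hat). The critic maximizes this gap; the autoencoder minimizes
    it, pushing reconstructions toward the input at a semantic level."""
    real = critic(torch.cat([x, x], dim=1)).mean()
    fake = critic(torch.cat([x, x_hat], dim=1)).mean()
    return real - fake
```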
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
- SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection [63.253850875265115]
Outlier detection (OD) is a key machine learning (ML) task for identifying abnormal objects from general samples.
We propose a modular acceleration system, called SUOD, to address this scalability challenge.
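SUOD ships as an actual package; rather than guessing its API, here is a generic, hypothetical sketch of the acceleration idea using scikit-learn and joblib: project the data to a lower dimension, then fit a heterogeneous set of detectors in parallel.

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.random_projection import GaussianRandomProjection
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

def fit_one(detector, x):
    return detector.fit(x)

x = np.random.default_rng(0).normal(size=(1000, 100))
# Data module: random projection shrinks the feature dimension.
x_small = GaussianRandomProjection(n_components=20).fit_transform(x)
# Heterogeneous detector pool, trained in parallel (system module).
detectors = [IsolationForest(random_state=i) for i in range(3)] + \
            [LocalOutlierFactor(novelty=True)]
fitted = Parallel(n_jobs=-1)(delayed(fit_one)(d, x_small) for d in detectors)
```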
arXiv Detail & Related papers (2020-03-11T00:22:50Z)
- Towards Out-of-Distribution Detection with Divergence Guarantee in Deep Generative Models [22.697643259435115]
Deep generative models may assign higher likelihood to out-of-distribution (OOD) data than in-distribution (ID) data.
We prove theorems to investigate the divergences in flow-based models.
We propose two group anomaly detection methods.
arXiv Detail & Related papers (2020-02-09T09:54:12Z)
- Simple and Effective Prevention of Mode Collapse in Deep One-Class Classification [93.2334223970488]
We propose two regularizers to prevent hypersphere collapse in deep SVDD.
The first regularizer is based on injecting random noise via the standard cross-entropy loss.
The second regularizer penalizes the minibatch variance when it becomes too small.
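A minimal sketch of the second regularizer as summarized, with an assumed variance floor and weight: the minibatch variance of the embeddings is penalized only when it drops below the floor, discouraging the hypersphere from collapsing to a point.

```python
import torch

def variance_regularizer(z, floor=1.0, weight=1.0):
    """z: (batch, dim) embeddings. Hinge penalty that activates only
    when the average per-dimension minibatch variance falls below
    `floor` (both hyperparameters are assumed, not from the paper)."""
    var = z.var(dim=0).mean()
    return weight * torch.clamp(floor - var, min=0.0)
```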
arXiv Detail & Related papers (2020-01-24T03:44:47Z)