Anomaly Detection in Image Datasets Using Convolutional Neural Networks,
Center Loss, and Mahalanobis Distance
- URL: http://arxiv.org/abs/2104.06193v1
- Date: Tue, 13 Apr 2021 13:44:03 GMT
- Title: Anomaly Detection in Image Datasets Using Convolutional Neural Networks,
Center Loss, and Mahalanobis Distance
- Authors: Garnik Vareldzhan, Kirill Yurkov, Konstantin Ushenin
- Abstract summary: User activities generate a significant number of poor-quality or irrelevant images and data vectors.
For neural networks, anomalies are usually defined as out-of-distribution samples.
This work proposes methods for supervised and semi-supervised detection of out-of-distribution samples in image datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: User activities generate a significant number of poor-quality or irrelevant
images and data vectors that cannot be processed in the main data processing
pipeline or included in the training dataset. Such samples can be found by
manual expert analysis or with anomaly detection algorithms. There are several
formal definitions of anomalous samples. For neural networks, anomalies are
usually defined as out-of-distribution samples. This work proposes methods for
supervised and semi-supervised detection of out-of-distribution samples in
image datasets. Our approach extends a typical neural network that solves the
image classification problem, so that after the extension a single network
solves the image classification and anomaly detection problems simultaneously.
The proposed methods are based on the center loss and its effect on the deep
feature distribution in the last hidden layer of the network. This paper
provides an analysis of the proposed methods for LeNet and EfficientNet-B0 on
the MNIST and ImageNet-30 datasets.
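To make the recipe concrete, here is a minimal PyTorch sketch of the two ingredients the abstract names: a center-loss term that compacts per-class deep features, and a Mahalanobis-distance score computed on those features. All names, dimensions, and the shared-covariance choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Center loss: pulls each sample's deep feature toward its class center.
    The centers are trainable parameters optimized jointly with the network."""
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Squared distance between each feature and its own class center.
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

@torch.no_grad()
def fit_mahalanobis(features: torch.Tensor, labels: torch.Tensor, num_classes: int):
    """Estimate per-class means and a shared (tied) precision matrix
    from in-distribution deep features of the last hidden layer."""
    means = torch.stack([features[labels == c].mean(0) for c in range(num_classes)])
    centered = features - means[labels]
    cov = centered.T @ centered / features.shape[0]
    precision = torch.linalg.inv(cov + 1e-6 * torch.eye(cov.shape[0]))
    return means, precision

@torch.no_grad()
def mahalanobis_score(features, means, precision):
    """Anomaly score: squared Mahalanobis distance to the closest class mean."""
    diffs = features.unsqueeze(1) - means.unsqueeze(0)           # (N, C, D)
    d2 = torch.einsum('ncd,de,nce->nc', diffs, precision, diffs)
    return d2.min(dim=1).values                                   # higher = more anomalous
```

Training would minimize cross-entropy plus a weighted CenterLoss on the last hidden layer; afterwards, fit_mahalanobis is run once on the training features and mahalanobis_score is thresholded to flag out-of-distribution inputs.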
Related papers
- On the Convergence of Locally Adaptive and Scalable Diffusion-Based Sampling Methods for Deep Bayesian Neural Network Posteriors [2.3265565167163906]
Bayesian neural networks are a promising approach for modeling uncertainties in deep neural networks.
Generating samples from the posterior distribution of neural networks, however, remains a major challenge.
One advance in that direction is the incorporation of adaptive step sizes into Markov chain Monte Carlo sampling algorithms.
In this paper, we demonstrate that these methods can have a substantial bias in the distribution they sample, even in the limit of vanishing step sizes and at full batch size.
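As a picture of what an adaptive step size looks like in this setting, the toy update below is a preconditioned SGLD step in the style of pSGLD, where an RMSprop-like running second moment rescales drift and noise per parameter; the bias the paper studies arises because such state-dependent rescaling changes the stationary distribution unless corrected. This is an illustrative sketch, not the paper's experimental setup.

```python
import torch

def psgld_step(theta, grad_log_post, v, lr=1e-3, alpha=0.99, eps=1e-8):
    """One preconditioned SGLD step with an RMSprop-style adaptive step size.
    v is the running second moment of the gradient (init: torch.zeros_like(theta))."""
    g = grad_log_post(theta)
    v.mul_(alpha).addcmul_(g, g, value=1 - alpha)    # update second-moment estimate
    precond = 1.0 / (v.sqrt() + eps)                 # per-parameter step size
    noise = torch.randn_like(theta) * torch.sqrt(2 * lr * precond)
    return theta + lr * precond * g + noise, v
```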
arXiv Detail & Related papers (2024-03-13T15:21:14Z)
- Graph Neural Networks with Trainable Adjacency Matrices for Fault Diagnosis on Multivariate Sensor Data [69.25738064847175]
It is necessary to consider the behavior of the signal from each sensor separately and to take into account their correlations and hidden relationships with each other.
The graph nodes can represent data from the different sensors, and the edges can express the influence of these data on each other.
The paper proposes constructing the graph during the training of the graph neural network, which allows models to be trained on data where the dependencies between the sensors are not known in advance.
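A minimal sketch of a trainable adjacency matrix (a simplification of the idea, not the paper's architecture): the graph over sensors is a free parameter, learned jointly with the layer weights.

```python
import torch
import torch.nn as nn

class TrainableAdjacencyGCN(nn.Module):
    """GCN-style layer whose sensor graph is itself a learned parameter,
    so dependencies between sensors need not be known in advance."""
    def __init__(self, num_sensors: int, in_dim: int, out_dim: int):
        super().__init__()
        self.adj_logits = nn.Parameter(torch.zeros(num_sensors, num_sensors))
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_sensors, in_dim); row-softmax yields a soft adjacency.
        adj = torch.softmax(self.adj_logits, dim=-1)
        return torch.relu(self.linear(adj @ x))
```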
arXiv Detail & Related papers (2022-10-20T11:03:21Z)
- AnoDFDNet: A Deep Feature Difference Network for Anomaly Detection [6.508649912734565]
We propose a novel anomaly detection (AD) approach for high-speed train images based on convolutional neural networks and the Vision Transformer.
The proposed method detects abnormal differences between two images of the same region taken at different times.
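The core feature-difference idea can be sketched with an off-the-shelf backbone (the actual AnoDFDNet combines a CNN with a Vision Transformer; the ResNet-18 encoder here is a stand-in):

```python
import torch
import torch.nn as nn
import torchvision.models as models

class FeatureDifference(nn.Module):
    """Score anomalies as the per-location distance between deep features of
    two co-registered images of the same region taken at different times."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Keep the convolutional stages; drop global pooling and the classifier.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])

    @torch.no_grad()
    def forward(self, img_t0: torch.Tensor, img_t1: torch.Tensor) -> torch.Tensor:
        f0, f1 = self.encoder(img_t0), self.encoder(img_t1)
        # Channel-wise L2 distance -> coarse spatial anomaly map.
        return (f0 - f1).pow(2).sum(dim=1).sqrt()
```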
arXiv Detail & Related papers (2022-03-29T02:24:58Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
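To make the input format concrete, the toy generator below produces a sum of complex sinusoids and uniformly quantizes the in-phase and quadrature components to a chosen bit depth. It mimics the kind of low-resolution data SignalNet consumes; the signal model and quantizer are illustrative assumptions, not the paper's data pipeline.

```python
import numpy as np

def quantized_iq(freqs, amps, n_samples=64, bits=3, noise_std=0.05, rng=None):
    """Sum of complex sinusoids observed as b-bit quantized IQ samples."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(n_samples)
    signal = sum(a * np.exp(2j * np.pi * f * t) for f, a in zip(freqs, amps))
    signal += noise_std * (rng.standard_normal(n_samples)
                           + 1j * rng.standard_normal(n_samples))
    levels = 2 ** bits
    def q(x):  # uniform mid-rise quantizer on [-1, 1]
        cells = np.clip(np.floor((x + 1) / 2 * levels), 0, levels - 1)
        return (cells + 0.5) / levels * 2 - 1
    return np.stack([q(signal.real), q(signal.imag)], axis=-1)

# Two sinusoids at normalized frequencies 0.10 and 0.27, three-bit samples:
x = quantized_iq(freqs=[0.10, 0.27], amps=[0.4, 0.3], bits=3)
```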
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- An Introduction to Robust Graph Convolutional Networks [71.68610791161355]
We propose novel Robust Graph Convolutional Networks for potentially erroneous single-view or multi-view data.
By incorporating extra layers via autoencoders into traditional graph convolutional networks, we characterize and handle typical error models explicitly.
arXiv Detail & Related papers (2021-03-27T04:47:59Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
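The contrastive instance pair couples a target node with a sampled local subgraph; matching pairs should score high for normal nodes and low for anomalies. A toy discriminator in that spirit (the real model uses a GNN encoder over sampled subgraphs; the linear encoders and mean pooling below are stand-ins):

```python
import torch
import torch.nn as nn

class PairDiscriminator(nn.Module):
    """Agreement score between a target node's attributes and an embedding
    of its local subgraph; a low score suggests the node is anomalous."""
    def __init__(self, attr_dim: int, hid_dim: int = 64):
        super().__init__()
        self.node_enc = nn.Linear(attr_dim, hid_dim)
        self.graph_enc = nn.Linear(attr_dim, hid_dim)

    def forward(self, node_attr, subgraph_attrs):
        h_node = self.node_enc(node_attr)                  # (batch, hid)
        h_graph = self.graph_enc(subgraph_attrs).mean(1)   # pool (batch, n, hid)
        return (h_node * h_graph).sum(-1)                  # dot-product agreement
```

Trained with positive pairs (a node with its own neighborhood) against negative pairs (a node with a random subgraph), the score doubles as an anomaly detector.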
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Bayesian Nested Neural Networks for Uncertainty Calibration and Adaptive Compression [40.35734017517066]
Nested networks or slimmable networks are neural networks whose architectures can be adjusted instantly during testing time.
Recent studies have focused on a "nested dropout" layer, which is able to order the nodes of a layer by importance during training.
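The mechanism is simple to sketch: during training, sample a width k and zero every unit after it, which forces units into an importance ordering and lets the layer be truncated ("slimmed") to any width at test time. A minimal, hypothetical PyTorch version:

```python
import torch
import torch.nn as nn

class NestedDropout(nn.Module):
    """Nested dropout: keep a random prefix of units during training so that
    earlier units learn the most important features."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x                                      # full width at test time
        k = int(torch.randint(1, x.shape[-1] + 1, (1,)))  # sampled keep-width
        mask = torch.zeros(x.shape[-1], device=x.device)
        mask[:k] = 1.0
        return x * mask
```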
arXiv Detail & Related papers (2021-01-27T12:34:58Z)
- Image Anomaly Detection by Aggregating Deep Pyramidal Representations [16.246831343527052]
Anomaly detection consists of identifying, within a dataset, those samples that significantly differ from the majority of the data.
This paper focuses on image anomaly detection using a deep neural network with multiple pyramid levels to analyze the image features at different scales.
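A minimal sketch of the multi-scale idea (a simplification, not the paper's architecture): run one encoder over several rescaled copies of an image batch and aggregate the resulting feature maps at a common resolution.

```python
import torch
import torch.nn.functional as F

def pyramid_features(encoder, img, scales=(1.0, 0.5, 0.25)):
    """Encode rescaled copies of a (B, C, H, W) image batch and concatenate
    the feature maps, so anomalies can be analyzed at several scales at once."""
    feats = [encoder(F.interpolate(img, scale_factor=s, mode='bilinear',
                                   align_corners=False)) for s in scales]
    target = feats[0].shape[-2:]  # common spatial size for aggregation
    feats = [F.interpolate(f, size=target, mode='bilinear', align_corners=False)
             for f in feats]
    return torch.cat(feats, dim=1)
```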
arXiv Detail & Related papers (2020-11-12T09:58:27Z)
- Semi-supervised deep learning based on label propagation in a 2D embedded space [117.9296191012968]
Proposed solutions propagate labels from a small set of labeled images to a large set of unlabeled ones in order to train a deep neural network model.
We present a loop in which a deep neural network (VGG-16) is retrained at each iteration on a set containing progressively more correctly labeled samples.
As the labeled set improves over the iterations, so do the features learned by the neural network.
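One iteration of such a loop can be sketched with off-the-shelf tools: project deep features into a 2-D embedding and spread the few known labels to the unlabeled samples. The t-SNE/LabelSpreading combination below is an assumption in the spirit of the summary, not necessarily the paper's exact choices.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.semi_supervised import LabelSpreading

def propagate_labels(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """features: deep features (n, d); labels: class ids with -1 for unlabeled.
    Returns pseudo-labels for every sample, used to retrain the network."""
    embedding = TSNE(n_components=2).fit_transform(features)
    model = LabelSpreading(kernel='knn', n_neighbors=7).fit(embedding, labels)
    return model.transduction_
```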
arXiv Detail & Related papers (2020-08-02T20:08:54Z)
- $\text{A}^3$: Activation Anomaly Analysis [0.7734726150561088]
We show that the hidden activation values contain information useful to distinguish between normal and anomalous samples.
Our approach combines three neural networks in a purely data-driven end-to-end model.
Thanks to the anomaly network, our method even works in strict semi-supervised settings.
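The key observation translates into a simple structure: a frozen target network supplies its hidden activations, and a second ("alarm") network classifies those activations as normal or anomalous. The sketch below shows only this target/alarm pairing with made-up sizes; the paper's third, anomaly-generating network is omitted.

```python
import torch
import torch.nn as nn

def hidden_activations(layers, x):
    """Run x through the (frozen) target network, collecting each hidden layer's output."""
    acts = []
    for layer in layers:
        x = layer(x)
        acts.append(x)
    return torch.cat(acts, dim=-1)

# Hypothetical target network and alarm network sizes for illustration.
target = nn.ModuleList([nn.Sequential(nn.Linear(784, 256), nn.ReLU()),
                        nn.Sequential(nn.Linear(256, 64), nn.ReLU())])
alarm = nn.Sequential(nn.Linear(256 + 64, 64), nn.ReLU(), nn.Linear(64, 1))

x = torch.randn(8, 784)
score = torch.sigmoid(alarm(hidden_activations(target, x)))  # estimated P(anomalous)
```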
arXiv Detail & Related papers (2020-03-03T21:23:56Z)
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
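The contrast with dropout is easy to sketch: instead of zeroing units, selected feature-map elements are perturbed. The toy below keeps only that replace-zeroing-with-distortion idea; the published method selects block-wise regions and calibrates the distortion magnitude, which this sketch does not.

```python
import torch

def disout(x: torch.Tensor, prob: float = 0.1, alpha: float = 1.0) -> torch.Tensor:
    """Feature-map distortion: perturb randomly chosen elements with noise
    scaled to the mean feature magnitude, rather than zeroing them."""
    mask = (torch.rand_like(x) < prob).float()            # elements to distort
    noise = torch.randn_like(x) * alpha * x.detach().abs().mean()
    return x + mask * noise
```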
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.