Connective Reconstruction-based Novelty Detection
- URL: http://arxiv.org/abs/2210.13917v1
- Date: Tue, 25 Oct 2022 11:09:39 GMT
- Title: Connective Reconstruction-based Novelty Detection
- Authors: Seyyed Morteza Hashemi, Parvaneh Aliniya, Parvin Razzaghi
- Abstract summary: Deep learning has enabled us to analyze real-world data that contain unexplained samples.
GAN-based approaches have been widely used to address this problem due to their ability to perform distribution fitting.
We propose a simple yet efficient reconstruction-based method that avoids adding complexities to compensate for the limitations of GAN models.
- Score: 3.7706789983985303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detection of out-of-distribution samples is one of the critical tasks for
real-world applications of computer vision. The advancement of deep learning
has enabled us to analyze real-world data that contain unexplained samples,
accentuating the need to detect out-of-distribution instances.
GAN-based approaches have been widely used to address this problem due to their
ability to perform distribution fitting; however, they are accompanied by
training instability and mode collapse. We propose a simple yet efficient
reconstruction-based method that avoids adding complexities to compensate for
the limitations of GAN models while outperforming them. Unlike previous
reconstruction-based works that only utilize reconstruction error or generated
samples, our proposed method simultaneously incorporates both of them in the
detection task. Our model, which we call "Connective Novelty Detection," has two
subnetworks: an autoencoder and a binary classifier. The autoencoder learns
the representation of the positive class by reconstructing its samples. Then, the
model creates negative and connected positive examples using real and generated
samples. Negative instances are generated by manipulating the real data so that
their distribution stays close to the positive class, yielding a more accurate
decision boundary for the classifier. To make the detection robust to
reconstruction error, connected positive samples are created by combining the
real and generated samples. Finally, the binary classifier is trained using
connected positive and negative examples. We demonstrate a considerable
improvement in novelty detection over state-of-the-art methods on the MNIST and
Caltech-256 datasets.
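The pipeline above can be made concrete with a short sketch. The following PyTorch-style code is a minimal illustration, not the paper's implementation: the alpha-mixing used for connected positives, the noise-based manipulation used for negatives, and the architectures are all assumptions.

```python
# Minimal sketch of the Connective Novelty Detection pipeline (assumptions noted inline).
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, dim=784, hid=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hid), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(hid, dim), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

ae = AutoEncoder()
clf = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1))
opt_ae = torch.optim.Adam(ae.parameters(), lr=1e-3)
opt_clf = torch.optim.Adam(clf.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(x_pos):
    # 1) The autoencoder learns the positive class by reconstruction.
    loss_rec = ((ae(x_pos) - x_pos) ** 2).mean()
    opt_ae.zero_grad(); loss_rec.backward(); opt_ae.step()

    with torch.no_grad():
        x_rec = ae(x_pos)
    # 2) "Connected" positives combine real and generated samples so the
    #    classifier tolerates reconstruction error (alpha-mixing is an assumption).
    alpha = torch.rand(x_pos.size(0), 1)
    x_connected = alpha * x_pos + (1 - alpha) * x_rec
    # 3) Negatives are manipulated real data kept close to the positive
    #    distribution (additive noise here is an illustrative choice).
    x_neg = (x_pos + 0.3 * torch.randn_like(x_pos)).clamp(0, 1)

    # 4) The binary classifier separates connected positives from negatives.
    logits = clf(torch.cat([x_connected, x_neg]))
    labels = torch.cat([torch.ones(len(x_pos), 1), torch.zeros(len(x_neg), 1)])
    loss_clf = bce(logits, labels)
    opt_clf.zero_grad(); loss_clf.backward(); opt_clf.step()
```

At test time, one plausible scoring rule is to threshold the classifier's output on a test sample (or on its connection with the autoencoder's reconstruction); the abstract does not specify the exact rule.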
Related papers
- Diffusion-based Layer-wise Semantic Reconstruction for Unsupervised Out-of-Distribution Detection [30.02748131967826]
Unsupervised out-of-distribution (OOD) detection aims to identify out-of-domain data by learning only from unlabeled in-distribution (ID) training samples.
Current reconstruction-based methods provide a good alternative approach by measuring the reconstruction error between the input and its corresponding generative counterpart in the pixel/feature space.
We propose the diffusion-based layer-wise semantic reconstruction approach for unsupervised OOD detection.
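The shared reconstruction-error criterion is easy to state in code. A minimal sketch, assuming any trained reconstructor `model` (autoencoder or diffusion-based) and a pixel-space L2 error:

```python
import torch

def recon_ood_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Per-sample OOD score: higher = less well reconstructed = more likely OOD."""
    with torch.no_grad():
        x_hat = model(x)  # generative counterpart of x
    return ((x - x_hat) ** 2).flatten(1).mean(dim=1)
```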
arXiv Detail & Related papers (2024-11-16T04:54:07Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Anomaly Detection with Ensemble of Encoder and Decoder [2.8199078343161266]
Anomaly detection in power grids aims to detect and discriminate anomalies caused by cyber attacks against the power system.
We propose a novel anomaly detection method by modeling the data distribution of normal samples via multiple encoders and decoders.
Experimental results on network intrusion and power system datasets demonstrate the effectiveness of our proposed method.
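A plausible reading of the ensemble idea, with the aggregation rule (a simple mean over members) as an assumption:

```python
import torch

def ensemble_anomaly_score(autoencoders, x):
    # Average per-sample reconstruction error over the ensemble; a sample
    # poorly reconstructed by most encoder-decoder pairs is flagged anomalous.
    with torch.no_grad():
        errs = [((ae(x) - x) ** 2).flatten(1).mean(dim=1) for ae in autoencoders]
    return torch.stack(errs).mean(dim=0)
```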
arXiv Detail & Related papers (2023-03-11T15:49:29Z)
- READ: Aggregating Reconstruction Error into Out-of-distribution Detection [5.069442437365223]
Deep neural networks are known to be overconfident for abnormal data.
We propose READ (Reconstruction Error Aggregated Detector) to unify inconsistencies from classifier and autoencoder.
Our method reduces the average FPR@95TPR by up to 9.8% compared with previous state-of-the-art OOD detection algorithms.
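READ's core move, combining a classifier signal with an aggregated reconstruction error, might look like the sketch below; the maximum-softmax-probability confidence and the linear weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def read_style_score(classifier, autoencoder, x, w=0.5):
    with torch.no_grad():
        conf = F.softmax(classifier(x), dim=1).max(dim=1).values   # in-dist confidence
        rec = ((autoencoder(x) - x) ** 2).flatten(1).mean(dim=1)   # recon error
    # Low classifier confidence and high reconstruction error both push toward OOD.
    return w * (1.0 - conf) + (1.0 - w) * rec
```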
arXiv Detail & Related papers (2022-06-15T11:30:41Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses jointly constrain the clustering results of mini-batch samples at both the sample and class levels.
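The two views can be expressed as one InfoNCE-style loss applied to the rows (samples) and columns (classes) of the predicted class-distribution matrix; cosine similarity and a shared temperature are assumptions here.

```python
import torch
import torch.nn.functional as F

def info_nce(a, b, tau=0.5):
    # Matching rows of a and b are positive pairs; all other rows are negatives.
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    logits = a @ b.t() / tau
    return F.cross_entropy(logits, torch.arange(a.size(0), device=a.device))

def doubly_contrastive_loss(p_orig, p_aug, tau=0.5):
    # p_orig, p_aug: (batch, classes) class distributions for a sample and
    # its augmented version.
    sample_view = info_nce(p_orig, p_aug, tau)         # rows: per-sample pairs
    class_view = info_nce(p_orig.t(), p_aug.t(), tau)  # columns: per-class pairs
    return sample_view + class_view
```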
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
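One way to read the NDA objective: negative augmentations of real images (e.g., jigsaw-shuffled patches) are handed to the discriminator as extra fakes. The loss form and augmentation choice below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def nda_discriminator_loss(D, x_real, x_fake, x_nda):
    # x_nda: negative augmentations of x_real that fall outside the data
    # support (e.g., shuffled patches); labeled fake alongside G's samples.
    real, fake, nda = D(x_real), D(x_fake), D(x_nda)
    bce = F.binary_cross_entropy_with_logits
    return (bce(real, torch.ones_like(real))
            + bce(fake, torch.zeros_like(fake))
            + bce(nda, torch.zeros_like(nda)))
```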
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- Imbalanced Data Learning by Minority Class Augmentation using Capsule Adversarial Networks [31.073558420480964]
We propose a method to restore balance in imbalanced image datasets by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalescing of capsule-GAN is effective at recognizing highly overlapping classes with far fewer parameters than the convolutional GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
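A speculative sketch of the "mirrored" idea, based only on the summary above: the critic scores the pair (x, x) against the pair (x, reconstruction) under a Wasserstein-style objective, so reconstructions are judged jointly with their inputs. Every detail here is an assumption.

```python
import torch

def mirrored_critic_loss(critic, autoencoder, x):
    # Critic sees channel-concatenated pairs: (x, x) as "real" and
    # (x, x_hat) as "fake", scored Wasserstein-style (gradient penalty omitted).
    with torch.no_grad():
        x_hat = autoencoder(x)
    real_pair = torch.cat([x, x], dim=1)
    fake_pair = torch.cat([x, x_hat], dim=1)
    return critic(fake_pair).mean() - critic(real_pair).mean()
```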
arXiv Detail & Related papers (2020-03-24T08:26:58Z)