Deep Residual Flow for Out of Distribution Detection
- URL: http://arxiv.org/abs/2001.05419v3
- Date: Sun, 19 Jul 2020 17:44:12 GMT
- Title: Deep Residual Flow for Out of Distribution Detection
- Authors: Ev Zisselman and Aviv Tamar
- Abstract summary: We present a novel approach that improves upon the state-of-the-art by leveraging an expressive density model based on normalizing flows.
We demonstrate the effectiveness of our method in ResNet and DenseNet architectures trained on various image datasets.
- Score: 27.218308616245164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The effective application of neural networks in the real world relies on
proficiently detecting out-of-distribution examples. Contemporary methods seek
to model the distribution of feature activations in the training data for
adequately distinguishing abnormalities, and the state-of-the-art method uses
Gaussian distribution models. In this work, we present a novel approach that
improves upon the state-of-the-art by leveraging an expressive density model
based on normalizing flows. We introduce the residual flow, a novel flow
architecture that learns the residual distribution from a base Gaussian
distribution. Our model is general and can be applied to any data that is
approximately Gaussian. For out-of-distribution detection in image datasets,
our approach provides a principled improvement over the state-of-the-art.
Specifically, we demonstrate the effectiveness of our method in ResNet and
DenseNet architectures trained on various image datasets. For example, on a
ResNet trained on CIFAR-100 and evaluated on detection of out-of-distribution
samples from the ImageNet dataset, holding the true positive rate (TPR) at
$95\%$, we improve the true negative rate (TNR) from $56.7\%$ (current
state-of-the-art) to $77.5\%$ (ours).
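To make the recipe concrete, below is a minimal sketch of the pipeline the abstract describes: fit a normalizing flow to in-distribution feature vectors by maximum likelihood, score test samples by log-likelihood, and report the TNR at a fixed 95% TPR. The RealNVP-style coupling layers, synthetic features, and hyperparameters are illustrative stand-ins, not the authors' exact residual-flow architecture.

```python
# Sketch: flow-based density scoring of penultimate-layer features for OOD
# detection, plus the TNR-at-95%-TPR metric quoted in the abstract.
# Illustrative only; not the paper's exact residual-flow architecture.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling layer: transforms half the dimensions
    conditioned on the other half; log-det is the sum of log-scales."""
    def __init__(self, dim, hidden=128, flip=False):
        super().__init__()
        assert dim % 2 == 0
        self.flip = flip
        self.net = nn.Sequential(nn.Linear(dim // 2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        if self.flip:
            x1, x2 = x2, x1
        log_s, t = self.net(x1).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)            # keep scales well-conditioned
        y2 = x2 * log_s.exp() + t
        y = torch.cat([y2, x1] if self.flip else [x1, y2], dim=-1)
        return y, log_s.sum(dim=-1)

class Flow(nn.Module):
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            AffineCoupling(dim, flip=i % 2 == 1) for i in range(n_layers))
        self.base = torch.distributions.Normal(0.0, 1.0)

    def log_prob(self, x):                   # change-of-variables formula
        log_det = torch.zeros(x.shape[0])
        for layer in self.layers:
            x, ld = layer(x)
            log_det = log_det + ld
        return self.base.log_prob(x).sum(dim=-1) + log_det

def tnr_at_95_tpr(scores_in, scores_out):
    """Accept the top 95% of in-distribution scores; TNR is the fraction
    of OOD samples falling below that threshold."""
    thresh = torch.quantile(scores_in, 0.05)
    return (scores_out < thresh).float().mean().item()

dim = 64
feats_in = torch.randn(2048, dim)            # stand-in for ResNet features
feats_out = 1.5 * torch.randn(512, dim) + 1  # stand-in for OOD features
flow = Flow(dim)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for _ in range(200):                         # maximize in-dist likelihood
    opt.zero_grad()
    (-flow.log_prob(feats_in).mean()).backward()
    opt.step()
with torch.no_grad():
    print("TNR@TPR95:", tnr_at_95_tpr(flow.log_prob(feats_in),
                                      flow.log_prob(feats_out)))
```

Fixing the threshold at the 5th percentile of in-distribution scores is exactly what "holding the TPR at 95%" means, so the printed number is directly comparable to the TNR figures quoted above.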
Related papers
- Robust Representation Consistency Model via Contrastive Denoising [83.47584074390842]
Randomized smoothing provides theoretical guarantees for certifying robustness against adversarial perturbations.
Diffusion models have been successfully employed for randomized smoothing to purify noise-perturbed samples.
We reformulate the generative modeling task along the diffusion trajectories in pixel space as a discriminative task in the latent space.
arXiv Detail & Related papers (2025-01-22T18:52:06Z)
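Since the entry above builds on randomized smoothing, here is a minimal sketch of that certification tool's prediction rule: a majority vote over Gaussian-perturbed copies of the input. `base_classifier` is a hypothetical stand-in for any trained network; the certified-radius computation (a Gaussian-CDF bound on the vote margin) is omitted.

```python
# Randomized smoothing, prediction side only: classify by majority vote
# over n Gaussian-perturbed copies of the input.
import torch

def smoothed_predict(base_classifier, x, sigma=0.25, n=100):
    """Monte Carlo estimate of g(x) = argmax_c P(f(x + N(0, sigma^2 I)) = c)."""
    noisy = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape)
    votes = base_classifier(noisy).argmax(dim=-1)
    return torch.mode(votes).values.item()

clf = torch.nn.Linear(32, 10)   # toy stand-in for a trained classifier
x = torch.randn(32)
print(smoothed_predict(clf, x))
```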
- Local Flow Matching Generative Models [19.859984725284896]
Local Flow Matching is a computational framework for density estimation based on flow-based generative models.
LFM employs a simulation-free scheme and incrementally learns a sequence of Flow Matching sub-models.
We demonstrate the improved training efficiency and competitive generative performance of LFM compared to FM.
arXiv Detail & Related papers (2024-10-03T14:53:10Z)
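For background on the entry above, the following sketches the standard simulation-free Flow Matching objective that each LFM sub-model would apply between consecutive distributions: regress a velocity field onto straight-line interpolant targets. This is plain FM on toy data, not LFM's incremental scheme.

```python
# Simulation-free (conditional) Flow Matching: learn v(x_t, t) by
# regressing onto the constant velocity x1 - x0 of the linear
# interpolant x_t = (1 - t) * x0 + t * x1.
import torch
import torch.nn as nn

dim = 2
vfield = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))
opt = torch.optim.Adam(vfield.parameters(), lr=1e-3)

def fm_step(x1):
    x0 = torch.randn_like(x1)                 # source: standard Gaussian
    t = torch.rand(x1.shape[0], 1)            # random time per sample
    xt = (1 - t) * x0 + t * x1
    loss = ((vfield(torch.cat([xt, t], dim=-1)) - (x1 - x0)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

data = 0.5 * torch.randn(256, dim) + torch.tensor([2.0, 0.0])  # toy target
for step in range(5):
    print(fm_step(data))
```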
- Integrating Amortized Inference with Diffusion Models for Learning Clean Distribution from Corrupted Images [19.957503854446735]
Diffusion models (DMs) have emerged as powerful generative models for solving inverse problems.
FlowDiff is a joint training paradigm that leverages a conditional normalizing flow model to facilitate the training of diffusion models on corrupted data sources.
Our experiment shows that FlowDiff can effectively learn clean distributions across a wide range of corrupted data sources.
arXiv Detail & Related papers (2024-07-15T18:33:20Z)
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
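The rejection mechanics the entry above describes are easy to sketch: abstain whenever an estimated density ratio falls below a threshold. Estimating that ratio is the paper's actual contribution; the lambda below is a hypothetical stub.

```python
# Classification with rejection via a thresholded score: predict a class,
# or return -1 (abstain) when the estimated density ratio is too low.
import torch

def predict_or_reject(classifier, ratio_estimate, x, tau=0.5):
    preds = classifier(x).argmax(dim=-1)
    reject = ratio_estimate(x) < tau
    return torch.where(reject, torch.full_like(preds, -1), preds)

clf = torch.nn.Linear(8, 3)                            # toy classifier
ratio = lambda x: torch.sigmoid(x.norm(dim=-1) - 2.0)  # hypothetical stub
print(predict_or_reject(clf, ratio, torch.randn(5, 8)))
```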
- Boundary-aware Decoupled Flow Networks for Realistic Extreme Rescaling [49.215957313126324]
Recently developed generative methods, including invertible rescaling network (IRN) based and generative adversarial network (GAN) based methods, have demonstrated exceptional performance in image rescaling.
However, IRN-based methods tend to produce over-smoothed results, while GAN-based methods easily generate fake details.
We propose Boundary-aware Decoupled Flow Networks (BDFlow) to generate realistic and visually pleasing results.
arXiv Detail & Related papers (2024-05-05T14:05:33Z)
- Projection Regret: Reducing Background Bias for Novelty Detection via Diffusion Models [72.07462371883501]
We propose Projection Regret (PR), an efficient novelty detection method that mitigates the bias of non-semantic information.
PR computes the perceptual distance between the test image and its diffusion-based projection to detect abnormality.
Extensive experiments demonstrate that PR outperforms the prior art of generative-model-based novelty detection methods by a significant margin.
arXiv Detail & Related papers (2023-12-05T09:44:47Z)
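A heavily simplified sketch of the scoring rule the Projection Regret entry describes: the novelty score is the distance between an image and its projection. Here a blur is a hypothetical stand-in for the diffusion-based projection (noise the image, then denoise it with a pretrained model), and plain squared error stands in for the perceptual distance used in the paper.

```python
# Novelty score = distance between an image and its projection; higher
# means more likely out-of-distribution.
import torch

def novelty_score(x, project):
    return ((x - project(x)) ** 2).flatten(1).sum(dim=-1)

blur = torch.nn.AvgPool2d(3, stride=1, padding=1)  # stand-in projection
x = torch.rand(4, 3, 32, 32)
print(novelty_score(x, blur))
```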
- GSURE-Based Diffusion Model Training with Corrupted Data [35.56267114494076]
We propose a novel training technique for generative diffusion models based only on corrupted data.
We demonstrate our technique on face images as well as Magnetic Resonance Imaging (MRI).
arXiv Detail & Related papers (2023-05-22T15:27:20Z)
- DC4L: Distribution Shift Recovery via Data-Driven Control for Deep Learning Models [4.374569172244273]
We propose a control-based approach that allows learned models to recover from distribution shifts online.
Our method applies a sequence of semantic-preserving transformations to bring the shifted data closer in distribution to the training set.
We show that our method generalizes to composites of shifts from the ImageNet-C benchmark, achieving improvements in average accuracy of up to 9.81%.
arXiv Detail & Related papers (2023-02-20T22:06:26Z)
- Fake It Till You Make It: Near-Distribution Novelty Detection by Score-Based Generative Models [54.182955830194445]
Existing models either fail or suffer a dramatic performance drop under the so-called "near-distribution" setting.
We propose to exploit a score-based generative model to produce synthetic near-distribution anomalous data.
Our method improves near-distribution novelty detection by 6% and surpasses the state-of-the-art by 1% to 5% across nine novelty detection benchmarks.
arXiv Detail & Related papers (2022-05-28T02:02:53Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
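ATC as summarized above fits in a few lines: choose the threshold on labeled source data so that the fraction of confidences above it equals the source accuracy, then report the fraction of unlabeled target confidences above that threshold as the predicted target accuracy. The confidences below are synthetic, and the paper also studies scores other than maximum confidence.

```python
# Average Thresholded Confidence (ATC), sketched with synthetic numbers.
import numpy as np

def atc_fit(source_conf, source_correct):
    """Choose t so that mean(source_conf > t) equals source accuracy."""
    return np.quantile(source_conf, 1.0 - source_correct.mean())

def atc_predict(target_conf, t):
    return (target_conf > t).mean()       # predicted target accuracy

rng = np.random.default_rng(0)
src_conf = rng.beta(5, 2, size=10_000)                       # source confidences
src_correct = (rng.random(10_000) < src_conf).astype(float)  # calibrated-ish
t = atc_fit(src_conf, src_correct)
tgt_conf = rng.beta(4, 3, size=5_000)                        # shifted target
print("predicted target accuracy:", atc_predict(tgt_conf, t))
```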
- Transfer Learning Gaussian Anomaly Detection by Fine-Tuning Representations [3.5031508291335625]
Catastrophic forgetting prevents the successful fine-tuning of pre-trained representations on new datasets.
We propose a new method to fine-tune learned representations for AD in a transfer learning setting.
We additionally propose to use augmentations commonly employed for vicinal risk minimization in a validation scheme to detect the onset of catastrophic forgetting.
arXiv Detail & Related papers (2021-08-09T15:29:04Z)
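Both this last entry and the Gaussian baseline that the main paper improves upon score anomalies with a Gaussian density over features, i.e., a Mahalanobis distance. Below is a minimal sketch of that shared baseline on synthetic features; the fine-tuning scheme that is the entry's actual contribution is out of scope.

```python
# Gaussian anomaly detection: fit mean/covariance on nominal features,
# score test features by squared Mahalanobis distance (higher = more
# anomalous).
import numpy as np

def fit_gaussian(feats):
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_score(feats, mu, prec):
    d = feats - mu
    return np.einsum('ij,jk,ik->i', d, prec, d)

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 16))                  # nominal features
test = np.vstack([rng.normal(size=(5, 16)),          # nominal-like
                  rng.normal(3.0, 1.0, (5, 16))])    # anomalous
mu, prec = fit_gaussian(train)
print(mahalanobis_score(test, mu, prec))
```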
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.