Interpretable Out-Of-Distribution Detection Using Pattern Identification
- URL: http://arxiv.org/abs/2302.10303v1
- Date: Tue, 24 Jan 2023 15:35:54 GMT
- Title: Interpretable Out-Of-Distribution Detection Using Pattern Identification
- Authors: Romain Xu-Darme (LSL, MRIM), Julien Girard-Satabin (LSL), Darryl Hond, Gabriele Incorvaia, Zakaria Chihani (LSL)
- Abstract summary: Out-of-distribution (OoD) detection for data-based programs is a goal of paramount importance.
Common approaches in the literature tend to train detectors requiring in-distribution (IoD) and OoD validation samples.
We propose to use existing work from the field of explainable AI, namely the PARTICUL pattern identification algorithm, to build more interpretable and robust OoD detectors.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OoD) detection for data-based programs is a goal of
paramount importance. Common approaches in the literature tend to train
detectors requiring in-distribution (IoD) and OoD
validation samples, and/or implement confidence metrics that are often abstract
and therefore difficult to interpret. In this work, we propose to use existing
work from the field of explainable AI, namely the PARTICUL pattern
identification algorithm, in order to build more interpretable and robust OoD
detectors for visual classifiers. Crucially, this approach does not require
retraining the classifier and is tuned directly on the IoD dataset, making it
applicable to domains where OoD does not have a clear definition. Moreover,
pattern identification allows us to provide images from the IoD dataset as
reference points to better explain the confidence scores. We demonstrate that
the detection capabilities of this approach are on par with existing methods
through an extensive benchmark across four datasets and two definitions of OoD.
In particular, we introduce a new benchmark based on perturbations of the IoD
dataset which provides a known and quantifiable evaluation of the discrepancy
between the IoD and OoD datasets that serves as a reference value for the
comparison between various OoD detection methods. Our experiments show that the
robustness of all metrics under test does not solely depend on the nature of
the IoD dataset or the OoD definition, but also on the architecture of the
classifier, which stresses the need for thorough experimentation in future
work on OoD detection.
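To make the approach concrete, here is a minimal sketch (PyTorch, all names hypothetical) of the kind of detector the abstract describes: a handful of pattern detectors, implemented as 1x1 convolutions over the frozen classifier's feature maps, whose calibrated maximum responses yield a per-sample confidence score. The calibration uses only IoD statistics, matching the claim that no OoD samples are needed; the simple mean/std normalisation below stands in for PARTICUL's actual training losses and calibration.

```python
import torch


class PatternConfidence(torch.nn.Module):
    """Sketch of a PARTICUL-style OoD scorer on top of a frozen classifier."""

    def __init__(self, feat_dim: int, n_patterns: int = 4):
        super().__init__()
        # One 1x1 convolution kernel per recurring pattern, trained on IoD
        # data only; the classifier itself is never retrained.
        self.detectors = torch.nn.Conv2d(feat_dim, n_patterns, kernel_size=1)
        # Per-pattern calibration statistics, estimated on the IoD dataset.
        self.register_buffer("mu", torch.zeros(n_patterns))
        self.register_buffer("sigma", torch.ones(n_patterns))

    @torch.no_grad()
    def calibrate(self, iod_feats: torch.Tensor) -> None:
        # Fit the distribution of maximum detector responses on IoD features
        # of shape (N, C, H, W); no OoD validation samples are required.
        vmax = self.detectors(iod_feats).flatten(2).max(dim=2).values
        self.mu.copy_(vmax.mean(dim=0))
        self.sigma.copy_(vmax.std(dim=0).clamp_min(1e-6))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) activations from the frozen classifier.
        vmax = self.detectors(feats).flatten(2).max(dim=2).values  # (B, p)
        # Calibrated per-pattern confidence in (0, 1); the sample score is
        # the mean over patterns, and low values suggest an OoD input.
        return torch.sigmoid((vmax - self.mu) / self.sigma).mean(dim=1)
```

A sample is then flagged as OoD when its score falls below a threshold chosen on IoD data alone, and the IoD training images that maximise each detector can be shown alongside the score as the reference points mentioned above.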
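The perturbation-based benchmark can be pictured the same way: starting from IoD images, apply perturbations whose magnitude is known by construction, so the IoD/OoD discrepancy is quantifiable. Gaussian noise is only an illustrative choice here; the abstract does not specify which perturbations the paper uses.

```python
import torch


def perturbation_benchmark(iod_images: torch.Tensor,
                           sigmas=(0.05, 0.1, 0.2, 0.4)) -> dict:
    """Build pseudo-OoD sets from IoD images (values in [0, 1]) by adding
    Gaussian noise; the noise level sigma is the known, quantifiable
    discrepancy between the IoD and OoD sets."""
    return {
        sigma: (iod_images + sigma * torch.randn_like(iod_images)).clamp(0, 1)
        for sigma in sigmas
    }
```

Each detection method is then scored (e.g. by AUROC) against the clean IoD set at every perturbation level, and the known magnitude serves as the reference value when comparing methods across datasets and classifier architectures.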
Related papers
- On the Inherent Robustness of One-Stage Object Detection against Out-of-Distribution Data [6.7236795813629]
We propose a novel detection algorithm for detecting unknown objects in image data.
It exploits supervised dimensionality reduction techniques to mitigate the effects of the curse of dimensionality on the features extracted by the model.
It utilizes high-resolution feature maps to identify potential unknown objects in an unsupervised fashion.
arXiv Detail & Related papers (2024-11-07T10:15:25Z)
- Beyond Perceptual Distances: Rethinking Disparity Assessment for Out-of-Distribution Detection with Diffusion Models [28.96695036746856]
Out-of-Distribution (OoD) detection aims to determine whether a given sample is from the training distribution of the classifier-under-protection.
DM-based methods bring fresh insights to the field, yet remain under-explored.
Our work demonstrates state-of-the-art detection performance among DM-based methods in extensive experiments.
arXiv Detail & Related papers (2024-09-16T08:50:47Z)
- Bayesian Detector Combination for Object Detection with Crowdsourced Annotations [49.43709660948812]
Acquiring fine-grained object detection annotations in unconstrained images is time-consuming, expensive, and prone to noise.
We propose a novel Bayesian Detector Combination (BDC) framework to more effectively train object detectors with noisy crowdsourced annotations.
BDC is model-agnostic, requires no prior knowledge of the annotators' skill level, and seamlessly integrates with existing object detection models.
arXiv Detail & Related papers (2024-07-10T18:00:54Z)
- Contextualised Out-of-Distribution Detection using Pattern Identification [0.0]
CODE is an extension of existing work from the field of explainable AI.
It identifies class-specific recurring patterns to build a robust Out-of-Distribution (OoD) detection method.
arXiv Detail & Related papers (2023-10-24T07:55:09Z)
- CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, using only the intrinsic deepfake-related information for real/fake discrimination.
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- Raising the Bar on the Evaluation of Out-of-Distribution Detection [88.70479625837152]
We define two categories of OoD data using the subtly different concepts of perceptual/visual and semantic similarity to in-distribution (iD) data.
We propose a GAN based framework for generating OoD samples from each of these two categories, given an iD dataset.
We show that state-of-the-art OoD detection methods which perform exceedingly well on conventional benchmarks are significantly less robust to our proposed benchmark.
arXiv Detail & Related papers (2022-09-24T08:48:36Z)
- Learning by Erasing: Conditional Entropy based Transferable Out-Of-Distribution Detection [17.31471594748061]
Out-of-distribution (OOD) detection is essential to handle the distribution shifts between training and test scenarios.
Existing methods require retraining to capture the dataset-specific feature representation or data distribution.
We propose a deep generative model (DGM) based transferable OOD detection method that does not require retraining on a new ID dataset.
arXiv Detail & Related papers (2022-04-23T10:19:58Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- NADS: Neural Architecture Distribution Search for Uncertainty Awareness [79.18710225716791]
Machine learning (ML) systems often encounter Out-of-Distribution (OoD) errors when dealing with testing data coming from a distribution different from training data.
Existing OoD detection approaches are prone to errors and sometimes even assign higher likelihoods to OoD samples than to in-distribution data.
We propose Neural Architecture Distribution Search (NADS) to identify common building blocks among all uncertainty-aware architectures.
arXiv Detail & Related papers (2020-06-11T17:39:07Z)
- Stance Detection Benchmark: How Robust Is Your Stance Detection? [65.91772010586605]
Stance Detection (StD) aims to detect an author's stance towards a certain topic or claim.
We introduce a StD benchmark that learns from ten StD datasets of various domains in a multi-dataset learning setting.
Within this benchmark setup, we are able to present new state-of-the-art results on five of the datasets.
arXiv Detail & Related papers (2020-01-06T13:37:51Z)