Topology-Matching Normalizing Flows for Out-of-Distribution Detection in
Robot Learning
- URL: http://arxiv.org/abs/2311.06481v1
- Date: Sat, 11 Nov 2023 05:09:31 GMT
- Title: Topology-Matching Normalizing Flows for Out-of-Distribution Detection in
Robot Learning
- Authors: Jianxiang Feng, Jongseok Lee, Simon Geisler, Stephan Günnemann,
Rudolph Triebel
- Abstract summary: A powerful approach for Out-of-Distribution (OOD) detection is based on density estimation with Normalizing Flows (NFs).
In this work, we circumvent this topological mismatch using an expressive class-conditional base distribution trained with an information-theoretic objective to match the required topology.
We demonstrate superior results in density estimation and 2D object detection benchmarks in comparison with extensive baselines.
- Score: 38.97407602443256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To facilitate reliable deployments of autonomous robots in the real world,
Out-of-Distribution (OOD) detection capabilities are often required. A powerful
approach for OOD detection is based on density estimation with Normalizing
Flows (NFs). However, we find that prior work with NFs attempts to topologically
match a complex target distribution with naive base distributions, leading to
adverse implications. In this work, we circumvent this topological mismatch
using an expressive class-conditional base distribution trained with an
information-theoretic objective to match the required topology. The proposed
method is widely compatible with existing learned models, incurs no performance
degradation and only minimal computational overhead, and enhances OOD detection
capabilities. We demonstrate superior results on
density estimation and 2D object detection benchmarks in comparison with
extensive baselines. Moreover, we showcase the applicability of the method with
a real-robot deployment.
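To make the recipe in the abstract concrete, here is a minimal sketch of NF-style OOD scoring with a class-conditional Gaussian-mixture base: an input is flagged OOD when its exact log-density under the flow falls below a threshold. The affine map standing in for the trained flow, the fixed class means, and the threshold are all illustrative assumptions, not the paper's architecture or its information-theoretic training objective.

```python
# Minimal sketch: NF-style OOD scoring with a class-conditional Gaussian
# mixture as the base distribution. The affine "flow" and all constants
# are illustrative stand-ins, not the paper's trained model.
import numpy as np
from scipy.stats import multivariate_normal

D, K = 2, 3                                # feature dim, number of classes

A = np.array([[0.5, 0.0], [0.0, 0.5]])     # toy invertible map z = A x + b
b = np.zeros(D)
log_det_J = np.log(abs(np.linalg.det(A)))  # constant Jacobian for affine maps

means = np.array([[3.0, 0.0], [-3.0, 0.0], [0.0, 3.0]])  # one mode per class
priors = np.full(K, 1.0 / K)

def log_density(x):
    """log p(x) = log sum_k pi_k N(f(x); mu_k, I) + log|det df/dx|."""
    z = A @ x + b
    comps = [np.log(priors[k]) + multivariate_normal.logpdf(z, mean=means[k])
             for k in range(K)]
    return np.logaddexp.reduce(comps) + log_det_J

tau = -8.0                        # in practice calibrated on held-out ID data
x_id = np.array([6.0, 0.0])       # maps onto the class-0 mode
x_ood = np.array([40.0, 40.0])    # far from every mode
for name, x in [("ID-like", x_id), ("OOD-like", x_ood)]:
    s = log_density(x)
    print(f"{name}: log p(x) = {s:8.2f} -> {'ID' if s > tau else 'OOD'}")
```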
Related papers
- OAL: Enhancing OOD Detection Using Latent Diffusion [5.357756138014614]
The Outlier Aware Learning (OAL) framework synthesizes OOD training data directly in the latent space.
We introduce a mutual information-based contrastive learning approach that amplifies the distinction between In-Distribution (ID) and collected OOD features.
arXiv Detail & Related papers (2024-06-24T11:01:43Z)
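A hedged sketch of the contrastive ingredient described above: other ID samples act as positives and the synthesized OOD features as negatives in an InfoNCE-style loss. The function name, temperature, and exact loss form are assumptions for illustration; OAL's mutual-information objective may differ.

```python
# Illustrative InfoNCE-style loss separating ID features from synthesized
# OOD features (an assumed form, not necessarily OAL's exact objective).
import torch
import torch.nn.functional as F

def id_ood_contrastive(feat_id, feat_ood, temperature=0.1):
    """Treat other ID samples as positives and OOD features as negatives."""
    z_id = F.normalize(feat_id, dim=1)
    z_ood = F.normalize(feat_ood, dim=1)
    pos = z_id @ z_id.T / temperature          # ID-ID similarities
    neg = z_id @ z_ood.T / temperature         # ID-OOD similarities
    mask = ~torch.eye(len(z_id), dtype=torch.bool)
    pos_sims = pos[mask].view(len(z_id), -1)   # drop self-similarities
    pos_term = pos_sims.logsumexp(dim=1)
    denom = torch.cat([pos_sims, neg], dim=1).logsumexp(dim=1)
    return -(pos_term - denom).mean()          # push OOD out of the ID cluster

torch.manual_seed(0)
loss = id_ood_contrastive(torch.randn(8, 16), torch.randn(8, 16))
print(float(loss))
```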
- Embedding Trajectory for Out-of-Distribution Detection in Mathematical Reasoning [50.84938730450622]
We propose a trajectory-based method, TV score, which uses trajectory volatility for OOD detection in mathematical reasoning.
Our method outperforms all traditional algorithms on GLMs in mathematical reasoning scenarios.
Our method can be extended to more applications with high-density features in output spaces, such as multiple-choice questions.
arXiv Detail & Related papers (2024-05-22T22:22:25Z)
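One plausible instantiation of a trajectory-volatility score, assuming per-layer hidden states for an input are available: smooth layer-to-layer movement suggests ID, erratic movement suggests OOD. The aggregation below (standard deviation of step norms) is an assumption, not necessarily the paper's exact TV score.

```python
# Hedged sketch of a trajectory-volatility OOD score: embeddings of one
# input collected across layers form a path whose erratic movement is
# taken as evidence of OOD (details of the real TV score may differ).
import numpy as np

def tv_score(layer_embeddings):
    """layer_embeddings: (L, D) hidden states of one input across L layers."""
    steps = np.diff(layer_embeddings, axis=0)   # per-layer movement
    step_norms = np.linalg.norm(steps, axis=1)
    return float(np.std(step_norms))            # volatility of the path

rng = np.random.default_rng(0)
smooth = np.cumsum(np.full((12, 8), 0.1), axis=0)      # steady trajectory
erratic = np.cumsum(rng.normal(size=(12, 8)), axis=0)  # volatile trajectory
print(tv_score(smooth), tv_score(erratic))             # low vs. high score
```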
- Feature Density Estimation for Out-of-Distribution Detection via Normalizing Flows [7.91363551513361]
Out-of-distribution (OOD) detection is a critical task for the safe deployment of learning systems in the open-world setting.
We present a fully unsupervised approach which requires no exposure to OOD data, avoiding researcher bias in OOD sample selection.
This post-hoc method can be applied to any pretrained model and involves training a lightweight auxiliary normalizing flow model to perform out-of-distribution detection via density thresholding.
arXiv Detail & Related papers (2024-02-09T16:51:01Z)
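The post-hoc recipe above is simple to sketch end to end: freeze a pretrained encoder, fit a density model on its ID features only, and flag low-density inputs. Here a Gaussian KDE stands in for the paper's lightweight auxiliary flow; the encoder, threshold quantile, and data are illustrative assumptions.

```python
# Hedged sketch of post-hoc feature-density OOD detection. A Gaussian KDE
# stands in for the lightweight normalizing flow the paper trains.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def encoder(x):                  # stand-in for any frozen pretrained model
    return np.tanh(x)

feats_id = encoder(rng.normal(size=(500, 4)))
density = gaussian_kde(feats_id.T)                   # fit on ID features only

tau = np.quantile(density.logpdf(feats_id.T), 0.05)  # ~5% FPR threshold

x_new = rng.normal(size=(1, 4)) + 6.0                # far from the ID data
score = density.logpdf(encoder(x_new).T)[0]
print("OOD" if score < tau else "ID", round(score, 2), round(tau, 2))
```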
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
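A minimal sketch of the energy-based idea: per-node energy is the negative logsumexp of the logits, and a few propagation steps smooth it over the graph. The toy graph, mixing weight, and step count are assumptions, not GNNSafe's exact scheme.

```python
# Hedged sketch of energy-based OOD detection on a graph: low energy for
# confidently classified nodes, with a schematic neighbor-smoothing step.
import numpy as np
from scipy.special import logsumexp

def energy(logits, T=1.0):
    return -T * logsumexp(logits / T, axis=1)

A = np.array([[1, 1, 0, 0],            # toy 4-node graph with self-loops
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)   # row-normalized propagation matrix

logits = np.array([[4.0, 0.1],         # confident nodes -> low energy (ID)
                   [3.5, 0.2],
                   [0.3, 0.2],         # ambiguous nodes -> high energy (OOD)
                   [0.1, 0.1]])
e = energy(logits)
for _ in range(2):                     # a couple of propagation steps
    e = 0.5 * e + 0.5 * P @ e          # mix node energy with neighbors'
print(np.round(e, 2))                  # higher energy -> more likely OOD
```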
- How to Exploit Hyperspherical Embeddings for Out-of-Distribution Detection? [22.519572587827213]
CIDER is a representation learning framework that exploits hyperspherical embeddings for OOD detection.
CIDER establishes superior performance, outperforming the latest rival by 19.36% in FPR95.
arXiv Detail & Related papers (2022-03-08T23:44:01Z)
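A hedged sketch of scoring with hyperspherical embeddings: L2-normalize features and use the maximum cosine similarity to class prototypes, so low similarity indicates OOD. CIDER additionally shapes the embedding space during training, which this sketch omits.

```python
# Illustrative OOD scoring on the unit hypersphere: max cosine similarity
# to class prototypes (the scoring rule is an assumption for illustration).
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

prototypes = unit(rng.normal(size=(10, 32)))   # one prototype per class

def ood_score(embedding):
    z = unit(embedding)
    return float(-np.max(prototypes @ z))      # low max-sim -> high score

z_id = prototypes[3] + 0.1 * rng.normal(size=32)   # near a prototype
z_ood = rng.normal(size=32)                        # random direction
print(ood_score(z_id), ood_score(z_ood))
```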
- WOOD: Wasserstein-based Out-of-Distribution Detection [6.163329453024915]
Training data for deep-neural-network-based classifiers are usually assumed to be sampled from the same distribution as the test data.
When some test samples are drawn from a distribution far from that of the training samples, the trained network tends to make high-confidence predictions for these OOD samples.
We propose a Wasserstein-based out-of-distribution detection (WOOD) method to overcome these challenges.
arXiv Detail & Related papers (2021-12-13T02:35:15Z)
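One illustrative reading of a Wasserstein-based score: measure how far the softmax output is from the nearest one-hot distribution over class indices, so diffuse predictions score high. The ground metric and smoothing below are assumptions, not WOOD's exact formulation.

```python
# Hedged sketch of a Wasserstein-style OOD score: distance from the softmax
# output to the nearest (smoothed) one-hot distribution over class indices.
import numpy as np
from scipy.stats import wasserstein_distance

def wood_score(probs):
    classes = np.arange(len(probs), dtype=float)
    dists = []
    for k in range(len(probs)):
        one_hot = np.full(len(probs), 1e-9)   # smoothed to keep weights valid
        one_hot[k] = 1.0
        dists.append(wasserstein_distance(classes, classes, probs, one_hot))
    return min(dists)   # confident ID predictions sit near some one-hot

p_id = np.array([0.90, 0.05, 0.03, 0.02])    # confident -> small distance
p_ood = np.array([0.30, 0.25, 0.25, 0.20])   # diffuse -> large distance
print(round(wood_score(p_id), 3), round(wood_score(p_ood), 3))
```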
- Triggering Failures: Out-Of-Distribution Detection by Learning from Local Adversarial Attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture, ObsNet, paired with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show it obtains top performance in both speed and accuracy compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
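A schematic of a local adversarial attack of the kind the training scheme above relies on: an FGSM-style perturbation confined to a small patch, which manufactures failure-like inputs an observer network could learn to flag. The toy model, patch location, and step size are assumptions.

```python
# Hedged sketch of a Local Adversarial Attack: FGSM-style perturbation
# applied only inside a small patch of the input image.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
net = torch.nn.Conv2d(3, 2, kernel_size=3, padding=1)    # toy 2-class segmenter

x = torch.rand(1, 3, 16, 16, requires_grad=True)
wrong_target = torch.zeros(1, 16, 16, dtype=torch.long)  # push pixels to class 0
loss = F.cross_entropy(net(x), wrong_target)
loss.backward()

mask = torch.zeros_like(x)
mask[..., 4:10, 4:10] = 1.0                   # perturb only a local patch
x_adv = (x - 0.1 * x.grad.sign() * mask).clamp(0, 1).detach()
print(float((x_adv - x).abs().max()))         # change is confined to the patch
```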
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embeddings from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
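A hedged sketch of the node-versus-neighborhood contrast: score each node by how much it disagrees with a summary of its local context, with a mean aggregator standing in for the paper's GNN encoder. The graph, attributes, and scoring rule are illustrative.

```python
# Illustrative node-vs-context anomaly scoring on an attributed graph;
# a mean aggregator stands in for the learned GNN embedding.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))                  # node attributes
X[5] += 5.0                                  # inject one anomalous node
A = (rng.random((6, 6)) < 0.6).astype(float)
np.fill_diagonal(A, 0.0)
A = np.maximum(A, A.T)                       # undirected adjacency

def anomaly_score(i):
    nbrs = np.flatnonzero(A[i])
    if nbrs.size == 0:                       # isolated node: no context
        return 0.0
    ctx = X[nbrs].mean(axis=0)               # neighborhood summary
    cos = X[i] @ ctx / (np.linalg.norm(X[i]) * np.linalg.norm(ctx))
    return float(-cos)                       # disagreement -> anomalous

print([round(anomaly_score(i), 2) for i in range(6)])
```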
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
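A schematic single training step in the spirit of ALOE, assuming FGSM perturbations and a KL-to-uniform outlier-exposure term; the paper's actual attack, loss weights, and outlier source may differ.

```python
# Hedged sketch of robust training with adversarially perturbed inliers
# and outliers (schematic; not ALOE's exact attack or loss weighting).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(10, 4)
uniform = torch.full((8, 4), 0.25)           # uniform target over 4 classes

def to_uniform_kl(out):
    return F.kl_div(F.log_softmax(out, dim=1), uniform, reduction="batchmean")

def fgsm(x, loss_fn, eps=0.05):
    """Worst-case perturbation of x that increases loss_fn."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x)).backward()
    return (x + eps * x.grad.sign()).detach()

x_in, y_in = torch.randn(8, 10), torch.randint(0, 4, (8,))
x_out = 3.0 * torch.randn(8, 10)             # surrogate outlier batch

x_in_adv = fgsm(x_in, lambda out: F.cross_entropy(out, y_in))
x_out_adv = fgsm(x_out, to_uniform_kl)       # make outliers harder to flag
model.zero_grad()                            # drop grads from the attacks

loss = F.cross_entropy(model(x_in_adv), y_in) + to_uniform_kl(model(x_out_adv))
loss.backward()                              # one robust-training step
print(float(loss))
```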