On Out-of-Distribution Detection for Audio with Deep Nearest Neighbors
- URL: http://arxiv.org/abs/2210.15283v1
- Date: Thu, 27 Oct 2022 09:35:33 GMT
- Title: On Out-of-Distribution Detection for Audio with Deep Nearest Neighbors
- Authors: Zaharah Bukhsh, Aaqib Saeed
- Abstract summary: Out-of-distribution (OOD) detection is concerned with identifying data points that do not belong to the same distribution as the model's training data.
We show that this simple and flexible method effectively detects OOD inputs across a broad category of audio (and speech) datasets.
- Score: 3.591566487849146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Out-of-distribution (OOD) detection is concerned with identifying data points
that do not belong to the same distribution as the model's training data. For
the safe deployment of predictive models in a real-world environment, it is
critical to avoid making confident predictions on OOD inputs as it can lead to
potentially dangerous consequences. However, OOD detection largely remains an
under-explored area in the audio (and speech) domain. This is despite the fact
that audio is a central modality for many tasks, such as speaker diarization,
automatic speech recognition, and sound event detection. To address this, we
propose to leverage the feature space of the model with deep k-nearest neighbors to
detect OOD samples. We show that this simple and flexible method effectively
detects OOD inputs across a broad category of audio (and speech) datasets.
Specifically, it improves the false positive rate (FPR@TPR95) by 17% and the
AUROC score by 7% compared to prior techniques.
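The approach described in the abstract reduces to a simple non-parametric test: embed each clip with the trained model, find its k-th nearest neighbor among the (normalized) in-distribution training embeddings, and flag it as OOD if that distance is too large. The sketch below illustrates this under stated assumptions; `embed_fn`, the value of k, and thresholding at the 95%-TPR operating point are illustrative choices, not the authors' exact configuration.

```python
# Minimal sketch of deep k-NN OOD scoring on audio embeddings.
# Assumption: `embed_fn` is any trained audio encoder that maps a batch of
# clips to fixed-size feature vectors; k and the threshold are illustrative.
import numpy as np

def l2_normalize(x, eps=1e-10):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def fit_knn_bank(train_features):
    """Store L2-normalized in-distribution (ID) training embeddings."""
    return l2_normalize(np.asarray(train_features, dtype=np.float32))

def knn_ood_score(bank, test_features, k=50):
    """Negative distance to the k-th nearest ID embedding (higher = more ID-like)."""
    z = l2_normalize(np.asarray(test_features, dtype=np.float32))
    dists = np.linalg.norm(z[:, None, :] - bank[None, :, :], axis=-1)
    kth = np.partition(dists, k - 1, axis=1)[:, k - 1]
    return -kth

# Usage sketch: choose the threshold so ~95% of held-out ID clips are accepted.
# bank      = fit_knn_bank(embed_fn(id_train_clips))
# scores_id = knn_ood_score(bank, embed_fn(id_val_clips))
# tau       = np.quantile(scores_id, 0.05)      # accept if score >= tau
# is_ood    = knn_ood_score(bank, embed_fn(test_clips)) < tau
```

In practice the search over the embedding bank is usually done with an approximate nearest-neighbor index rather than the dense distance matrix shown here; the quantile-based threshold mirrors the FPR@TPR95 operating point quoted in the abstract.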
Related papers
- FADEL: Uncertainty-aware Fake Audio Detection with Evidential Deep Learning [9.960675988638805]
We propose a novel framework called fake audio detection with evidential learning (FADEL).
FADEL incorporates model uncertainty into its predictions, thereby leading to more robust performance in OOD scenarios.
We demonstrate the validity of uncertainty estimation by analyzing a strong correlation between average uncertainty and equal error rate (EER) across different spoofing algorithms.
arXiv Detail & Related papers (2025-04-22T07:40:35Z) - A noisy elephant in the room: Is your out-of-distribution detector robust to label noise? [49.88894124047644]
We take a closer look at 20 state-of-the-art OOD detection methods.
We show that poor separation between incorrectly classified ID samples and OOD samples is an overlooked yet important limitation of existing methods.
arXiv Detail & Related papers (2024-04-02T09:40:22Z) - EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
arXiv Detail & Related papers (2023-12-14T13:47:13Z) - Do You Remember? Overcoming Catastrophic Forgetting for Fake Audio Detection [54.20974251478516]
We propose a continual learning algorithm for fake audio detection to overcome catastrophic forgetting.
When fine-tuning a detection network, our approach adaptively computes the direction of weight modification according to the ratio of genuine to fake utterances.
Our method can easily be generalized to related fields, like speech emotion recognition.
arXiv Detail & Related papers (2023-08-07T05:05:49Z) - Beyond AUROC & co. for evaluating out-of-distribution detection performance [50.88341818412508]
Given their relevance for safe(r) AI, it is important to examine whether the basis for comparing OOD detection methods is consistent with practical needs.
We propose a new metric, the Area Under the Threshold Curve (AUTC), which explicitly penalizes poor separation between ID and OOD samples (a sketch of this style of threshold metric appears after this list).
arXiv Detail & Related papers (2023-06-26T12:51:32Z) - Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection [28.810524375810736]
Out-of-distribution (OOD) detection is a critical task for reliable predictions over text.
Fine-tuning pre-trained language models has been the de facto procedure for deriving OOD detectors.
We show that using distance-based detection methods, pre-trained language models are near-perfect OOD detectors when the distribution shift involves a domain change.
arXiv Detail & Related papers (2023-05-22T17:42:44Z) - On the Usefulness of Deep Ensemble Diversity for Out-of-Distribution Detection [7.221206118679026]
The ability to detect Out-of-Distribution (OOD) data is important in safety-critical applications of deep learning.
An existing intuition in the literature is that the diversity of Deep Ensemble predictions indicates distributional shift.
We show experimentally that this intuition is not valid on ImageNet-scale OOD detection.
arXiv Detail & Related papers (2022-07-15T15:02:38Z) - Augmenting Softmax Information for Selective Classification with Out-of-Distribution Data [7.221206118679026]
We show that, for selective classification with out-of-distribution data (SCOD), existing post-hoc methods perform quite differently than when evaluated only on OOD detection.
We propose a novel method for SCOD, Softmax Information Retaining Combination (SIRC), that augments softmax-based confidence scores with feature-agnostic information.
Experiments on a wide variety of ImageNet-scale datasets and convolutional neural network architectures show that SIRC is able to consistently match or outperform the baseline for SCOD.
arXiv Detail & Related papers (2022-07-15T14:39:57Z) - Metric Learning and Adaptive Boundary for Out-of-Domain Detection [0.9236074230806579]
We have designed an OOD detection algorithm independent of OOD data.
Our algorithm is based on a simple but efficient approach that combines metric learning with an adaptive decision boundary.
Compared to other algorithms, we find that our proposed approach significantly improves OOD performance in scenarios with fewer classes.
arXiv Detail & Related papers (2022-04-22T17:54:55Z) - Out-of-distribution Detection with Deep Nearest Neighbors [33.71627349163909]
Out-of-distribution (OOD) detection is a critical task for deploying machine learning models in the open world.
In this paper, we explore the efficacy of non-parametric nearest-neighbor distance for OOD detection.
We demonstrate the effectiveness of nearest-neighbor-based OOD detection on several benchmarks and establish superior performance.
arXiv Detail & Related papers (2022-04-13T16:45:21Z) - Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show that it obtains top performance in both speed and accuracy when compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z) - Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs with small adversarial perturbations.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
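For reference, the headline quantities used above (FPR at 95% TPR and AUROC for the lead paper, plus a threshold-curve area in the spirit of the AUTC entry) can be computed from arrays of detector scores roughly as follows. This is a hedged sketch under the convention that higher scores mean more in-distribution; in particular, the threshold-curve function is one plausible reading of an AUTC-style metric, not necessarily the cited paper's exact definition.

```python
# Sketch of common OOD evaluation metrics from ID/OOD detector scores.
# Convention assumed here: higher score = more in-distribution (ID).
import numpy as np
from sklearn.metrics import roc_auc_score

def fpr_at_tpr(scores_id, scores_ood, tpr=0.95):
    """Fraction of OOD samples accepted at the threshold that keeps `tpr` of ID samples."""
    tau = np.quantile(scores_id, 1.0 - tpr)
    return float(np.mean(scores_ood >= tau))

def auroc(scores_id, scores_ood):
    """Area under the ROC curve with ID as the positive class."""
    labels = np.concatenate([np.ones(len(scores_id)), np.zeros(len(scores_ood))])
    scores = np.concatenate([scores_id, scores_ood])
    return float(roc_auc_score(labels, scores))

def threshold_curve_area(scores_id, scores_ood, n=1000):
    """Mean area under FPR(t) and FNR(t) over the score range -- an AUTC-style
    separation measure (the cited paper's exact normalization may differ)."""
    lo = float(min(scores_id.min(), scores_ood.min()))
    hi = float(max(scores_id.max(), scores_ood.max()))
    ts = np.linspace(lo, hi, n)
    fpr = np.array([np.mean(scores_ood >= t) for t in ts])  # OOD wrongly accepted
    fnr = np.array([np.mean(scores_id < t) for t in ts])    # ID wrongly rejected
    x = (ts - lo) / (hi - lo + 1e-12)
    trapz = lambda y: float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)
    return 0.5 * (trapz(fpr) + trapz(fnr))
```

A well-separated detector drives FPR(t) and FNR(t) toward a sharp step, so all three numbers improve together; the 17% FPR@TPR95 and 7% AUROC gains quoted in the lead abstract correspond to the first two functions.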
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.