On the Powerfulness of Textual Outlier Exposure for Visual OoD Detection
- URL: http://arxiv.org/abs/2310.16492v1
- Date: Wed, 25 Oct 2023 09:19:45 GMT
- Title: On the Powerfulness of Textual Outlier Exposure for Visual OoD Detection
- Authors: Sangha Park, Jisoo Mok, Dahuin Jung, Saehyung Lee, Sungroh Yoon
- Abstract summary: Outlier exposure introduces an additional loss that encourages low-confidence predictions on OoD data during training.
This paper explores the benefits of using textual outliers by replacing real or virtual outliers in the image domain with textual equivalents.
Our experiments demonstrate that generated textual outliers achieve competitive performance on large-scale OoD and hard OoD benchmarks.
- Score: 41.277221429527515
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Successful detection of Out-of-Distribution (OoD) data is becoming
increasingly important to ensure safe deployment of neural networks. One of the
main challenges in OoD detection is that neural networks output overconfident
predictions on OoD data, making it difficult to determine OoD-ness of data solely
based on their predictions. Outlier exposure addresses this issue by
introducing an additional loss that encourages low-confidence predictions on
OoD data during training. While outlier exposure has shown promising potential
in improving OoD detection performance, all previous studies on outlier
exposure have been limited to utilizing visual outliers. Drawing inspiration
from the recent advancements in vision-language pre-training, this paper
ventures into the uncharted territory of textual outlier exposure. First, we
uncover the benefits of using textual outliers by replacing real or virtual
outliers in the image domain with textual equivalents. Then, we propose various
ways of generating preferable textual outliers. Our extensive experiments
demonstrate that generated textual outliers achieve competitive performance on
large-scale OoD and hard OoD benchmarks. Furthermore, we conduct empirical
analyses of textual outliers to provide primary criteria for designing
advantageous textual outliers: near-distribution, descriptiveness, and
inclusion of visual semantics.
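As a concrete illustration of the outlier-exposure objective described in the abstract, here is a minimal PyTorch-style sketch assuming a CLIP-like shared embedding space; the class prototypes, embeddings, and loss weight are illustrative placeholders rather than the paper's exact formulation.

```python
# Minimal sketch of outlier exposure (OE) with textual outliers, assuming a
# CLIP-like model whose image and text encoders share an embedding space.
# All tensors and hyperparameters below are illustrative placeholders.
import torch
import torch.nn.functional as F

def oe_loss(id_logits, id_labels, outlier_logits, lam=0.5):
    """Cross-entropy on in-distribution (ID) data plus a term that pushes
    outlier predictions toward the uniform distribution (low confidence)."""
    ce = F.cross_entropy(id_logits, id_labels)
    # Cross-entropy to the uniform distribution == -mean log-softmax over classes.
    uniform_term = -F.log_softmax(outlier_logits, dim=-1).mean()
    return ce + lam * uniform_term

# Textual outlier exposure replaces outlier *images* with outlier *text*:
# encoded outlier captions are scored against the same class prototypes,
# so no auxiliary image dataset is required.
num_classes, dim = 10, 512
class_prototypes = torch.randn(num_classes, dim)                  # e.g. encoded class names
image_feats = F.normalize(torch.randn(32, dim), dim=-1)           # ID image embeddings
labels = torch.randint(0, num_classes, (32,))
text_outlier_feats = F.normalize(torch.randn(16, dim), dim=-1)    # encoded outlier captions

loss = oe_loss(image_feats @ class_prototypes.t(), labels,
               text_outlier_feats @ class_prototypes.t())
```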
Related papers
- Improving Harmful Text Detection with Joint Retrieval and External Knowledge [16.68620974551506]
This study proposes a joint retrieval framework that integrates pre-trained language models with knowledge graphs to improve the accuracy and robustness of harmful text detection.
Experimental results demonstrate that the joint retrieval approach significantly outperforms single-model baselines.
arXiv Detail & Related papers (2025-04-03T06:37:55Z)
- RODEO: Robust Outlier Detection via Exposing Adaptive Out-of-Distribution Samples [4.76428036044684]
We introduce RODEO, a data-centric approach that generates effective outliers for robust outlier detection.
We show that incorporating outlier exposure (OE) and adversarial training can be an effective strategy for this purpose.
We demonstrate both quantitatively and qualitatively that our adaptive OE method effectively generates "diverse" and "near-distribution" outliers.
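The entry above describes combining outlier exposure with adversarial training; below is a hedged, generic sketch of that combination using PGD-style input perturbations. It is not the authors' outlier-generation pipeline, and the attack budget and loss weight are illustrative.

```python
# Generic sketch of outlier exposure combined with adversarial training.
# The perturbation budget, step size, and weighting are illustrative.
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, loss_fn, eps=8 / 255, alpha=2 / 255, steps=4):
    """Gradient ascent on loss_fn with respect to the input (PGD-style)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta))
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()
            delta.clamp_(-eps, eps)
    return (x + delta).detach()

def robust_oe_loss(model, x_id, y_id, x_out, lam=0.5):
    # Perturb ID inputs to maximize the classification loss.
    x_id_adv = pgd_perturb(model, x_id, lambda logits: F.cross_entropy(logits, y_id))
    # Perturb outliers to maximize the OE term, i.e. toward confident predictions,
    # the hardest case for keeping outlier confidence low.
    x_out_adv = pgd_perturb(model, x_out, lambda logits: -F.log_softmax(logits, -1).mean())
    ce = F.cross_entropy(model(x_id_adv), y_id)
    uniform_term = -F.log_softmax(model(x_out_adv), -1).mean()
    return ce + lam * uniform_term
```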
arXiv Detail & Related papers (2025-01-28T14:13:17Z)
- Towards Robust Out-of-Distribution Generalization: Data Augmentation and Neural Architecture Search Approaches [4.577842191730992]
We study ways toward robust OoD generalization for deep learning.
We first propose a novel and effective approach to disentangle the spurious correlation between features that are not essential for recognition.
We then study the problem of strengthening neural architecture search in OoD scenarios.
arXiv Detail & Related papers (2024-10-25T20:50:32Z)
- Forward-Forward Learning achieves Highly Selective Latent Representations for Out-of-Distribution Detection in Fully Spiking Neural Networks [6.7236795813629]
Spiking Neural Networks (SNNs), inspired by biological systems, offer a promising avenue for overcoming limitations.
In this work, we explore the potential of the spiking Forward-Forward Algorithm (FFA) to address these challenges.
We propose a novel, gradient-free attribution method to detect features that drive a sample away from class distributions.
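Forward-Forward training scores each layer by a "goodness" measure (the sum of squared activations); the sketch below illustrates that measure and its use as a simple OoD score. It assumes a plain feed-forward network and does not reproduce the paper's spiking implementation or its attribution method.

```python
# Sketch of the Forward-Forward "goodness" measure used as an OoD score.
# Layer sizes are illustrative; the paper's model is a spiking network.
import torch
import torch.nn as nn
import torch.nn.functional as F

layers = nn.ModuleList([nn.Linear(784, 256), nn.Linear(256, 256)])

def total_goodness(x):
    """Sum of per-layer goodness (sum of squared activations); low values
    suggest the input is unlike the training data."""
    goodness = torch.zeros(x.shape[0])
    h = x
    for layer in layers:
        h = F.relu(layer(h))
        goodness = goodness + (h ** 2).sum(dim=-1)
        # Forward-Forward normalizes activations before the next layer so that
        # only their direction, not their magnitude, is passed on.
        h = F.normalize(h, dim=-1)
    return goodness

scores = total_goodness(torch.randn(8, 784))   # thresholding these flags OoD inputs
```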
arXiv Detail & Related papers (2024-07-19T08:08:17Z)
- Enhancing Adverse Drug Event Detection with Multimodal Dataset: Corpus Creation and Model Development [12.258245804049114]
The mining of adverse drug events (ADEs) is pivotal in pharmacovigilance, enhancing patient safety.
Traditional ADE detection methods are reliable but slow and not easily adaptable to large-scale operations.
Previous ADE mining studies have focused on text-based methodologies, overlooking visual cues.
We present a MultiModal Adverse Drug Event (MMADE) detection dataset, merging ADE-related textual information with visual aids.
arXiv Detail & Related papers (2024-05-24T17:58:42Z)
- Combining inherent knowledge of vision-language models with unsupervised domain adaptation through strong-weak guidance [44.1830188215271]
Unsupervised domain adaptation (UDA) tries to overcome the tedious work of labeling data by leveraging a labeled source dataset.
Current vision-language models exhibit remarkable zero-shot prediction capabilities.
We introduce a strong-weak guidance learning scheme that employs zero-shot predictions to help align the source and target dataset.
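A minimal sketch of using a vision-language model's zero-shot predictions to guide adaptation on unlabeled target data, in the spirit of the entry above; the confidence threshold, pseudo-labeling rule, and loss weight are illustrative assumptions, not the paper's strong-weak scheme.

```python
# Sketch: zero-shot predictions from a vision-language model guide a task
# classifier on unlabeled target-domain data. Hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def uda_guidance_loss(classifier, x_src, y_src, x_tgt, zeroshot_probs, tau=0.8, beta=1.0):
    """Supervised loss on labeled source data plus a pseudo-label term on target
    data, using only zero-shot predictions above a confidence threshold."""
    src_loss = F.cross_entropy(classifier(x_src), y_src)

    tgt_log_probs = F.log_softmax(classifier(x_tgt), dim=-1)
    conf, pseudo = zeroshot_probs.max(dim=-1)
    mask = conf > tau                      # trust only confident zero-shot predictions
    if mask.any():
        guide_loss = F.nll_loss(tgt_log_probs[mask], pseudo[mask])
    else:
        guide_loss = tgt_log_probs.new_zeros(())
    return src_loss + beta * guide_loss
```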
arXiv Detail & Related papers (2023-12-07T06:16:39Z)
- Towards Robust and Accurate Visual Prompting [11.918195429308035]
We study whether a visual prompt derived from a robust model can inherit the robustness while suffering from the generalization performance decline.
We introduce a novel technique named Prompt Boundary Loose (PBL) that effectively mitigates the suboptimal results of visual prompts on standard accuracy.
Our findings are universal and demonstrate the significant benefits of our proposed method.
arXiv Detail & Related papers (2023-11-18T07:00:56Z)
- Diversified Outlier Exposure for Out-of-Distribution Detection via Informative Extrapolation [110.34982764201689]
Out-of-distribution (OOD) detection is important for deploying reliable machine learning models on real-world applications.
Recent advances in outlier exposure have shown promising results on OOD detection via fine-tuning model with informatively sampled auxiliary outliers.
We propose a novel framework, namely, Diversified Outlier Exposure (DivOE), for effective OOD detection via informative extrapolation based on the given auxiliary outliers.
arXiv Detail & Related papers (2023-10-21T07:16:09Z)
- A Closer Look at Debiased Temporal Sentence Grounding in Videos: Dataset, Metric, and Approach [53.727460222955266]
Temporal Sentence Grounding in Videos (TSGV) aims to ground a natural language sentence in an untrimmed video.
Recent studies have found that current benchmark datasets may have obvious moment annotation biases.
We introduce a new evaluation metric "dR@n,IoU@m" that discounts the basic recall scores to alleviate the inflating evaluation caused by biased datasets.
arXiv Detail & Related papers (2022-03-10T08:58:18Z)
- Handling Distribution Shifts on Graphs: An Invariance Perspective [78.31180235269035]
We formulate the OOD problem on graphs and develop a new invariant learning approach, Explore-to-Extrapolate Risk Minimization (EERM)
EERM resorts to multiple context explorers that are adversarially trained to maximize the variance of risks from multiple virtual environments.
We prove the validity of our method by theoretically showing its guarantee of a valid OOD solution.
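A minimal sketch of a variance-of-risks objective in the spirit of EERM: the predictor minimizes the mean and variance of per-environment risks, while in the full method the context explorers are trained adversarially to maximize that variance over generated virtual environments. The weighting below is illustrative.

```python
# Sketch of an invariance objective over K virtual environments: mean risk
# plus a penalty on the variance of risks. The weight beta is illustrative.
import torch
import torch.nn.functional as F

def variance_risk_objective(logits_per_env, labels_per_env, beta=1.0):
    """Mean + beta * variance of cross-entropy risks across environments."""
    risks = torch.stack([
        F.cross_entropy(logits, labels)
        for logits, labels in zip(logits_per_env, labels_per_env)
    ])
    return risks.mean() + beta * risks.var(unbiased=False)

# The predictor minimizes this objective; the environment generators (context
# explorers) are updated adversarially to maximize the variance term.
```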
arXiv Detail & Related papers (2022-02-05T02:31:01Z)
- VOS: Learning What You Don't Know by Virtual Outlier Synthesis [23.67449949146439]
Out-of-distribution (OOD) detection has received much attention lately due to its importance in the safe deployment of neural networks.
Previous approaches rely on real outlier datasets for model regularization.
We present VOS, a novel framework for OOD detection by adaptively synthesizing virtual outliers.
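A hedged sketch of synthesizing virtual outliers in feature space along the lines described above: fit class-conditional Gaussians to penultimate-layer features and keep low-likelihood samples as virtual outliers near the class boundary. Shapes, the shared covariance, and the selection rule are illustrative.

```python
# Sketch: sample candidate features from class-conditional Gaussians and keep
# the lowest-likelihood ones as virtual outliers. All sizes are illustrative.
import torch

def synthesize_virtual_outliers(feats, labels, num_classes, n_samples=1000, keep=50):
    d = feats.shape[1]
    means = torch.stack([feats[labels == c].mean(0) for c in range(num_classes)])
    centered = feats - means[labels]
    cov = centered.t() @ centered / feats.shape[0] + 1e-4 * torch.eye(d)  # shared covariance

    outliers = []
    for c in range(num_classes):
        dist = torch.distributions.MultivariateNormal(means[c], covariance_matrix=cov)
        candidates = dist.sample((n_samples,))
        # Keep the lowest-likelihood candidates, i.e. points near the class boundary.
        idx = dist.log_prob(candidates).argsort()[:keep]
        outliers.append(candidates[idx])
    return torch.cat(outliers)

feats = torch.randn(512, 64)                 # penultimate-layer features (placeholder)
labels = torch.randint(0, 10, (512,))
virtual_outliers = synthesize_virtual_outliers(feats, labels, num_classes=10)
```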
arXiv Detail & Related papers (2022-02-02T18:43:01Z)
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (or ASSUDA) that maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision (e.g., rotation and jigsaw) benefits image tasks such as classification and recognition, it fails to provide the critical supervision signals needed to learn discriminative representations for segmentation tasks.
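A minimal sketch of an output-space agreement term between predictions on clean images and their adversarial counterparts, as described above; a symmetric KL divergence stands in here for the paper's contrastive loss, and the tensor shapes are illustrative.

```python
# Sketch: encourage agreement between clean and adversarial predictions in the
# output space. Shapes follow a segmentation setup (N, C, H, W); values are placeholders.
import torch
import torch.nn.functional as F

def agreement_loss(logits_clean, logits_adv):
    """Symmetric KL divergence between the two class distributions."""
    log_p = F.log_softmax(logits_clean, dim=1)
    log_q = F.log_softmax(logits_adv, dim=1)
    kl_pq = F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
    kl_qp = F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)

clean_logits = torch.randn(2, 19, 64, 64)
adv_logits = clean_logits + 0.1 * torch.randn_like(clean_logits)  # stand-in for an attack
loss = agreement_loss(clean_logits, adv_logits)
```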
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
- Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution Data [87.61504710345528]
We propose two strategies for freeing a neural network from tuning with OoD data, while improving its OoD detection performance.
We specifically propose to decompose confidence scoring as well as a modified input pre-processing method.
Our further analysis on a larger scale image dataset shows that the two types of distribution shifts, specifically semantic shift and non-semantic shift, present a significant difference.
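A hedged sketch of a decomposed-confidence head in the spirit of Generalized ODIN, where the logits are formed as h(x)/g(x) and the class-evidence branch h provides the OoD score at test time; the layer choices are illustrative, and the modified input pre-processing step is omitted.

```python
# Sketch: decomposed confidence head producing logits h(x) / g(x).
# Layer sizes and the linear form of h are illustrative choices.
import torch
import torch.nn as nn

class DecomposedConfidenceHead(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.h = nn.Linear(feat_dim, num_classes)        # class-evidence branch
        self.g = nn.Sequential(                          # data-dependent scaling branch
            nn.Linear(feat_dim, 1),
            nn.BatchNorm1d(1),
            nn.Sigmoid(),
        )

    def forward(self, feats):
        h = self.h(feats)
        g = self.g(feats)
        return h / g, h, g       # train on h / g with cross-entropy; score OoD with h

head = DecomposedConfidenceHead(feat_dim=128, num_classes=10)
logits, h, g = head(torch.randn(16, 128))
ood_score = h.max(dim=-1).values             # higher values indicate in-distribution
```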
arXiv Detail & Related papers (2020-02-26T04:18:25Z)