VRA: Variational Rectified Activation for Out-of-distribution Detection
- URL: http://arxiv.org/abs/2302.11716v4
- Date: Thu, 18 May 2023 02:05:22 GMT
- Title: VRA: Variational Rectified Activation for Out-of-distribution Detection
- Authors: Mingyu Xu, Zheng Lian, Bin Liu, Jianhua Tao
- Abstract summary: Out-of-distribution (OOD) detection is critical to building reliable machine learning systems in the open world.
ReAct is a typical and effective technique for mitigating model overconfidence: it truncates high activations to widen the gap between in-distribution and OOD data.
We propose a novel technique called "Variational Rectified Activation (VRA)", which simulates these suppression and amplification operations using piecewise functions.
- Score: 45.804178022641764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection is critical to building reliable machine
learning systems in the open world. Researchers have proposed various
strategies to reduce model overconfidence on OOD data. Among them, ReAct is a
typical and effective technique for mitigating model overconfidence, which
truncates high activations to widen the gap between in-distribution and OOD data.
Despite its promising results, is this technique the best choice for widening
the gap? To answer this question, we leverage the variational method to find
the optimal operation and verify the necessity of suppressing abnormally low
and high activations and amplifying intermediate activations in OOD detection,
rather than focusing only on high activations like ReAct. This motivates us to
propose a novel technique called "Variational Rectified Activation (VRA)",
which simulates these suppression and amplification operations using piecewise
functions. Experimental results on multiple benchmark datasets demonstrate that
our method outperforms existing post-hoc strategies. Meanwhile, VRA is
compatible with different scoring functions and network architectures.
Our code can be found in the Supplementary Material.
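The piecewise rectification described in the abstract can be sketched as follows; the thresholds `alpha`, `beta` and the shift `gamma` are illustrative placeholders, not the values derived in the paper:

```python
def vra(activations, alpha=0.3, beta=1.5, gamma=0.5):
    """Sketch of VRA-style piecewise rectification (hypothetical parameters):
    suppress abnormally low activations to zero, amplify intermediate
    activations by a constant shift gamma, and cap abnormally high
    activations at beta."""
    out = []
    for z in activations:
        if z < alpha:
            out.append(0.0)        # suppress abnormally low activations
        elif z <= beta:
            out.append(z + gamma)  # amplify intermediate activations
        else:
            out.append(beta)       # suppress (cap) abnormally high activations
    return out
```

As with ReAct, such a rectification would be applied to penultimate-layer features before computing a post-hoc OOD score (e.g., an energy score), leaving the trained network untouched.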
Related papers
- ACTRESS: Active Retraining for Semi-supervised Visual Grounding [52.08834188447851]
A previous study, RefTeacher, makes the first attempt to tackle this task by adopting the teacher-student framework to provide pseudo confidence supervision and attention-based supervision.
This approach is incompatible with current state-of-the-art visual grounding models, which follow the Transformer-based pipeline.
Our paper proposes the ACTive REtraining approach for Semi-Supervised Visual Grounding, abbreviated as ACTRESS.
arXiv Detail & Related papers (2024-07-03T16:33:31Z)
- SeTAR: Out-of-Distribution Detection with Selective Low-Rank Approximation [5.590633742488972]
Out-of-distribution (OOD) detection is crucial for the safe deployment of neural networks.
We propose SeTAR, a training-free OOD detection method.
SeTAR enhances OOD detection via post-hoc modification of the model's weight matrices using a simple greedy search algorithm.
Our work offers a scalable, efficient solution for OOD detection, setting a new state-of-the-art in this area.
arXiv Detail & Related papers (2024-06-18T13:55:13Z)
- Mitigating Overconfidence in Out-of-Distribution Detection by Capturing Extreme Activations [1.8531577178922987]
"Overconfidence" is an intrinsic property of certain neural network architectures, leading to poor OOD detection.
We measure extreme activation values in the penultimate layer of neural networks and then leverage this proxy of overconfidence to improve on several OOD detection baselines.
Compared to the baselines, our method often yields substantial improvements, with double-digit gains in OOD detection performance.
arXiv Detail & Related papers (2024-05-21T10:14:50Z)
- Advancing Out-of-Distribution Detection through Data Purification and Dynamic Activation Function Design [12.45245390882047]
We introduce OOD-R (Out-of-Distribution-Rectified), a meticulously curated collection of open-source datasets with enhanced noise reduction properties.
OOD-R incorporates noise filtering technologies to refine the datasets, ensuring a more accurate and reliable evaluation of OOD detection algorithms.
We present ActFun, an innovative method that fine-tunes the model's response to diverse inputs, thereby improving the stability of feature extraction.
arXiv Detail & Related papers (2024-03-06T02:39:22Z)
- MOODv2: Masked Image Modeling for Out-of-Distribution Detection [57.17163962383442]
This study explores distinct pretraining tasks and employs various OOD score functions.
Our framework, MOODv2, improves AUROC by 14.30% to 95.68% on ImageNet and achieves 99.98% on CIFAR-10.
arXiv Detail & Related papers (2024-01-05T02:57:58Z)
- REX: Rapid Exploration and eXploitation for AI Agents [103.68453326880456]
We propose an enhanced approach for Rapid Exploration and eXploitation for AI Agents called REX.
REX introduces an additional layer of rewards and integrates concepts similar to Upper Confidence Bound (UCB) scores, leading to more robust and efficient AI agent performance.
arXiv Detail & Related papers (2023-07-18T04:26:33Z)
- ReAct: Out-of-distribution Detection With Rectified Activations [20.792140933660075]
Out-of-distribution (OOD) detection has received much attention lately due to its practical importance.
One of the primary challenges is that models often produce highly confident predictions on OOD data.
We propose ReAct--a simple and effective technique for reducing model overconfidence on OOD data.
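By contrast with VRA's piecewise scheme, ReAct's rectification amounts to clipping each penultimate-layer activation at a single threshold; the value `c=1.0` below is an illustrative placeholder, as the paper derives the threshold from in-distribution activation statistics:

```python
def react(activations, c=1.0):
    # ReAct: truncate each activation at threshold c. In practice c is
    # typically chosen as a high percentile of activations observed on
    # in-distribution data; 1.0 here is a hypothetical value.
    return [min(a, c) for a in activations]
```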
arXiv Detail & Related papers (2021-11-24T21:02:07Z)
- Enhancing the Generalization for Intent Classification and Out-of-Domain Detection in SLU [70.44344060176952]
Intent classification is a major task in spoken language understanding (SLU).
Recent works have shown that using extra data and labels can improve the OOD detection performance.
This paper proposes to train a model with only IND data while supporting both IND intent classification and OOD detection.
arXiv Detail & Related papers (2021-06-28T08:27:38Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.