RankFeat: Rank-1 Feature Removal for Out-of-distribution Detection
- URL: http://arxiv.org/abs/2209.08590v1
- Date: Sun, 18 Sep 2022 16:01:31 GMT
- Title: RankFeat: Rank-1 Feature Removal for Out-of-distribution Detection
- Authors: Yue Song, Nicu Sebe, Wei Wang
- Abstract summary: RankFeat is a simple yet effective post hoc approach for OOD detection.
RankFeat achieves state-of-the-art performance and reduces the average false positive rate (FPR95) by 17.90% compared with the previous best method.
- Score: 65.67315418971688
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The task of out-of-distribution (OOD) detection is crucial for deploying
machine learning models in real-world settings. In this paper, we observe that
the singular value distributions of the in-distribution (ID) and OOD features
are quite different: the OOD feature matrix tends to have a larger dominant
singular value than the ID feature, and the class predictions of OOD samples
are largely determined by it. This observation motivates us to propose
\texttt{RankFeat}, a simple yet effective \texttt{post hoc} approach for OOD
detection by removing the rank-1 matrix composed of the largest singular value
and the associated singular vectors from the high-level feature (\emph{i.e.,}
$\mathbf{X}{-} \mathbf{s}_{1}\mathbf{u}_{1}\mathbf{v}_{1}^{T}$).
\texttt{RankFeat} achieves the \emph{state-of-the-art} performance and reduces
the average false positive rate (FPR95) by 17.90\% compared with the previous
best method. Extensive ablation studies and comprehensive theoretical analyses
are presented to support the empirical results.
Related papers
- ConjNorm: Tractable Density Estimation for Out-of-Distribution Detection [41.41164637577005]
Post-hoc out-of-distribution (OOD) detection has garnered intensive attention in reliable machine learning.
We propose a novel theoretical framework grounded in Bregman divergence to provide a unified perspective on density-based score design.
We show that our proposed ConjNorm has established a new state-of-the-art in a variety of OOD detection setups.
arXiv Detail & Related papers (2024-02-27T21:02:47Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a Non-Parametric Test Time Adaptation framework for Distribution Detection (abbr).
abbr utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate the effectiveness of abbr through comprehensive experiments on multiple OOD detection benchmarks.
arXiv Detail & Related papers (2023-11-28T02:00:47Z)
- RankFeat&RankWeight: Rank-1 Feature/Weight Removal for Out-of-distribution Detection [74.48870221803242]
RankFeat achieves state-of-the-art performance and reduces the average false positive rate (FPR95) by 17.90%.
We propose RankWeight, which removes the rank-1 weight from the parameter matrices of a single deep layer.
arXiv Detail & Related papers (2023-11-23T12:17:45Z)
- Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement Learning [53.445068584013896]
We study matrix estimation problems arising in reinforcement learning (RL) with low-rank structure.
In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it may for example characterize the transition kernel of the MDP.
We show that simple spectral-based matrix estimation approaches efficiently recover the singular subspaces of the matrix and exhibit nearly-minimal entry-wise error.
arXiv Detail & Related papers (2023-10-10T17:06:41Z)
- Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition [80.07843757970923]
We show that existing OOD detection methods suffer from significant performance degradation when the training set is long-tail distributed.
We propose Partial and Asymmetric Supervised Contrastive Learning (PASCL), which explicitly encourages the model to distinguish between tail-class in-distribution samples and OOD samples.
Our method outperforms the previous state-of-the-art method by 1.29%, 1.45%, and 0.69% in anomaly-detection false positive rate (FPR) and by 3.24%, 4.06%, and 7.89% in in-distribution classification accuracy.
arXiv Detail & Related papers (2022-07-04T01:53:07Z)
- RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection [12.341250124228859]
We propose a simple yet effective generalized OOD detection method independent of out-of-distribution datasets.
Our approach relies on self-supervised feature learning of the training samples, where the embeddings lie in a compact low-dimensional space.
We empirically show that a pre-trained model with self-supervised contrastive learning yields a better model for uni-dimensional feature learning in the latent space.
arXiv Detail & Related papers (2022-04-06T03:05:58Z)
- Training OOD Detectors in their Natural Habitats [31.565635192716712]
Out-of-distribution (OOD) detection is important for machine learning models deployed in the wild.
Recent methods use auxiliary outlier data to regularize the model for improved OOD detection.
We propose a novel framework that leverages wild mixture data -- that naturally consists of both ID and OOD samples.
arXiv Detail & Related papers (2022-02-07T15:38:39Z)
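The companion RankWeight idea listed above applies the same rank-1 removal to a parameter matrix rather than to features. A minimal sketch, assuming a hypothetical classifier weight of shape (classes x features); the shapes and random values are illustrative, not the paper's setup:

```python
import numpy as np

def rank1_weight_removal(W):
    """RankWeight-style perturbation: drop the top singular component of W."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return W - S[0] * np.outer(U[:, 0], Vt[0, :])

# Illustrative usage on a random "weight" matrix.
rng = np.random.default_rng(1)
W = rng.standard_normal((100, 50))   # hypothetical layer weight (classes x features)
W_prime = rank1_weight_removal(W)

# Sanity check: removing the rank-1 component lowers the rank by one.
assert np.linalg.matrix_rank(W_prime) == min(W.shape) - 1
```

Unlike RankFeat, which is recomputed per input feature, this perturbation can be applied once to the stored weights; the two removals are complementary per the abstract above.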
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.