Feed Two Birds with One Scone: Exploiting Wild Data for Both
Out-of-Distribution Generalization and Detection
- URL: http://arxiv.org/abs/2306.09158v1
- Date: Thu, 15 Jun 2023 14:32:35 GMT
- Title: Feed Two Birds with One Scone: Exploiting Wild Data for Both
Out-of-Distribution Generalization and Detection
- Authors: Haoyue Bai, Gregory Canal, Xuefeng Du, Jeongyeol Kwon, Robert Nowak,
Yixuan Li
- Abstract summary: We propose a margin-based learning framework that exploits freely available unlabeled data in the wild.
We show both empirically and theoretically that the proposed margin constraint is the key to achieving both OOD generalization and detection.
- Score: 31.68755583314898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern machine learning models deployed in the wild can encounter both
covariate and semantic shifts, giving rise to the problems of
out-of-distribution (OOD) generalization and OOD detection respectively. While
both problems have received significant research attention lately, they have
been pursued independently. This may not be surprising, since the two tasks
have seemingly conflicting goals. This paper provides a new unified approach
that is capable of simultaneously generalizing to covariate shifts while
robustly detecting semantic shifts. We propose a margin-based learning
framework that exploits freely available unlabeled data in the wild that
captures the environmental test-time OOD distributions under both covariate and
semantic shifts. We show both empirically and theoretically that the proposed
margin constraint is the key to achieving both OOD generalization and
detection. Extensive experiments show the superiority of our framework,
outperforming competitive baselines that specialize in either OOD
generalization or OOD detection. Code is publicly available at
https://github.com/deeplearning-wisc/scone.
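The abstract does not spell out the training objective, but as a rough illustration of the kind of margin-based constraint it describes, the sketch below combines a standard cross-entropy loss on labeled in-distribution (ID) data with a hinge-style margin penalty on an energy-based OOD score computed over unlabeled wild data. The energy score, margin value, and loss weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only: a margin-style penalty on an energy-based OOD
# score over unlabeled wild data, alongside the usual ID classification loss.
# The score, margin, and weighting are assumptions, not the paper's method.
import torch
import torch.nn.functional as F

def energy_score(logits: torch.Tensor) -> torch.Tensor:
    # Free-energy score: lower values typically indicate in-distribution data.
    return -torch.logsumexp(logits, dim=-1)

def margin_objective(model, x_id, y_id, x_wild, margin=1.0, lam=0.5):
    """Cross-entropy on labeled ID data plus a hinge penalty that encourages
    wild samples to sit at least `margin` above the average ID energy."""
    id_logits = model(x_id)
    wild_logits = model(x_wild)

    cls_loss = F.cross_entropy(id_logits, y_id)

    id_energy = energy_score(id_logits)        # shape [batch_id]
    wild_energy = energy_score(wild_logits)    # shape [batch_wild]

    # Hinge-style margin: zero loss once a wild sample's energy exceeds the
    # mean ID energy by at least `margin`, otherwise a linear penalty.
    margin_loss = F.relu(margin + id_energy.mean() - wild_energy).mean()

    return cls_loss + lam * margin_loss
```

Any classifier returning logits could be plugged in here; the sketch only shows where a margin constraint over wild data could enter the loss, not how the paper balances covariate-shifted versus semantically shifted wild samples.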
Related papers
- The Best of Both Worlds: On the Dilemma of Out-of-distribution Detection [75.65876949930258]
Out-of-distribution (OOD) detection is essential for model trustworthiness.
We show that the superior OOD detection performance of state-of-the-art methods is achieved by secretly sacrificing the OOD generalization ability.
arXiv Detail & Related papers (2024-10-12T07:02:04Z)
- AHA: Human-Assisted Out-of-Distribution Generalization and Detection [10.927973527794155]
This paper introduces AHA (Adaptive Human-Assisted OOD learning), a novel integrated approach that addresses both OOD generalization and detection through a human-assisted framework that labels data in the wild.
Our method significantly outperforms existing state-of-the-art methods that do not involve human assistance.
arXiv Detail & Related papers (2024-10-10T14:57:11Z)
- Bridging OOD Detection and Generalization: A Graph-Theoretic View [21.84304334604601]
We introduce a graph-theoretic framework to tackle both OOD generalization and detection problems.
By leveraging the graph formulation, data representations are obtained through the factorization of the graph's adjacency matrix.
Empirical results showcase competitive performance in comparison to existing methods.
arXiv Detail & Related papers (2024-09-26T18:35:51Z)
- CRoFT: Robust Fine-Tuning with Concurrent Optimization for OOD Generalization and Open-Set OOD Detection [42.33618249731874]
We show that minimizing the magnitude of energy scores on training data leads to domain-consistent Hessians of the classification loss (see the energy-score sketch after this list).
We develop a unified fine-tuning framework that allows concurrent optimization of both tasks.
arXiv Detail & Related papers (2024-05-26T03:28:59Z)
- Out-of-Distribution Data: An Acquaintance of Adversarial Examples -- A Survey [7.891552999555933]
Deep neural networks (DNNs) deployed in real-world applications can encounter out-of-distribution (OOD) data and adversarial examples.
Traditionally, research has addressed OOD detection and adversarial robustness as separate challenges.
This survey focuses on the intersection of these two areas, examining how the research community has investigated them together.
arXiv Detail & Related papers (2024-04-08T06:27:38Z)
- How Does Unlabeled Data Provably Help Out-of-Distribution Detection? [63.41681272937562]
Harnessing unlabeled in-the-wild data is non-trivial due to the heterogeneity of both in-distribution (ID) and out-of-distribution (OOD) data.
This paper introduces a new learning framework, SAL (Separate And Learn), that offers both strong theoretical guarantees and empirical effectiveness.
arXiv Detail & Related papers (2024-02-05T20:36:33Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Towards Distribution-Agnostic Generalized Category Discovery [51.52673017664908]
Data imbalance and open-ended distribution are intrinsic characteristics of the real visual world.
We propose BaCon, a Self-Balanced Co-Advice contrastive framework.
BaCon consists of a contrastive-learning branch and a pseudo-labeling branch, working collaboratively to provide interactive supervision for the distribution-agnostic generalized category discovery (DA-GCD) task.
arXiv Detail & Related papers (2023-10-02T17:39:58Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture, ObsNet, together with a dedicated training scheme based on Local Adversarial Attacks (LAA).
It obtains top performance in both speed and accuracy compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
- Adversarial Robustness under Long-Tailed Distribution [93.50792075460336]
Adversarial robustness has recently attracted extensive study, revealing the vulnerability and intrinsic characteristics of deep networks.
In this work we investigate adversarial vulnerability as well as defense under long-tailed distributions.
We propose a clean yet effective framework, RoBal, which consists of two dedicated modules: a scale-invariant classifier and data re-balancing.
arXiv Detail & Related papers (2021-04-06T17:53:08Z)
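Several entries above (for example, CRoFT's use of energy-score magnitudes) refer to the free-energy OOD score derived from classifier logits. As a minimal reference sketch, the commonly used definition is E(x) = -T * logsumexp(f(x)/T); the temperature T = 1 and the thresholding step below are illustrative defaults rather than settings taken from any of the papers listed.

```python
# Minimal sketch of the free-energy OOD score computed from classifier logits.
# The temperature and the threshold below are illustrative defaults.
import torch

def free_energy(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """E(x) = -T * logsumexp(f(x) / T); lower (more negative) energy
    typically indicates in-distribution data."""
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

# Usage: flag samples whose energy exceeds a validation-chosen threshold tau.
# is_ood = free_energy(logits) > tau
```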