Towards Effective Semantic OOD Detection in Unseen Domains: A Domain Generalization Perspective
- URL: http://arxiv.org/abs/2309.10209v1
- Date: Mon, 18 Sep 2023 23:48:22 GMT
- Title: Towards Effective Semantic OOD Detection in Unseen Domains: A Domain Generalization Perspective
- Authors: Haoliang Wang, Chen Zhao, Yunhui Guo, Kai Jiang, Feng Chen
- Abstract summary: Two prevalent types of distributional shifts in machine learning are the covariate shift and the semantic shift.
Traditional OOD detection techniques typically address only one of these shifts.
We introduce a novel problem, semantic OOD detection across domains, which simultaneously addresses both shifts.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Two prevalent types of distributional shifts in machine learning are the
covariate shift (as observed across different domains) and the semantic shift
(as seen across different classes). Traditional OOD detection techniques
typically address only one of these shifts. However, real-world testing
environments often present a combination of both covariate and semantic shifts.
In this study, we introduce a novel problem, semantic OOD detection across
domains, which simultaneously addresses both distributional shifts. To this
end, we introduce two regularization strategies: domain generalization
regularization, which ensures semantic invariance across domains to counteract
the covariate shift, and OOD detection regularization, designed to enhance OOD
detection capabilities against the semantic shift through energy bounding.
Through rigorous testing on three standard domain generalization benchmarks,
our proposed framework showcases its superiority over conventional domain
generalization approaches in terms of OOD detection performance. Moreover, it
holds its ground by maintaining comparable InD classification accuracy.
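The two regularizers described in the abstract can be illustrated with a minimal, self-contained sketch. The energy-bounding term follows the standard free-energy OOD score; the domain-generalization term here is a simple mean-feature alignment stand-in, since the abstract does not specify the exact penalty. Function names, margin values, and the alignment form are our assumptions, not the paper's:

```python
import math

def energy_score(logits, T=1.0):
    """Free energy E(x) = -T * log(sum_k exp(logit_k / T)).
    Lower energy indicates a more in-distribution input."""
    m = max(z / T for z in logits)  # max-shift for numerical stability
    return -T * (m + math.log(sum(math.exp(z / T - m) for z in logits)))

def energy_bound_loss(logits_in, logits_out, m_in=-5.0, m_out=-1.0):
    """Squared-hinge energy bounding: push in-distribution energies
    below the margin m_in and (pseudo-)OOD energies above m_out."""
    l_in = sum(max(0.0, energy_score(z) - m_in) ** 2
               for z in logits_in) / len(logits_in)
    l_out = sum(max(0.0, m_out - energy_score(z)) ** 2
                for z in logits_out) / len(logits_out)
    return l_in + l_out

def domain_invariance_loss(features_by_domain):
    """Generic stand-in for domain-generalization regularization:
    penalize the spread of per-domain mean features around the
    global mean (a simple first-moment alignment term)."""
    dim = len(features_by_domain[0][0])
    means = [
        [sum(f[d] for f in feats) / len(feats) for d in range(dim)]
        for feats in features_by_domain
    ]
    global_mean = [sum(mu[d] for mu in means) / len(means) for d in range(dim)]
    return sum(
        (mu[d] - global_mean[d]) ** 2 for mu in means for d in range(dim)
    ) / len(means)
```

A confidently classified input (one dominant logit) yields a low energy and incurs no bounding penalty, while a flat, uncertain logit vector sits above `m_out` and is pushed away; the invariance term vanishes only when all domains share the same mean representation.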
Related papers
- Generalize or Detect? Towards Robust Semantic Segmentation Under Multiple Distribution Shifts [56.57141696245328]
In open-world scenarios, where both novel classes and domains may exist, an ideal segmentation model should detect anomaly classes for safety.
Existing methods often struggle to distinguish between domain-level and semantic-level distribution shifts.
arXiv Detail & Related papers (2024-11-06T11:03:02Z)
- MADOD: Generalizing OOD Detection to Unseen Domains via G-Invariance Meta-Learning [10.38552112657656]
We introduce Meta-learned Across Domain Out-of-distribution Detection (MADOD), a novel framework designed to address both shifts concurrently.
Our key innovation lies in task construction: we randomly designate in-distribution classes as pseudo-OODs within each meta-learning task.
Experiments on real-world and synthetic datasets demonstrate MADOD's superior performance in semantic OOD detection across unseen domains.
arXiv Detail & Related papers (2024-11-02T17:46:23Z)
- The Best of Both Worlds: On the Dilemma of Out-of-distribution Detection [75.65876949930258]
Out-of-distribution (OOD) detection is essential for model trustworthiness.
We show that the superior OOD detection performance of state-of-the-art methods is achieved by secretly sacrificing the OOD generalization ability.
arXiv Detail & Related papers (2024-10-12T07:02:04Z)
- CRoFT: Robust Fine-Tuning with Concurrent Optimization for OOD Generalization and Open-Set OOD Detection [42.33618249731874]
We show that minimizing the magnitude of energy scores on training data leads to domain-consistent Hessians of classification loss.
We have developed a unified fine-tuning framework that allows for concurrent optimization of both tasks.
arXiv Detail & Related papers (2024-05-26T03:28:59Z)
- ATTA: Anomaly-aware Test-Time Adaptation for Out-of-Distribution Detection in Segmentation [22.084967085509387]
We propose a dual-level OOD detection framework to handle domain shift and semantic shift jointly.
The first level distinguishes whether domain shift exists in the image by leveraging global low-level features.
The second level identifies pixels with semantic shift by utilizing dense high-level feature maps.
arXiv Detail & Related papers (2023-09-12T06:49:56Z)
- DIVERSIFY: A General Framework for Time Series Out-of-distribution Detection and Generalization [58.704753031608625]
Time series is one of the most challenging modalities in machine learning research.
OOD detection and generalization on time series tend to suffer due to the non-stationary nature of the data.
We propose DIVERSIFY, a framework for OOD detection and generalization on dynamic distributions of time series.
arXiv Detail & Related papers (2023-08-04T12:27:11Z)
- Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization and Detection [31.68755583314898]
We propose a margin-based learning framework that exploits freely available unlabeled data in the wild.
We show both empirically and theoretically that the proposed margin constraint is the key to achieving both OOD generalization and detection.
arXiv Detail & Related papers (2023-06-15T14:32:35Z)
- OpenOOD: Benchmarking Generalized Out-of-Distribution Detection [60.13300701826931]
Out-of-distribution (OOD) detection is vital to safety-critical machine learning applications.
The field currently lacks a unified, strictly formulated, and comprehensive benchmark.
We build a unified, well-structured codebase called OpenOOD, which implements over 30 methods developed in relevant fields.
arXiv Detail & Related papers (2022-10-13T17:59:57Z)
- Generalizability of Adversarial Robustness Under Distribution Shifts [57.767152566761304]
We take a first step towards investigating the interplay between empirical and certified adversarial robustness on the one hand and domain generalization on the other.
We train robust models on multiple domains and evaluate their accuracy and robustness on an unseen domain.
We extend our study to cover a real-world medical application, in which adversarial augmentation significantly boosts the generalization of robustness with minimal effect on clean data accuracy.
arXiv Detail & Related papers (2022-09-29T18:25:48Z)
- A Bit More Bayesian: Domain-Invariant Learning with Uncertainty [111.22588110362705]
Domain generalization is challenging due to the domain shift and the uncertainty caused by the inaccessibility of target domain data.
In this paper, we address both challenges with a probabilistic framework based on variational Bayesian inference.
We derive domain-invariant representations and classifiers, which are jointly established in a two-layer Bayesian neural network.
arXiv Detail & Related papers (2021-05-09T21:33:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.