Harnessing Out-Of-Distribution Examples via Augmenting Content and Style
- URL: http://arxiv.org/abs/2207.03162v2
- Date: Sat, 8 Apr 2023 02:06:26 GMT
- Title: Harnessing Out-Of-Distribution Examples via Augmenting Content and Style
- Authors: Zhuo Huang, Xiaobo Xia, Li Shen, Bo Han, Mingming Gong, Chen Gong,
Tongliang Liu
- Abstract summary: Machine learning models are vulnerable to Out-Of-Distribution (OOD) examples.
This paper proposes a HOOD method that can leverage the content and style from each image instance to identify benign and malign OOD data.
Thanks to the proposed novel disentanglement and data augmentation techniques, HOOD can effectively deal with OOD examples in unknown and open environments.
- Score: 93.21258201360484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models are vulnerable to Out-Of-Distribution (OOD) examples,
and this problem has drawn much attention. However, current methods lack a
full understanding of different types of OOD data: there are benign OOD data
that can be properly adapted to enhance the learning performance, while other
malign OOD data that would severely degrade the classification result. To
harness OOD data, this paper proposes HOOD, a method that leverages the content
and style of each image instance to identify benign and malign OOD data.
Particularly, we design a variational inference framework to causally
disentangle content and style features by constructing a structural causal
model. Subsequently, we augment the content and style through an intervention
process to produce malign and benign OOD data, respectively. The benign OOD
data contain novel styles but hold our interested contents, and they can be
leveraged to help train a style-invariant model. In contrast, the malign OOD
data inherit unknown contents but carry familiar styles, so detecting them can
improve model robustness against deceiving anomalies. Thanks to the proposed
novel disentanglement and data augmentation techniques, HOOD can effectively
deal with OOD examples in unknown and open environments. Its effectiveness is
empirically validated in three typical OOD applications: OOD detection,
open-set semi-supervised learning, and open-set domain adaptation.
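The intervention-based augmentation described in the abstract can be sketched as follows. This is a minimal illustration only, assuming each instance has already been disentangled into content and style feature vectors; the `augment` helper and the dict layout are hypothetical and are not the authors' implementation:

```python
import random

def augment(instances, rng=None):
    """Intervene on disentangled factors to synthesize OOD data.

    instances: list of dicts, each with 'content' and 'style' feature vectors.
    Returns (benign, malign): benign OOD keeps the content of interest but
    swaps in a novel style; malign OOD keeps a familiar style but swaps in
    unknown content.
    """
    rng = rng or random.Random(0)
    benign, malign = [], []
    for inst in instances:
        # Pick a different instance to donate the intervened factor.
        donor = rng.choice([i for i in instances if i is not inst])
        # Benign OOD: content preserved, style intervened on.
        benign.append({"content": inst["content"], "style": donor["style"]})
        # Malign OOD: unknown content, familiar style preserved.
        malign.append({"content": donor["content"], "style": inst["style"]})
    return benign, malign
```

Under this sketch, the benign set would be used to train a style-invariant classifier, while the malign set would serve as detection targets for rejecting deceiving anomalies.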
Related papers
- Can OOD Object Detectors Learn from Foundation Models? [56.03404530594071]
Out-of-distribution (OOD) object detection is a challenging task due to the absence of open-set OOD data.
Inspired by recent advancements in text-to-image generative models, we study the potential of generative models trained on large-scale open-set data to synthesize OOD samples.
We introduce SyncOOD, a simple data curation method that capitalizes on the capabilities of large foundation models.
arXiv Detail & Related papers (2024-09-08T17:28:22Z)
- Out-of-Distribution Learning with Human Feedback [26.398598663165636]
This paper presents a novel framework for OOD learning with human feedback.
Our framework capitalizes on the freely available unlabeled data in the wild.
By exploiting human feedback, we enhance the robustness and reliability of machine learning models.
arXiv Detail & Related papers (2024-08-14T18:49:27Z)
- Out-of-distribution Detection Learning with Unreliable Out-of-distribution Sources [73.28967478098107]
Out-of-distribution (OOD) detection discerns OOD data, on which the predictor cannot make valid predictions, from in-distribution (ID) data.
It is typically hard to collect real OOD data for training a predictor capable of discerning OOD patterns.
We propose a data generation-based learning method named Auxiliary Task-based OOD Learning (ATOL) that can mitigate mistaken OOD generation.
arXiv Detail & Related papers (2023-11-06T16:26:52Z)
- Class Relevance Learning For Out-of-distribution Detection [16.029229052068]
This paper presents an innovative class relevance learning method tailored for OOD detection.
Our method establishes a comprehensive class relevance learning framework, strategically harnessing interclass relationships within the OOD pipeline.
arXiv Detail & Related papers (2023-09-21T08:38:21Z)
- Out-of-distribution Detection with Implicit Outlier Transformation [72.73711947366377]
Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection.
We propose a novel OE-based approach that makes the model perform well for unseen OOD situations.
arXiv Detail & Related papers (2023-03-09T04:36:38Z)
- Models Out of Line: A Fourier Lens on Distribution Shift Robustness [29.12208822285158]
Improving the accuracy of deep neural networks (DNNs) on out-of-distribution (OOD) data is critical to the acceptance of deep learning (DL) in real-world applications.
Recently, some promising approaches have been developed to improve OOD robustness.
There is still no clear understanding of the conditions on OOD data and model properties that are required to observe effective robustness.
arXiv Detail & Related papers (2022-07-08T18:05:58Z)
- ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining [51.19164318924997]
Adversarial Training with informative Outlier Mining improves the robustness of OOD detection.
ATOM achieves state-of-the-art performance under a broad family of classic and adversarial OOD evaluation tasks.
arXiv Detail & Related papers (2020-06-26T20:58:05Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluating on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.