Dual Representation Learning for Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2206.09387v2
- Date: Sun, 27 Aug 2023 07:31:30 GMT
- Title: Dual Representation Learning for Out-of-Distribution Detection
- Authors: Zhilin Zhao and Longbing Cao
- Abstract summary: To classify in-distribution samples, deep neural networks explore strongly label-related information and discard weakly label-related information.
In this paper, Dual Representation Learning (DRL) trains an auxiliary network that explores the remaining weakly label-related information to learn distribution-discriminative representations.
Experiments show that DRL outperforms the state-of-the-art methods for out-of-distribution detection.
- Score: 37.939239477868796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To classify in-distribution samples, deep neural networks explore
strongly label-related information and discard weakly label-related
information, following the information bottleneck principle.
Out-of-distribution samples, drawn from distributions that differ from the
in-distribution one, can nevertheless receive unexpectedly high-confidence
predictions because they may carry minimal strongly label-related information.
To distinguish in- from out-of-distribution samples, Dual Representation
Learning (DRL) makes it harder for out-of-distribution samples to obtain
high-confidence predictions by exploiting both strongly and weakly
label-related information from in-distribution samples. Given a pretrained
network that explores strongly label-related information to learn
label-discriminative representations, DRL trains an auxiliary network that
explores the remaining weakly label-related information to learn
distribution-discriminative representations. Specifically, for each
label-discriminative representation, DRL constructs a complementary
distribution-discriminative representation by integrating diverse
representations that are less similar to the label-discriminative one. DRL
then combines the label- and distribution-discriminative representations to
detect out-of-distribution samples. Experiments show that DRL outperforms
state-of-the-art methods for out-of-distribution detection.
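To make the mechanism concrete, here is a minimal PyTorch sketch of how a distribution-discriminative representation might be built from dissimilar views and combined with classifier confidence at test time. The dissimilarity-weighted aggregation, the prototype-based score, and all names (`AuxiliaryHead`, `ood_score`, `id_prototype`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxiliaryHead(nn.Module):
    """Builds a distribution-discriminative representation by aggregating
    diverse projections of the label-discriminative feature, weighted by
    their DISsimilarity to it (an assumed reading of the paper's
    'integrating diverse representations less similar' step)."""
    def __init__(self, feat_dim: int, num_branches: int = 4):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Linear(feat_dim, feat_dim) for _ in range(num_branches)
        )

    def forward(self, label_repr: torch.Tensor) -> torch.Tensor:
        views = [b(label_repr) for b in self.branches]           # diverse views
        sims = torch.stack(
            [F.cosine_similarity(v, label_repr, dim=-1) for v in views], dim=-1
        )                                                        # (batch, branches)
        weights = F.softmax(-sims, dim=-1)       # favor the LESS similar views
        feats = torch.stack(views, dim=-1)                       # (batch, dim, branches)
        return (feats * weights.unsqueeze(1)).sum(dim=-1)        # (batch, dim)

def ood_score(logits: torch.Tensor, dist_repr: torch.Tensor,
              id_prototype: torch.Tensor) -> torch.Tensor:
    """Combine label- and distribution-discriminative evidence; a sample is
    flagged as OOD when either confidence is low."""
    label_conf = logits.softmax(dim=-1).amax(dim=-1)
    dist_conf = torch.sigmoid(
        F.cosine_similarity(dist_repr, id_prototype.unsqueeze(0), dim=-1)
    )
    return 1.0 - label_conf * dist_conf          # higher = more likely OOD

logits = torch.randn(8, 10)          # pretrained classifier outputs
label_repr = torch.randn(8, 128)     # label-discriminative features
aux = AuxiliaryHead(128)
print(ood_score(logits, aux(label_repr), id_prototype=torch.randn(128)))
```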
Related papers
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
Motivated by this perspective, we pursue consistency between the predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
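A minimal sketch of the label-distribution-consistency idea above, written as a KL penalty between a known class prior and the batch-average predicted distribution. Dist-PU's actual objective has more components; `label_distribution_consistency` and the prior values are hypothetical.

```python
import torch

def label_distribution_consistency(logits: torch.Tensor,
                                   class_prior: torch.Tensor) -> torch.Tensor:
    """KL(prior || batch-average predicted label distribution); driving this
    toward zero enforces the consistency described above."""
    pred_dist = logits.softmax(dim=-1).mean(dim=0)   # predicted label distribution
    return torch.sum(
        class_prior * (class_prior.clamp_min(1e-12).log()
                       - pred_dist.clamp_min(1e-12).log())
    )

logits = torch.randn(32, 2)          # scores for negative/positive classes
prior = torch.tensor([0.7, 0.3])     # assumed class prior, e.g. 30% positives
print(label_distribution_consistency(logits, prior))
```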
- Label distribution learning via label correlation grid [9.340734188957727]
We propose a Label Correlation Grid (LCG) to model the uncertainty of label relationships.
Our network learns the LCG to accurately estimate the label distribution for each instance.
arXiv Detail & Related papers (2022-10-15T03:58:15Z)
- Out-of-distribution Detection by Cross-class Vicinity Distribution of In-distribution Data [36.66825830101456]
Deep neural networks for image classification only learn to map in-distribution inputs to their corresponding ground truth labels in training.
This results from the assumption that all samples are independent and identically distributed.
A Cross-class Vicinity Distribution is introduced, assuming that an out-of-distribution sample generated by mixing multiple in-distribution samples does not share the classes of its constituents.
arXiv Detail & Related papers (2022-06-19T12:03:33Z)
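A hedged sketch of the construction described above: mixtures of in-distribution samples are treated as out-of-distribution, with a target distribution that excludes the constituents' classes. The Beta mixing scheme and the function name are assumptions for illustration.

```python
import torch

def cross_class_vicinity_batch(x: torch.Tensor, y: torch.Tensor,
                               num_classes: int):
    """Mix pairs of in-distribution samples; the mixture is treated as OOD
    and its target excludes the classes of both constituents."""
    perm = torch.randperm(x.size(0))
    lam = torch.distributions.Beta(1.0, 1.0).sample()
    x_mix = lam * x + (1.0 - lam) * x[perm]
    # Uniform target over every class EXCEPT those of the two constituents.
    target = torch.ones(x.size(0), num_classes)
    target.scatter_(1, y.unsqueeze(1), 0.0)
    target.scatter_(1, y[perm].unsqueeze(1), 0.0)
    target = target / target.sum(dim=1, keepdim=True)
    return x_mix, target

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_mix, target = cross_class_vicinity_batch(x, y, num_classes=10)
```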
- Gray Learning from Non-IID Data with Out-of-distribution Samples [45.788789553551176]
The integrity of training data, even when annotated by experts, is far from guaranteed.
We introduce a novel approach, termed Gray Learning (GL), which leverages both ground-truth and complementary labels.
By grounding our approach in statistical learning theory, we derive bounds for the generalization error, demonstrating that GL achieves tight constraints even in non-IID settings.
arXiv Detail & Related papers (2022-06-19T10:46:38Z)
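A minimal sketch of one way ground-truth and complementary labels could be combined in a single loss, as the summary above describes. GL's exact bounds-driven formulation differs; `gray_learning_loss` and its weighting `alpha` are illustrative.

```python
import torch
import torch.nn.functional as F

def gray_learning_loss(logits: torch.Tensor, y_true: torch.Tensor,
                       y_compl: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Cross-entropy on ground-truth labels plus a complementary-label term
    that pushes probability mass away from a class the sample is known
    NOT to belong to."""
    ce = F.cross_entropy(logits, y_true)
    p_compl = logits.softmax(dim=-1).gather(1, y_compl.unsqueeze(1)).squeeze(1)
    compl = -(1.0 - p_compl).clamp_min(1e-12).log().mean()
    return alpha * ce + (1.0 - alpha) * compl

logits = torch.randn(16, 10)
y_true = torch.randint(0, 10, (16,))
y_compl = (y_true + 1) % 10     # a class each sample is known not to have
print(gray_learning_loss(logits, y_true, y_compl))
```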
- Disentangling Sampling and Labeling Bias for Learning in Large-Output Spaces [64.23172847182109]
We show that different negative sampling schemes implicitly trade off performance on dominant versus rare labels.
We provide a unified means to explicitly tackle both sampling bias, arising from working with a subset of all labels, and labeling bias, which is inherent to the data due to label imbalance.
arXiv Detail & Related papers (2021-05-12T15:40:13Z)
- Multi-Class Data Description for Out-of-distribution Detection [25.853322158250435]
Deep-MCDD is effective at detecting out-of-distribution (OOD) samples as well as classifying in-distribution (ID) samples.
By integrating the concept of Gaussian discriminant analysis into deep neural networks, we propose a deep learning objective to learn class-conditional distributions.
arXiv Detail & Related papers (2021-04-02T08:41:51Z)
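A rough sketch of the Gaussian-discriminant idea: features are scored against learned class-conditional (here spherical) Gaussians, and the minimum score over classes serves as the OOD detector. The parametrization below is an assumption, not the exact Deep-MCDD objective.

```python
import torch

def mcdd_score(features: torch.Tensor, centers: torch.Tensor,
               log_sigmas: torch.Tensor) -> torch.Tensor:
    """Per-class negative-log-likelihood-style scores under spherical
    class-conditional Gaussians; ID samples should be close to some class
    center, so the minimum over classes acts as an OOD score."""
    # features: (batch, dim), centers: (classes, dim), log_sigmas: (classes,)
    dist = torch.cdist(features, centers) ** 2        # squared Euclidean
    scores = dist / (2.0 * log_sigmas.exp() ** 2) + log_sigmas
    return scores.min(dim=1).values                   # higher = more OOD

features = torch.randn(8, 128)    # hypothetical penultimate-layer features
centers = torch.randn(10, 128)    # learned class centers
log_sigmas = torch.zeros(10)      # learned per-class scales
print(mcdd_score(features, centers, log_sigmas))
```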
- Capturing Label Distribution: A Case Study in NLI [19.869498599986006]
Post-hoc smoothing of the predicted label distribution to match the expected label entropy is very effective.
We introduce a small number of examples with multiple references into training.
arXiv Detail & Related papers (2021-02-13T04:14:31Z)
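A small sketch of post-hoc entropy-matched smoothing, assuming temperature scaling is the smoothing mechanism (the paper's exact procedure may differ); `smooth_to_entropy` and the bisection bounds are illustrative.

```python
import torch

def smooth_to_entropy(probs: torch.Tensor, target_entropy: float,
                      lo: float = 0.05, hi: float = 20.0) -> torch.Tensor:
    """Temperature-smooth a predicted label distribution until its mean
    entropy matches a target value, via bisection on the temperature."""
    logp = probs.clamp_min(1e-12).log()
    for _ in range(50):
        t = 0.5 * (lo + hi)
        q = (logp / t).softmax(dim=-1)
        h = -(q * q.clamp_min(1e-12).log()).sum(-1).mean()
        if h < target_entropy:   # higher temperature -> higher entropy
            lo = t
        else:
            hi = t
    return (logp / (0.5 * (lo + hi))).softmax(dim=-1)

preds = torch.tensor([[0.9, 0.05, 0.05]])   # overconfident 3-way NLI prediction
print(smooth_to_entropy(preds, target_entropy=0.8))
```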
- Mining Label Distribution Drift in Unsupervised Domain Adaptation [78.2452946757045]
We propose Label distribution Matching Domain Adversarial Network (LMDAN) to handle data distribution shift and label distribution drift jointly.
Experiments show that LMDAN delivers superior performance under considerable label distribution drift.
arXiv Detail & Related papers (2020-06-16T23:41:42Z)
- Global Distance-distributions Separation for Unsupervised Person Re-identification [93.39253443415392]
Existing unsupervised ReID approaches often fail to correctly identify positive and negative samples through distance-based matching/ranking.
We introduce a global distance-distributions separation constraint over the two distributions to encourage the clear separation of positive and negative samples from a global view.
We show that our method leads to significant improvement over the baselines and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-06-01T07:05:39Z)
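A minimal sketch of a global distance-distributions separation penalty, modeling positive- and negative-pair distances as Gaussians and penalizing overlap; the margin formulation is an assumption, not the paper's exact constraint.

```python
import torch

def distance_distribution_separation(d_pos: torch.Tensor, d_neg: torch.Tensor,
                                     margin: float = 1.0) -> torch.Tensor:
    """Encourage the distributions of positive-pair and negative-pair
    distances to separate globally. Loss is zero once negative-pair
    distances sit at least `margin` beyond positive-pair distances,
    measured in mean +/- std."""
    mu_p, sigma_p = d_pos.mean(), d_pos.std()
    mu_n, sigma_n = d_neg.mean(), d_neg.std()
    return torch.relu(margin + (mu_p + sigma_p) - (mu_n - sigma_n))

d_pos = torch.rand(128) * 0.5         # distances between same-identity pairs
d_neg = torch.rand(128) * 0.5 + 1.0   # distances between different identities
print(distance_distribution_separation(d_pos, d_neg))
```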
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that yields better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvement on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
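A standard triplet-loss sketch on discriminator embeddings to illustrate the idea above; the paper's relation-network discriminator and exact loss differ, and all names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def relation_triplet_loss(anchor: torch.Tensor, positive: torch.Tensor,
                          negative: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Pull the anchor toward the positive and push it from the negative by
    at least `margin` in the discriminator's embedding space."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Hypothetical discriminator embeddings of real/real/fake image triplets.
real_a, real_b, fake = (torch.randn(4, 64) for _ in range(3))
print(relation_triplet_loss(real_a, real_b, fake))
```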