Understanding Open-Set Recognition by Jacobian Norm and Inter-Class
Separation
- URL: http://arxiv.org/abs/2209.11436v2
- Date: Fri, 29 Sep 2023 15:06:13 GMT
- Title: Understanding Open-Set Recognition by Jacobian Norm and Inter-Class
Separation
- Authors: Jaewoo Park, Hojin Park, Eunju Jeong, Andrew Beng Jin Teoh
- Abstract summary: We study the relationship between the Jacobian norm of representations and the inter/intra-class learning dynamics.
We devise a marginal one-vs-rest (m-OvR) loss function that promotes strong inter-class separation.
- Score: 16.40441221109391
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The findings on open-set recognition (OSR) show that models trained on
classification datasets are capable of detecting unknown classes not
encountered during the training process. Specifically, after training, the
learned representations of known classes dissociate from the representations of
the unknown class, facilitating OSR. In this paper, we investigate this
emergent phenomenon by examining the relationship between the Jacobian norm of
representations and the inter/intra-class learning dynamics. We provide a
theoretical analysis, demonstrating that intra-class learning reduces the
Jacobian norm for known class samples, while inter-class learning increases the
Jacobian norm for unknown samples, even in the absence of direct exposure to
any unknown sample. Overall, the discrepancy in the Jacobian norm between the
known and unknown classes enables OSR. Based on this insight, which highlights
the pivotal role of inter-class learning, we devise a marginal one-vs-rest
(m-OvR) loss function that promotes strong inter-class separation. To further
improve OSR performance, we integrate the m-OvR loss with additional strategies
that maximize the Jacobian norm disparity. We present comprehensive
experimental results that support our theoretical observations and demonstrate
the efficacy of our proposed OSR approach.
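The abstract describes two ingredients: a Jacobian-norm gap that separates known from unknown inputs, and a marginal one-vs-rest (m-OvR) loss that encourages inter-class separation. Below is a minimal PyTorch sketch of both ideas; it is not the authors' implementation, and names such as `estimate_jacobian_frobenius_norm`, `m_ovr_loss`, the probe count, and the margin value are illustrative assumptions.

```python
# Hedged sketch of (1) a Jacobian-norm open-set score and (2) a hypothetical
# marginal one-vs-rest loss, based only on the abstract's description.
import torch
import torch.nn.functional as F


def estimate_jacobian_frobenius_norm(encoder, x, num_probes=4):
    """Estimate ||d f(x)/d x||_F per sample with random probes.

    For v ~ N(0, I), E[||J^T v||^2] = ||J||_F^2, so averaging a few probes
    approximates the Frobenius norm without building the full Jacobian.
    Run `encoder` in eval mode so samples do not interact through batch norm.
    """
    x = x.clone().requires_grad_(True)
    feats = encoder(x)                       # (B, D) representations
    total = torch.zeros(x.size(0), device=x.device)
    for _ in range(num_probes):
        v = torch.randn_like(feats)
        (grad,) = torch.autograd.grad((feats * v).sum(), x, retain_graph=True)
        total += grad.flatten(1).pow(2).sum(dim=1)
    return (total / num_probes).sqrt()       # larger norm -> more likely unknown


def m_ovr_loss(logits, targets, margin=0.5):
    """Hypothetical marginal one-vs-rest loss: each class is treated as a
    binary problem, with the target logit pushed above +margin and all other
    logits pushed below -margin, promoting inter-class separation."""
    one_hot = F.one_hot(targets, logits.size(1)).float()
    shifted = logits - margin * one_hot + margin * (1.0 - one_hot)
    return F.binary_cross_entropy_with_logits(shifted, one_hot)
```

At test time, a sample whose estimated Jacobian norm exceeds a chosen threshold would be flagged as unknown, mirroring the paper's claim that inter-class learning increases this norm for unseen inputs while intra-class learning shrinks it for known ones.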
Related papers
- Exploiting Structure in Offline Multi-Agent RL: The Benefits of Low Interaction Rank [52.831993899183416]
We introduce a structural assumption -- the interaction rank -- and establish that functions with low interaction rank are significantly more robust to distribution shift compared to general ones.
We demonstrate that utilizing function classes with low interaction rank, when combined with regularization and no-regret learning, admits decentralized, computationally and statistically efficient learning in offline MARL.
arXiv Detail & Related papers (2024-10-01T22:16:22Z)
- Relaxed Contrastive Learning for Federated Learning [48.96253206661268]
We propose a novel contrastive learning framework to address the challenges of data heterogeneity in federated learning.
Our framework outperforms all existing federated learning approaches by huge margins on the standard benchmarks.
arXiv Detail & Related papers (2024-01-10T04:55:24Z)
- LORD: Leveraging Open-Set Recognition with Unknown Data [10.200937444995944]
LORD is a framework to Leverage Open-set Recognition by exploiting unknown data.
We identify three model-agnostic training strategies that exploit background data and apply them to well-established classifiers.
arXiv Detail & Related papers (2023-08-24T06:12:41Z)
- Cluster-aware Semi-supervised Learning: Relational Knowledge Distillation Provably Learns Clustering [15.678104431835772]
We take an initial step toward a theoretical understanding of relational knowledge distillation (RKD).
For semi-supervised learning, we demonstrate the label efficiency of RKD through a general framework of cluster-aware learning.
We show that despite the common effect of learning accurate clusterings, RKD facilitates a "global" perspective.
arXiv Detail & Related papers (2023-07-20T17:05:51Z)
- HiURE: Hierarchical Exemplar Contrastive Learning for Unsupervised Relation Extraction [60.80849503639896]
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution.
We propose a novel contrastive learning framework named HiURE, which has the capability to derive hierarchical signals from relational feature space using cross hierarchy attention.
Experimental results on two public datasets demonstrate the advanced effectiveness and robustness of HiURE on unsupervised relation extraction when compared with state-of-the-art models.
arXiv Detail & Related papers (2022-05-04T17:56:48Z)
- A Survey on Open Set Recognition [0.0]
Open Set Recognition (OSR) deals with unknown situations that were not learned by the models during training.
In this paper, we provide a survey of existing works about OSR and distinguish their respective advantages and disadvantages.
It is concluded that OSR can appropriately deal with unknown instances in the real world, where capturing all possible classes in the training data is not practical.
arXiv Detail & Related papers (2021-08-18T16:40:03Z)
- MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
We propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them.
arXiv Detail & Related papers (2021-07-23T06:57:08Z)
- Deep Clustering by Semantic Contrastive Learning [67.28140787010447]
We introduce a novel variant called Semantic Contrastive Learning (SCL).
It explores the characteristics of both conventional contrastive learning and deep clustering.
It can amplify the strengths of contrastive learning and deep clustering in a unified approach.
arXiv Detail & Related papers (2021-03-03T20:20:48Z)
- Adversarial Reciprocal Points Learning for Open Set Recognition [21.963137599375862]
Open set recognition (OSR) aims to simultaneously classify the seen classes and identify the unseen classes as 'unknown'.
We formulate the open space risk problem from the perspective of multi-class integration.
A novel learning framework, termed Adversarial Reciprocal Point Learning (ARPL), is proposed to minimize the overlap between the known and unknown distributions.
arXiv Detail & Related papers (2021-03-01T12:25:45Z)
- Open-Set Recognition with Gaussian Mixture Variational Autoencoders [91.3247063132127]
At inference, open-set classification either assigns a sample to a known class seen during training or rejects it as unknown.
We train our model to cooperatively learn reconstruction and perform class-based clustering in the latent space.
Our model achieves more accurate and robust open-set classification results, with an average F1 improvement of 29.5%.
arXiv Detail & Related papers (2020-06-03T01:15:19Z)