Learning Classifiers of Prototypes and Reciprocal Points for Universal
Domain Adaptation
- URL: http://arxiv.org/abs/2212.08355v1
- Date: Fri, 16 Dec 2022 09:01:57 GMT
- Title: Learning Classifiers of Prototypes and Reciprocal Points for Universal
Domain Adaptation
- Authors: Sungsu Hur, Inkyu Shin, Kwanyong Park, Sanghyun Woo, In So Kweon
- Abstract summary: Universal Domain aims to transfer the knowledge between datasets by handling two shifts: domain-shift and categoryshift.
Main challenge is correctly distinguishing the unknown target samples while adapting the distribution of known class knowledge from source to target.
Most existing methods approach this problem by first training the target adapted known and then relying on the single threshold to distinguish unknown target samples.
- Score: 79.62038105814658
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Universal Domain Adaptation aims to transfer knowledge between
datasets by handling two shifts: domain-shift and category-shift. The main
challenge is correctly distinguishing the unknown target samples while adapting
the distribution of known-class knowledge from source to target. Most existing
methods approach this problem by first training a target-adapted known-class
classifier and then relying on a single threshold to distinguish unknown
target samples. However, this simple threshold-based approach prevents the
model from considering the underlying complexities that exist between the known
and unknown samples in the high-dimensional feature space. In this paper, we
propose a new approach that uses two sets of feature points, namely dual
Classifiers for Prototypes and Reciprocals (CPR). Our key idea is to associate
each prototype with the corresponding known-class features while pushing the
reciprocals apart from these prototypes so that they lie in the potential unknown
feature space. At test time, target samples are then classified as unknown if
they fall near any reciprocal point. To train our framework, we collect the
partial, confident target samples that are classified as known or unknown
through our proposed multi-criteria selection, and we additionally apply
entropy loss regularization to them. For further adaptation, we also apply
standard consistency regularization, which matches the predictions of two
different views of the input, to make the target feature space more compact. We
evaluate our proposal, CPR, on three standard benchmarks and achieve comparable
or new state-of-the-art results. We also provide extensive ablation experiments
to verify the main design choices of our framework.
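To make the prototype/reciprocal idea concrete, here is a minimal sketch in PyTorch. It is not the authors' released implementation; the function names, margin value, and loss forms are illustrative assumptions only. It shows a nearest-prototype classifier that rejects a target sample as unknown when it falls closer to a reciprocal point than to any prototype, plus simple stand-ins for the pull/push objectives and the consistency term mentioned in the abstract.

```python
import torch
import torch.nn.functional as F

def classify_with_prototypes_and_reciprocals(features, prototypes, reciprocals, unknown_label=-1):
    """features: (B, D); prototypes, reciprocals: (K, D) for K known classes."""
    d_proto = torch.cdist(features, prototypes)    # (B, K) distances to class prototypes
    d_recip = torch.cdist(features, reciprocals)   # (B, K) distances to reciprocal points
    pred = d_proto.argmin(dim=1)                   # nearest-prototype class
    # Reject as unknown when a sample lies closer to some reciprocal
    # than to its nearest prototype.
    is_unknown = d_recip.min(dim=1).values < d_proto.min(dim=1).values
    pred[is_unknown] = unknown_label
    return pred

def prototype_reciprocal_losses(features, labels, prototypes, reciprocals, margin=1.0):
    """Pull known features toward their class prototype; push the class
    reciprocal at least `margin` away from that prototype (illustrative)."""
    p = prototypes[labels]                         # (B, D) prototype of each sample's class
    r = reciprocals[labels]                        # (B, D) reciprocal of each sample's class
    pull = (features - p).pow(2).sum(dim=1).mean()
    push = F.relu(margin - (p - r).pow(2).sum(dim=1)).mean()
    return pull, push

def consistency_loss(logits_view1, logits_view2):
    """Standard consistency term: match predictions of two views of the input."""
    return F.mse_loss(logits_view1.softmax(dim=1), logits_view2.softmax(dim=1))
```

In the full method, the prototypes and reciprocals would be learnable classifier parameters trained jointly with the feature extractor, and the entropy and consistency terms would be applied only to the confidently selected target samples.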
Related papers
- FedHide: Federated Learning by Hiding in the Neighbors [12.71494268219787]
We propose a prototype-based federated learning method designed for embedding networks in classification or verification tasks.
Our approach generates proxy class prototypes by linearly combining the true class prototypes with their nearest neighbors.
This technique conceals the true class prototype while enabling clients to learn discriminative embedding networks.
arXiv Detail & Related papers (2024-09-12T07:37:49Z) - Uncertainty-guided Open-Set Source-Free Unsupervised Domain Adaptation with Target-private Class Segregation [22.474866164542302]
UDA approaches commonly assume that source and target domains share the same label space.
This paper considers the more challenging Source-Free Open-set Domain Adaptation (SF-OSDA) setting.
We propose a novel approach for SF-OSDA that exploits the granularity of target-private categories by segregating their samples into multiple unknown classes.
arXiv Detail & Related papers (2024-04-16T13:52:00Z) - Activate and Reject: Towards Safe Domain Generalization under Category
Shift [71.95548187205736]
We study a practical problem of Domain Generalization under Category Shift (DGCS).
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z) - Dual Adaptive Representation Alignment for Cross-domain Few-shot
Learning [58.837146720228226]
Few-shot learning aims to recognize novel queries with limited support samples by learning from base knowledge.
Recent progress in this setting assumes that the base knowledge and novel query samples are distributed in the same domains.
We propose to address the cross-domain few-shot learning problem where only extremely few samples are available in target domains.
arXiv Detail & Related papers (2023-06-18T09:52:16Z) - Self-Paced Learning for Open-Set Domain Adaptation [50.620824701934]
Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
arXiv Detail & Related papers (2023-03-10T14:11:09Z) - Exploiting Inter-Sample Affinity for Knowability-Aware Universal Domain
Adaptation [34.5943374866644]
Universal domain adaptation (UniDA) aims to transfer the knowledge of common classes from the source domain to the target domain without any prior knowledge on the label set.
Recent methods usually focus on categorizing a target sample into one of the source classes rather than distinguishing known and unknown samples.
We propose a novel UniDA framework in which inter-sample affinity is exploited.
arXiv Detail & Related papers (2022-07-19T13:49:30Z) - Conditional Variational Capsule Network for Open Set Recognition [64.18600886936557]
In open set recognition, a classifier has to detect unknown classes that are not known at training time.
Recently proposed Capsule Networks have been shown to outperform alternatives in many fields, particularly in image recognition.
In our proposal, during training, capsule features of the same known class are encouraged to match a pre-defined Gaussian, one for each class.
arXiv Detail & Related papers (2021-04-19T09:39:30Z) - OVANet: One-vs-All Network for Universal Domain Adaptation [78.86047802107025]
Existing methods manually set a threshold to reject unknown samples based on validation or a pre-defined ratio of unknown samples.
We propose a method to learn the threshold using source samples and to adapt it to the target domain.
Our idea is that a minimum inter-class distance in the source domain should be a good threshold to decide between known and unknown in the target (a rough illustration follows below).
arXiv Detail & Related papers (2021-04-07T18:36:31Z)
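As a rough illustration of the thresholding intuition in the last entry above (OVANet), the sketch below derives a rejection threshold from the minimum inter-class distance measured on source features. This is a simplified, distance-based stand-in, not OVANet's actual learned one-vs-all classifiers, and all function names are hypothetical.

```python
import torch

def min_interclass_threshold(source_feats, source_labels):
    """Class means on the source domain and the smallest distance between them."""
    classes = source_labels.unique()
    means = torch.stack([source_feats[source_labels == c].mean(dim=0) for c in classes])
    dists = torch.cdist(means, means)        # (K, K) pairwise class-mean distances
    dists.fill_diagonal_(float("inf"))       # ignore zero self-distances
    return means, dists.min()

def predict_with_rejection(target_feats, means, threshold):
    d = torch.cdist(target_feats, means)     # distance of each target sample to every class mean
    nearest = d.min(dim=1)
    # Unknown if even the nearest known class is farther than the source-derived threshold.
    is_unknown = nearest.values > threshold
    return nearest.indices, is_unknown
```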