Upcycling Models under Domain and Category Shift
- URL: http://arxiv.org/abs/2303.07110v1
- Date: Mon, 13 Mar 2023 13:44:04 GMT
- Title: Upcycling Models under Domain and Category Shift
- Authors: Sanqing Qu, Tianpei Zou, Florian Roehrbein, Cewu Lu, Guang Chen,
Dacheng Tao, Changjun Jiang
- Abstract summary: We introduce an innovative global and local clustering learning technique (GLC)
We design a novel, adaptive one-vs-all global clustering algorithm to achieve the distinction across different target classes.
Remarkably, in the most challenging open-partial-set DA scenario, GLC outperforms UMAD by 14.8% on the VisDA benchmark.
- Score: 95.22147885947732
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep neural networks (DNNs) often perform poorly in the presence of domain
shift and category shift. How to upcycle DNNs and adapt them to the target task
remains an important open problem. Unsupervised Domain Adaptation (UDA),
especially the recently proposed Source-free Domain Adaptation (SFDA), has emerged as a
promising technology to address this issue. Nevertheless, existing SFDA methods
require that the source and target domains share the same label space, and are
consequently applicable only to the vanilla closed-set setting. In this
paper, we take a step further and explore Source-free Universal Domain
Adaptation (SF-UniDA). The goal is to identify "known" data samples under both
domain and category shift, and reject those "unknown" data samples (not present
in source classes), using only the knowledge from a standard pre-trained source
model. To this end, we introduce an innovative global and local clustering
learning technique (GLC). Specifically, we design a novel, adaptive one-vs-all
global clustering algorithm to distinguish between different target classes,
and introduce a local k-NN clustering strategy to alleviate negative transfer.
We demonstrate the superiority of GLC on multiple benchmarks with
different category shift scenarios, including partial-set, open-set, and
open-partial-set DA. Remarkably, in the most challenging open-partial-set DA
scenario, GLC outperforms UMAD by 14.8% on the VisDA benchmark. The code is
available at https://github.com/ispc-lab/GLC.
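
As a rough illustration of the two components named in the abstract, the following sketch pairs a one-vs-all global clustering step (here simplified to a two-means split of per-class confidence scores) with local k-NN majority voting to smooth the resulting pseudo-labels. This is a minimal sketch under those simplifying assumptions, not the authors' implementation; the official code lives at the repository linked above.

```python
# Minimal sketch of GLC's two ingredients (simplified; NOT the official code):
# (i) one-vs-all global clustering to separate each "known" class from the rest,
# (ii) local k-NN voting to smooth the resulting pseudo-labels.
import numpy as np
from sklearn.cluster import KMeans

def global_one_vs_all_pseudo_labels(feats, prototypes, unknown=-1):
    """feats: (N, D) target features; prototypes: (C, D) source-class anchors."""
    sims = feats @ prototypes.T                    # (N, C) similarity scores
    best = sims.argmax(axis=1)
    pseudo = np.full(len(feats), unknown)
    for c in range(prototypes.shape[0]):
        # One-vs-all: split the scores for class c into a "near" and a "far"
        # cluster; keep label c only for near-cluster samples that prefer c.
        km = KMeans(n_clusters=2, n_init=10).fit(sims[:, c].reshape(-1, 1))
        near = km.cluster_centers_.argmax()
        pseudo[(km.labels_ == near) & (best == c)] = c
    return pseudo                                  # "unknown" samples keep -1

def local_knn_smoothing(feats, pseudo, k=5):
    """Replace each pseudo-label with the majority vote of its k nearest neighbours."""
    d = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)  # (N, N) distances
    nn = d.argsort(axis=1)[:, 1:k + 1]                          # skip self (column 0)
    smoothed = pseudo.copy()
    for i, idx in enumerate(nn):
        vals, counts = np.unique(pseudo[idx], return_counts=True)
        smoothed[i] = vals[counts.argmax()]
    return smoothed
```

The two-means split over confidence scores is only one way to realise an adaptive per-class decision; the paper's actual global clustering and its "unknown" rejection rule may differ.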
Related papers
- Memory-Efficient Pseudo-Labeling for Online Source-Free Universal Domain Adaptation using a Gaussian Mixture Model [3.1265626879839923]
In practice, domain shifts are likely to occur between training and test data, necessitating domain adaptation (DA) to adjust the pre-trained source model to the target domain.
Universal Domain Adaptation (UniDA) has gained attention for addressing the possibility of an additional category (label) shift between the source and target domains.
We propose a novel method that continuously captures the distribution of known classes in the feature space using a Gaussian mixture model (GMM); see the sketch after this entry.
Our approach achieves state-of-the-art results in all experiments on the DomainNet and Office-Home datasets.
arXiv Detail & Related papers (2024-07-19T11:13:31Z)
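
As a concrete (and deliberately simplified) reading of the GMM idea above, the sketch below keeps one diagonal-covariance Gaussian per known class and updates it with an exponential moving average, so no past target samples need to be stored; the momentum and rejection threshold are illustrative assumptions, not the paper's values.

```python
# Hedged sketch: per-class Gaussians in feature space, updated online
# (memory-efficient: no sample buffer). Momentum/threshold are assumptions.
import numpy as np

class OnlineClassGaussians:
    def __init__(self, num_classes, dim, momentum=0.99):
        self.mu = np.zeros((num_classes, dim))
        self.var = np.ones((num_classes, dim))    # diagonal covariance
        self.m = momentum

    def update(self, feat, cls):
        """EMA update of class `cls` with a single feature vector."""
        self.mu[cls] = self.m * self.mu[cls] + (1 - self.m) * feat
        diff = feat - self.mu[cls]
        self.var[cls] = self.m * self.var[cls] + (1 - self.m) * diff ** 2

    def pseudo_label(self, feat, reject_logp=-1e4):
        """Assign the most likely known class, or -1 ("unknown") if implausible."""
        diff = feat[None, :] - self.mu                               # (C, D)
        logp = -0.5 * (diff ** 2 / self.var + np.log(2 * np.pi * self.var)).sum(1)
        c = int(logp.argmax())
        return c if logp[c] > reject_logp else -1
```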
- GLC++: Source-Free Universal Domain Adaptation through Global-Local Clustering and Contrastive Affinity Learning [84.54244771470012]
Source-Free Universal Domain Adaptation (SF-UniDA) aims to accurately classify "known" data belonging to common categories.
We propose a novel Global and Local Clustering (GLC) technique, which comprises an adaptive one-vs-all global clustering algorithm.
We evolve GLC to GLC++, integrating a contrastive affinity learning strategy.
arXiv Detail & Related papers (2024-03-21T13:57:45Z)
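
The summary does not spell out how affinities are mined, so the sketch below assumes mutual nearest neighbours within a batch serve as positives in an InfoNCE-style loss; it illustrates the flavour of contrastive affinity learning rather than the actual GLC++ objective.

```python
# Hedged sketch of a contrastive affinity loss (positive mining by mutual
# nearest neighbours is an assumption, not necessarily GLC++'s rule).
import torch
import torch.nn.functional as F

def contrastive_affinity_loss(feats, temperature=0.1):
    """feats: (N, D) L2-normalised features for one batch."""
    sim = feats @ feats.t()                                       # (N, N) cosine similarity
    sim = sim - 2.0 * torch.eye(len(feats), device=feats.device)  # rule out self-pairs
    nn_idx = sim.argmax(dim=1)                                    # nearest neighbour per sample
    mutual = nn_idx[nn_idx] == torch.arange(len(feats), device=feats.device)
    if not mutual.any():                                          # no mutual-NN pairs in batch
        return feats.new_zeros(())
    # InfoNCE-style: the mutual nearest neighbour is the positive, every other
    # sample in the batch acts as a negative.
    return F.cross_entropy(sim[mutual] / temperature, nn_idx[mutual])
```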
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to get the best of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
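
For reference, a memory bank-based MMD term can be as small as the sketch below: a biased RBF-kernel estimate of squared MMD between banked source-like features and the current target-specific batch. The bandwidth and bank size are assumed hyperparameters, and this generic estimator stands in for whichever variant DaC actually uses.

```python
# Hedged sketch: RBF-kernel MMD^2 between a feature memory bank and a batch.
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD; x: (N, D), y: (M, D)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Illustrative usage with a FIFO bank of source-like features:
bank = torch.randn(256, 64)     # stored source-like features (assumed size)
batch = torch.randn(32, 64)     # current target-specific features
loss = rbf_mmd2(bank, batch)    # drives the two distributions together
```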
- OVANet: One-vs-All Network for Universal Domain Adaptation [78.86047802107025]
Existing methods manually set a threshold to reject unknown samples based on validation or a pre-defined ratio of unknown samples.
We propose a method to learn the threshold using source samples and to adapt it to the target domain.
Our idea is that the minimum inter-class distance in the source domain should be a good threshold for deciding between known and unknown in the target.
arXiv Detail & Related papers (2021-04-07T18:36:31Z)
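
To make that intuition concrete, the sketch below computes the minimum distance between source class centroids and uses it as the known-vs-unknown cutoff for target samples. Distance-to-centroid is a stand-in for OVANet's learned one-vs-all classifiers, so treat this as the intuition only, not the method.

```python
# Hedged sketch: min inter-class (centroid) distance as a rejection threshold.
import numpy as np

def min_interclass_threshold(src_feats, src_labels):
    classes = np.unique(src_labels)
    centroids = np.stack([src_feats[src_labels == c].mean(0) for c in classes])
    d = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
    d[np.diag_indices_from(d)] = np.inf      # ignore zero self-distances
    return d.min(), centroids

def label_or_reject(tgt_feat, centroids, threshold):
    d = np.linalg.norm(centroids - tgt_feat, axis=1)
    c = int(d.argmin())
    return c if d[c] < threshold else -1     # -1 marks "unknown"
```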
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process (see the sketch after this entry).
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
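
A hedged sketch of what a multi-sample contrastive loss over cross-domain pairs can look like; mining positives by matching (pseudo-)labels is an assumption made for illustration and need not match ILA-DA's affinity criterion.

```python
# Hedged sketch: multi-sample contrastive loss across source/target features.
import torch
import torch.nn.functional as F

def multi_sample_contrastive(src_feats, src_labels, tgt_feats, tgt_pseudo, tau=0.1):
    """Features are assumed L2-normalised; labels/pseudo-labels are int tensors."""
    sim = src_feats @ tgt_feats.t() / tau                     # (Ns, Nt) logits
    pos = (src_labels[:, None] == tgt_pseudo[None, :]).float()
    log_p = F.log_softmax(sim, dim=1)
    has_pos = pos.sum(1) > 0                                  # anchors with >=1 positive
    if not has_pos.any():
        return src_feats.new_zeros(())
    # Average log-likelihood over ALL positives per anchor ("multi-sample"),
    # contrasted against every target sample in the batch.
    loss = -(log_p * pos).sum(1)[has_pos] / pos.sum(1)[has_pos]
    return loss.mean()
```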
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)