Instance-level Heterogeneous Domain Adaptation for Limited-labeled
Sketch-to-Photo Retrieval
- URL: http://arxiv.org/abs/2211.14515v1
- Date: Sat, 26 Nov 2022 08:50:08 GMT
- Title: Instance-level Heterogeneous Domain Adaptation for Limited-labeled
Sketch-to-Photo Retrieval
- Authors: Fan Yang, Yang Wu, Zheng Wang, Xiang Li, Sakriani Sakti, Satoshi
Nakamura
- Abstract summary: We propose an Instance-level Heterogeneous Domain Adaptation (IHDA) framework.
We apply the fine-tuning strategy for identity label learning, aiming to transfer the instance-level knowledge in an inductive transfer manner.
Experiments show that our method has set a new state of the art on three sketch-to-photo image retrieval benchmarks without extra annotations.
- Score: 36.32367182571164
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Although sketch-to-photo retrieval has a wide range of applications, it is
costly to obtain paired and rich-labeled ground truth. In contrast, photo
retrieval data is easier to acquire. Therefore, previous works pre-train their
models on rich-labeled photo retrieval data (i.e., source domain) and then
fine-tune them on the limited-labeled sketch-to-photo retrieval data (i.e.,
target domain). However, without co-training source and target data, source
domain knowledge might be forgotten during the fine-tuning process, while
simply co-training them may cause negative transfer due to domain gaps.
Moreover, identity label spaces of source data and target data are generally
disjoint and therefore conventional category-level Domain Adaptation (DA) is
not directly applicable. To address these issues, we propose an Instance-level
Heterogeneous Domain Adaptation (IHDA) framework. We apply the fine-tuning
strategy for identity label learning, aiming to transfer the instance-level
knowledge in an inductive transfer manner. Meanwhile, labeled attributes from
the source data are selected to form a shared label space for source and target
domains. Guided by shared attributes, DA is utilized to bridge cross-dataset
domain gaps and heterogeneous domain gaps, which transfers instance-level
knowledge in a transductive transfer manner. Experiments show that our method
has set a new state of the art on three sketch-to-photo image retrieval
benchmarks without extra annotations, which opens the door to training more
effective models on limited-labeled heterogeneous image retrieval tasks.
Related code is available at https://github.com/fandulu/IHDA.
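As a rough illustration of how the abstract's three objectives could be combined, the following is a minimal NumPy sketch of an IHDA-style training loss: identity fine-tuning on the target data, a shared-attribute loss over both domains, and a feature-alignment penalty. The cross-entropy and linear-kernel MMD formulations and the weights `lam_attr` and `lam_da` are illustrative assumptions, not the paper's actual objective.

```python
import numpy as np

def cross_entropy(logits, labels):
    # Softmax cross-entropy, averaged over the batch.
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()

def mmd(x, y):
    # Linear-kernel Maximum Mean Discrepancy between two feature batches:
    # squared distance between the batch means.
    d = x.mean(axis=0) - y.mean(axis=0)
    return float(d @ d)

def ihda_style_loss(src_feat, tgt_feat, src_attr_logits, tgt_attr_logits,
                    tgt_id_logits, src_attr, tgt_attr, tgt_id,
                    lam_attr=1.0, lam_da=0.1):
    # Identity loss: fine-tuning on the limited-labeled target data only
    # (inductive transfer of instance-level knowledge).
    l_id = cross_entropy(tgt_id_logits, tgt_id)
    # Shared-attribute loss: attributes form a label space common to both
    # domains, so both source and target batches are supervised here.
    l_attr = (cross_entropy(src_attr_logits, src_attr)
              + cross_entropy(tgt_attr_logits, tgt_attr))
    # Alignment loss: pull source and target feature distributions together.
    l_da = mmd(src_feat, tgt_feat)
    return l_id + lam_attr * l_attr + lam_da * l_da
```

All three terms operate on the same backbone features, which is what lets the attribute-guided alignment transfer instance-level knowledge without requiring shared identity labels.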
Related papers
- FPL+: Filtered Pseudo Label-based Unsupervised Cross-Modality Adaptation for 3D Medical Image Segmentation [14.925162565630185]
We propose an enhanced Filtered Pseudo Label (FPL+)-based Unsupervised Domain Adaptation (UDA) method for 3D medical image segmentation.
It first uses cross-domain data augmentation to translate labeled images in the source domain to a dual-domain training set consisting of a pseudo source-domain set and a pseudo target-domain set.
We then combine labeled source-domain images and target-domain images with pseudo labels to train a final segmentor, where image-level weighting based on uncertainty estimation and pixel-level weighting based on dual-domain consensus are proposed to mitigate the adverse effect of noisy pseudo labels.
arXiv Detail & Related papers (2024-04-07T14:21:37Z)
- Adapt Anything: Tailor Any Image Classifiers across Domains And Categories Using Text-to-Image Diffusion Models [82.95591765009105]
We aim to study if a modern text-to-image diffusion model can tailor any task-adaptive image classifier across domains and categories.
We utilize only one off-the-shelf text-to-image model to synthesize images with category labels derived from the corresponding text prompts.
arXiv Detail & Related papers (2023-10-25T11:58:14Z)
- Source-Free Domain Adaptation for Medical Image Segmentation via Prototype-Anchored Feature Alignment and Contrastive Learning [57.43322536718131]
We present a two-stage source-free domain adaptation (SFDA) framework for medical image segmentation.
In the prototype-anchored feature alignment stage, we first utilize the weights of the pre-trained pixel-wise classifier as source prototypes.
Then, we introduce a bi-directional transport to align the target features with class prototypes by minimizing the expected transport cost.
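The prototype-anchored idea above can be sketched compactly: reuse the rows of the pre-trained classifier's weight matrix as class prototypes and softly assign target features to them. The cosine-similarity softmax below is a simplified stand-in for the paper's bi-directional transport, used here only to illustrate the anchoring step.

```python
import numpy as np

def prototype_assign(features, classifier_weights):
    # Each row of the pre-trained classifier weight matrix acts as a class
    # prototype; target features are softly assigned to prototypes by
    # cosine similarity followed by a row-wise softmax.
    protos = classifier_weights / np.linalg.norm(classifier_weights, axis=1, keepdims=True)
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ protos.T                       # (N, num_classes) cosine similarities
    e = np.exp(sim - sim.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)      # each row sums to 1
```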
arXiv Detail & Related papers (2023-07-19T06:07:12Z)
- One-shot Unsupervised Domain Adaptation with Personalized Diffusion Models [15.590759602379517]
Adapting a segmentation model from a labeled source domain to a target domain is one of the most challenging problems in domain adaptation.
We leverage text-to-image diffusion models to generate a synthetic target dataset with photo-realistic images.
Experiments show that our method surpasses the state-of-the-art OSUDA methods by up to +7.1%.
arXiv Detail & Related papers (2023-03-31T14:16:38Z)
- Disentangled Unsupervised Image Translation via Restricted Information Flow [61.44666983942965]
Many state-of-the-art methods hard-code the desired shared-vs-specific split into their architecture.
We propose a new method that does not rely on inductive architectural biases.
We show that the proposed method achieves consistently high manipulation accuracy across two synthetic and one natural dataset.
arXiv Detail & Related papers (2021-11-26T00:27:54Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
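A cross-domain contrastive objective of this kind can be sketched in a few lines: source samples act as anchors, and target samples sharing the same (pseudo-)label are treated as positives. The InfoNCE form, the temperature `tau`, and the use of pseudo-labels on the target side are common conventions assumed here, not details taken from the paper.

```python
import numpy as np

def cross_domain_info_nce(src, tgt, y_src, y_tgt, tau=0.1):
    # L2-normalize features so the dot product is cosine similarity.
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sim = src @ tgt.T / tau                        # (Ns, Nt) cross-domain similarities
    e = np.exp(sim - sim.max(axis=1, keepdims=True))
    pos = (y_src[:, None] == y_tgt[None, :])       # cross-domain positives by (pseudo-)label
    losses = -np.log((e * pos).sum(axis=1) / e.sum(axis=1))
    return losses[pos.any(axis=1)].mean()          # skip anchors with no positive pair
```

The loss shrinks when same-class features agree across domains and grows when they do not, which is exactly the discrepancy-reduction behavior the summary describes.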
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Consistency Regularization with High-dimensional Non-adversarial Source-guided Perturbation for Unsupervised Domain Adaptation in Segmentation [15.428323201750144]
BiSIDA employs consistency regularization to efficiently exploit information from the unlabeled target dataset.
BiSIDA achieves new state-of-the-art on two commonly-used synthetic-to-real domain adaptation benchmarks.
arXiv Detail & Related papers (2020-09-18T03:26:44Z)
- Source Free Domain Adaptation with Image Translation [33.46614159616359]
Effort in releasing large-scale datasets may be compromised by privacy and intellectual property considerations.
A feasible alternative is to release pre-trained models instead.
We propose an image translation approach that transfers the style of target images to that of unseen source images.
arXiv Detail & Related papers (2020-08-17T17:57:33Z)
- TriGAN: Image-to-Image Translation for Multi-Source Domain Adaptation [82.52514546441247]
We propose the first approach for Multi-Source Domain Adaptation (MSDA) based on Generative Adversarial Networks.
Our method is inspired by the observation that the appearance of a given image depends on three factors: the domain, the style and the content.
We test our approach using common MSDA benchmarks, showing that it outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-04-19T05:07:22Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts performance of target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.