Universal Cross-Domain Retrieval: Generalizing Across Classes and
Domains
- URL: http://arxiv.org/abs/2108.08356v1
- Date: Wed, 18 Aug 2021 19:21:04 GMT
- Title: Universal Cross-Domain Retrieval: Generalizing Across Classes and
Domains
- Authors: Soumava Paul, Titir Dutta, Soma Biswas
- Abstract summary: We propose SnMpNet, which incorporates two novel losses to account for the unseen classes and domains encountered during testing.
Specifically, we introduce a novel Semantic Neighborhood loss to bridge the knowledge gap between seen and unseen classes.
We also introduce mix-up based supervision at both the image and semantic levels of the data, for training with the Mixture Prediction loss.
- Score: 27.920212868483702
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, for the first time, we address the problem of universal
cross-domain retrieval, where the test data can belong to classes or domains
which are unseen during training. Due to the dynamically increasing number of
categories and the practical constraint of training on every possible domain, which
requires large amounts of data, generalizing to both unseen classes and domains
is important. Towards that goal, we propose SnMpNet (Semantic Neighbourhood and
Mixture Prediction Network), which incorporates two novel losses to account for
the unseen classes and domains encountered during testing. Specifically, we
introduce a novel Semantic Neighborhood loss to bridge the knowledge gap
between seen and unseen classes and ensure that the latent space embedding of
the unseen classes is semantically meaningful with respect to its neighboring
classes. We also introduce a mix-up based supervision at image-level as well as
semantic-level of the data for training with the Mixture Prediction loss, which
helps in efficient retrieval when the query belongs to an unseen domain. These
losses are incorporated on the SE-ResNet50 backbone to obtain SnMpNet.
Extensive experiments on two large-scale datasets, Sketchy Extended and
DomainNet, and thorough comparisons with state-of-the-art justify the
effectiveness of the proposed model.
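The image-level half of this mix-up supervision follows the standard mixup recipe: blend two inputs and their targets with a Beta-sampled coefficient. The sketch below is a generic illustration of that recipe, not the authors' exact Mixture Prediction loss; the function name, Beta parameter, and use of raw arrays are illustrative assumptions.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Convexly combine two samples and their targets with a Beta-sampled
    coefficient (standard mixup). SnMpNet applies this idea at both the
    image level and the semantic (class-embedding) level; this sketch only
    shows the generic mixing step."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    x_mix = lam * x1 + (1.0 - lam) * x2   # mixed input (e.g. image tensor)
    y_mix = lam * y1 + (1.0 - lam) * y2   # mixed target (one-hot or embedding)
    return x_mix, y_mix, lam
```

The same call works unchanged at the semantic level by passing class embeddings instead of one-hot labels as `y1`/`y2`.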
Related papers
- MemSAC: Memory Augmented Sample Consistency for Large Scale Unsupervised
Domain Adaptation [71.4942277262067]
We propose MemSAC, which exploits sample-level similarity across source and target domains to achieve discriminative transfer.
We provide in-depth analysis and insights into the effectiveness of MemSAC.
arXiv Detail & Related papers (2022-07-25T17:55:28Z)
- Feature Representation Learning for Unsupervised Cross-domain Image
Retrieval [73.3152060987961]
Current supervised cross-domain image retrieval methods can achieve excellent performance.
The cost of data collection and labeling imposes an intractable barrier to practical deployment in real applications.
We introduce a new cluster-wise contrastive learning mechanism to help extract class semantic-aware features.
arXiv Detail & Related papers (2022-07-20T07:52:14Z)
- Few-Shot Object Detection in Unseen Domains [4.36080478413575]
Few-shot object detection (FSOD) has thrived in recent years to learn novel object classes with limited data.
We propose various data augmentation techniques on the few shots of novel classes to account for all possible domain-specific information.
Our experiments on the T-LESS dataset show that the proposed approach succeeds in alleviating the domain gap considerably.
arXiv Detail & Related papers (2022-04-11T13:16:41Z)
- Contrastive Learning and Self-Training for Unsupervised Domain
Adaptation in Semantic Segmentation [71.77083272602525]
Unsupervised domain adaptation (UDA) attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
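The temporal-ensemble idea mentioned above can be sketched as follows: keep an exponential moving average (EMA) of the model's softmax outputs across training steps and derive pseudo-labels only where the ensembled prediction is confident. This is a generic illustration of the idea, not that paper's implementation; the class name, decay, and confidence threshold are illustrative assumptions.

```python
import numpy as np

class TemporalEnsemble:
    """Memory-efficient temporal ensembling of class probabilities: one EMA
    buffer replaces storing every past prediction. Pseudo-labels are emitted
    only where the ensembled confidence clears a threshold; -1 marks entries
    ignored by the self-training loss."""

    def __init__(self, shape, num_classes, decay=0.99):
        # shape: spatial shape of the prediction map, e.g. (H, W)
        self.ema = np.full(shape + (num_classes,), 1.0 / num_classes)
        self.decay = decay

    def update(self, probs):
        # probs: current softmax output with shape == self.ema.shape
        self.ema = self.decay * self.ema + (1.0 - self.decay) * probs
        return self.ema

    def pseudo_labels(self, threshold=0.9):
        conf = self.ema.max(axis=-1)       # ensembled confidence per entry
        labels = self.ema.argmax(axis=-1)  # most likely class per entry
        return np.where(conf >= threshold, labels, -1)
```

Because the buffer starts at a uniform distribution, early pseudo-labels are all -1; labels appear only once the EMA has accumulated enough consistent evidence, which is what makes the resulting pseudo-labels reliable.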
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Semi-supervised Domain Adaptation based on Dual-level Domain Mixing for
Semantic Segmentation [34.790169990156684]
We focus on a more practical setting of semi-supervised domain adaptation (SSDA) where both a small set of labeled target data and large amounts of labeled source data are available.
Two kinds of data mixing methods are proposed to reduce the domain gap at the region level and the sample level, respectively.
We can obtain two complementary domain-mixed teachers based on dual-level mixed data from holistic and partial views respectively.
arXiv Detail & Related papers (2021-03-08T12:33:17Z)
- Mixup Regularized Adversarial Networks for Multi-Domain Text
Classification [16.229317527580072]
Using the shared-private paradigm and adversarial training has significantly improved the performance of multi-domain text classification (MDTC) models.
However, there are two issues for the existing methods.
We propose a mixup regularized adversarial network (MRAN) to address these two issues.
arXiv Detail & Related papers (2021-01-31T15:24:05Z)
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic
Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks across source and target domains with deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or the estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state-of-the-arts on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- CMT in TREC-COVID Round 2: Mitigating the Generalization Gaps from Web
to Special Domain Search [89.48123965553098]
This paper presents a search system to alleviate the special-domain adaptation problem.
The system utilizes the domain-adaptive pretraining and few-shot learning technologies to help neural rankers mitigate the domain discrepancy.
Our system performs the best among the non-manual runs in Round 2 of the TREC-COVID task.
arXiv Detail & Related papers (2020-11-03T09:10:48Z)
- Universal-RCNN: Universal Object Detector via Transferable Graph R-CNN [117.80737222754306]
We present a novel universal object detector called Universal-RCNN.
We first generate a global semantic pool by integrating the high-level semantic representations of all categories.
An Intra-Domain Reasoning Module learns and propagates the sparse graph representation within one dataset guided by a spatial-aware GCN.
arXiv Detail & Related papers (2020-02-18T07:57:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.