Category Contrast for Unsupervised Domain Adaptation in Visual Tasks
- URL: http://arxiv.org/abs/2106.02885v2
- Date: Tue, 8 Jun 2021 03:08:14 GMT
- Title: Category Contrast for Unsupervised Domain Adaptation in Visual Tasks
- Authors: Jiaxing Huang, Dayan Guan, Aoran Xiao, Shijian Lu, Ling Shao
- Abstract summary: We propose a novel Category Contrast technique (CaCo) that introduces semantic priors on top of instance discrimination for visual UDA tasks.
CaCo is complementary to existing UDA methods and generalizable to other learning setups such as semi-supervised learning, unsupervised model adaptation, etc.
- Score: 92.9990560760593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Instance contrast for unsupervised representation learning has achieved great
success in recent years. In this work, we explore the idea of instance
contrastive learning in unsupervised domain adaptation (UDA) and propose a
novel Category Contrast technique (CaCo) that introduces semantic priors on top
of instance discrimination for visual UDA tasks. By considering instance
contrastive learning as a dictionary look-up operation, we construct a
semantics-aware dictionary with samples from both source and target domains
where each target sample is assigned a (pseudo) category label based on the
category priors of source samples. This allows category contrastive learning
(between target queries and the category-level dictionary) for
category-discriminative yet domain-invariant feature representations: samples
of the same category (from either source or target domain) are pulled closer
while those of different categories are pushed apart simultaneously. Extensive
UDA experiments in multiple visual tasks (e.g., segmentation, classification
and detection) show that the simple implementation of CaCo achieves superior
performance as compared with the highly-optimized state-of-the-art methods.
Analytically and empirically, the experiments also demonstrate that CaCo is
complementary to existing UDA methods and generalizable to other learning
setups such as semi-supervised learning, unsupervised model adaptation, etc.
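The category contrast idea described in the abstract can be sketched as a small, hypothetical example: each target query is scored against a set of category-level dictionary prototypes, and an InfoNCE-style cross-entropy loss pulls the query toward the prototype of its (pseudo) category while pushing it away from the others. The function and variable names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def category_contrast_loss(query, prototypes, pseudo_label, temperature=0.07):
    """InfoNCE-style loss over category prototypes (illustrative sketch).

    query:        (d,) feature of one target sample
    prototypes:   (K, d) category-level dictionary built from source/target samples
    pseudo_label: int, category assigned to the query via source category priors
    """
    q = query / np.linalg.norm(query)                                 # L2-normalize query
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = p @ q / temperature                                      # similarity to each category
    logits -= logits.max()                                            # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    # Minimizing -log p(pseudo_label) pulls the query toward its category
    # prototype and pushes it away from all other categories simultaneously.
    return float(-np.log(probs[pseudo_label]))
```

As a sanity check of the intended behavior: with near-orthogonal prototypes and a query aligned with one of them, the loss under the matching pseudo label is lower than under any mismatched label.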
Related papers
- Controlling Out-of-Domain Gaps in LLMs for Genre Classification and Generated Text Detection [0.20482269513546458]
This study demonstrates that the modern generation of Large Language Models (LLMs) suffers from the same out-of-domain (OOD) performance gap observed in prior research on pre-trained Language Models (PLMs).
We introduce a method that controls which predictive indicators are used and which are excluded during classification.
This approach reduces the OOD gap by up to 20 percentage points in a few-shot setup.
arXiv Detail & Related papers (2024-12-29T21:54:39Z)
- Category-Adaptive Cross-Modal Semantic Refinement and Transfer for Open-Vocabulary Multi-Label Recognition [59.203152078315235]
We propose a novel category-adaptive cross-modal semantic refinement and transfer (C$2$SRT) framework to explore the semantic correlation.
The proposed framework consists of two complementary modules, i.e., intra-category semantic refinement (ISR) module and inter-category semantic transfer (IST) module.
Experiments on OV-MLR benchmarks clearly demonstrate that the proposed C$2$SRT framework outperforms current state-of-the-art algorithms.
arXiv Detail & Related papers (2024-12-09T04:00:18Z)
- Preview-based Category Contrastive Learning for Knowledge Distillation [53.551002781828146]
We propose a novel preview-based category contrastive learning method for knowledge distillation (PCKD).
It first distills the structural knowledge of both instance-level feature correspondence and the relation between instance features and category centers.
It can explicitly optimize the category representation and explore the distinct correlation between representations of instances and categories.
arXiv Detail & Related papers (2024-10-18T03:31:00Z)
- Negative Prototypes Guided Contrastive Learning for WSOD [8.102080369924911]
Weakly Supervised Object Detection (WSOD) with only image-level annotation has recently attracted wide attention.
We propose the Negative Prototypes Guided Contrastive learning architecture.
Our proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-06-04T08:16:26Z)
- Automatically Discovering Novel Visual Categories with Self-supervised Prototype Learning [68.63910949916209]
This paper tackles the problem of novel category discovery (NCD), which aims to discriminate unknown categories in large-scale image collections.
We propose a novel adaptive prototype learning method consisting of two main stages: prototypical representation learning and prototypical self-training.
We conduct extensive experiments on four benchmark datasets and demonstrate the effectiveness and robustness of the proposed method with state-of-the-art performance.
arXiv Detail & Related papers (2022-08-01T16:34:33Z)
- Domain Adaptive Nuclei Instance Segmentation and Classification via Category-aware Feature Alignment and Pseudo-labelling [65.40672505658213]
We propose a novel deep neural network, namely Category-Aware feature alignment and Pseudo-Labelling Network (CAPL-Net) for UDA nuclei instance segmentation and classification.
Our approach outperforms state-of-the-art UDA methods by a remarkable margin.
arXiv Detail & Related papers (2022-07-04T07:05:06Z)
- Explicitly Modeling the Discriminability for Instance-Aware Visual Object Tracking [13.311777431243296]
We propose a novel Instance-Aware Tracker (IAT) to excavate the discriminability of feature representations.
We implement two variants of the proposed IAT, including a video-level one and an object-level one.
Both versions achieve leading results against state-of-the-art methods while running at 30 FPS.
arXiv Detail & Related papers (2021-10-28T11:24:01Z)
- Modeling Discriminative Representations for Out-of-Domain Detection with Supervised Contrastive Learning [16.77134235390429]
A key challenge of OOD detection is to learn discriminative semantic features.
We propose a supervised contrastive learning objective to minimize intra-class variance.
We employ an adversarial augmentation mechanism to obtain pseudo diverse views of a sample.
arXiv Detail & Related papers (2021-05-29T12:54:22Z)
- Joint Visual and Temporal Consistency for Unsupervised Domain Adaptive Person Re-Identification [64.37745443119942]
This paper jointly enforces visual and temporal consistency by combining a local one-hot classification with a global multi-class classification.
Experimental results on three large-scale ReID datasets demonstrate the superiority of the proposed method in both purely unsupervised and unsupervised domain adaptive ReID tasks.
arXiv Detail & Related papers (2020-07-21T14:31:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.