Category Contrast for Unsupervised Domain Adaptation in Visual Tasks
- URL: http://arxiv.org/abs/2106.02885v2
- Date: Tue, 8 Jun 2021 03:08:14 GMT
- Title: Category Contrast for Unsupervised Domain Adaptation in Visual Tasks
- Authors: Jiaxing Huang, Dayan Guan, Aoran Xiao, Shijian Lu, Ling Shao
- Abstract summary: We propose a novel Category Contrast technique (CaCo) that introduces semantic priors on top of instance discrimination for visual UDA tasks.
CaCo is complementary to existing UDA methods and generalizable to other learning setups such as semi-supervised learning, unsupervised model adaptation, etc.
- Score: 92.9990560760593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Instance contrast for unsupervised representation learning has achieved great
success in recent years. In this work, we explore the idea of instance
contrastive learning in unsupervised domain adaptation (UDA) and propose a
novel Category Contrast technique (CaCo) that introduces semantic priors on top
of instance discrimination for visual UDA tasks. By considering instance
contrastive learning as a dictionary look-up operation, we construct a
semantics-aware dictionary with samples from both source and target domains
where each target sample is assigned a (pseudo) category label based on the
category priors of source samples. This allows category contrastive learning
(between target queries and the category-level dictionary) for
category-discriminative yet domain-invariant feature representations: samples
of the same category (from either source or target domain) are pulled closer
while those of different categories are pushed apart simultaneously. Extensive
UDA experiments in multiple visual tasks (e.g., segmentation, classification
and detection) show that the simple implementation of CaCo achieves superior
performance compared with highly-optimized state-of-the-art methods.
Analytically and empirically, the experiments also demonstrate that CaCo is
complementary to existing UDA methods and generalizable to other learning
setups such as semi-supervised learning, unsupervised model adaptation, etc.
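Below is a minimal, illustrative sketch of the two ingredients the abstract describes: assigning each target sample a (pseudo) category label from the category priors of source samples, and a category-level contrastive (dictionary look-up) loss that pulls same-category features together while pushing different categories apart. This is not the authors' released implementation; the names (source_prototypes, assign_pseudo_labels, category_contrast_loss, temperature) and the simplification of using source prototypes as the category-level dictionary are assumptions for illustration only.

```python
# Hedged sketch of category contrast; all helper names are illustrative assumptions.
import torch
import torch.nn.functional as F


def source_prototypes(source_feats, source_labels, num_classes):
    """Mean (L2-normalized) source feature per category: the category priors."""
    feats = F.normalize(source_feats, dim=1)
    protos = torch.zeros(num_classes, feats.size(1))
    for c in range(num_classes):
        mask = source_labels == c
        if mask.any():
            protos[c] = F.normalize(feats[mask].mean(dim=0), dim=0)
    return protos


def assign_pseudo_labels(target_feats, protos):
    """Pseudo-label each target sample by its most similar source category prior."""
    sims = F.normalize(target_feats, dim=1) @ protos.t()
    return sims.argmax(dim=1)


def category_contrast_loss(queries, labels, category_dictionary, temperature=0.07):
    """Category contrast as a dictionary look-up.

    queries:             (B, D) target features.
    labels:              (B,)   (pseudo) category labels of the queries.
    category_dictionary: (C, D) one entry per category (here, for brevity, the
                          source prototypes; in general built from both domains).
    Same-category features are pulled toward their dictionary entry while the
    other categories act as negatives and are pushed apart.
    """
    logits = F.normalize(queries, dim=1) @ F.normalize(category_dictionary, dim=1).t()
    return F.cross_entropy(logits / temperature, labels)


if __name__ == "__main__":
    torch.manual_seed(0)
    num_classes, dim = 5, 128
    src_x, src_y = torch.randn(64, dim), torch.randint(0, num_classes, (64,))
    tgt_x = torch.randn(16, dim)

    protos = source_prototypes(src_x, src_y, num_classes)      # category priors
    tgt_pseudo = assign_pseudo_labels(tgt_x, protos)            # (pseudo) labels
    loss = category_contrast_loss(tgt_x, tgt_pseudo, protos)    # category contrast
    print(float(loss))
```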
Related papers
- Preview-based Category Contrastive Learning for Knowledge Distillation [53.551002781828146]
We propose a novel preview-based category contrastive learning method for knowledge distillation (PCKD)
It first distills the structural knowledge of both instance-level feature correspondence and the relation between instance features and category centers.
It can explicitly optimize the category representation and explore the distinct correlation between representations of instances and categories.
arXiv Detail & Related papers (2024-10-18T03:31:00Z)
- Contextuality Helps Representation Learning for Generalized Category Discovery [5.885208652383516]
This paper introduces a novel approach to Generalized Category Discovery (GCD) by leveraging the concept of contextuality.
Our model integrates two levels of contextuality: instance-level, where nearest-neighbor contexts are used for contrastive learning, and cluster-level, which also employs contrastive learning.
The integration of the contextual information effectively improves the feature learning and thereby the classification accuracy of all categories.
arXiv Detail & Related papers (2024-07-29T07:30:41Z)
- Negative Prototypes Guided Contrastive Learning for WSOD [8.102080369924911]
Weakly Supervised Object Detection (WSOD) with only image-level annotation has recently attracted wide attention.
We propose the Negative Prototypes Guided Contrastive Learning architecture.
The proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-06-04T08:16:26Z)
- Automatically Discovering Novel Visual Categories with Self-supervised Prototype Learning [68.63910949916209]
This paper tackles the problem of novel category discovery (NCD), which aims to discriminate unknown categories in large-scale image collections.
We propose a novel adaptive prototype learning method consisting of two main stages: prototypical representation learning and prototypical self-training.
We conduct extensive experiments on four benchmark datasets and demonstrate the effectiveness and robustness of the proposed method with state-of-the-art performance.
arXiv Detail & Related papers (2022-08-01T16:34:33Z)
- Domain Adaptive Nuclei Instance Segmentation and Classification via Category-aware Feature Alignment and Pseudo-labelling [65.40672505658213]
We propose a novel deep neural network, namely Category-Aware feature alignment and Pseudo-Labelling Network (CAPL-Net) for UDA nuclei instance segmentation and classification.
Our approach outperforms state-of-the-art UDA methods by a remarkable margin.
arXiv Detail & Related papers (2022-07-04T07:05:06Z)
- Semantic Representation and Dependency Learning for Multi-Label Image Recognition [76.52120002993728]
We propose a novel and effective semantic representation and dependency learning (SRDL) framework to learn category-specific semantic representation for each category.
Specifically, we design a category-specific attentional regions (CAR) module to generate channel- and spatial-wise attention matrices that guide the model.
We also design an object erasing (OE) module to implicitly learn semantic dependency among categories by erasing semantic-aware regions.
arXiv Detail & Related papers (2022-04-08T00:55:15Z)
- Explicitly Modeling the Discriminability for Instance-Aware Visual Object Tracking [13.311777431243296]
We propose a novel Instance-Aware Tracker (IAT) to excavate the discriminability of feature representations.
We implement two variants of the proposed IAT, including a video-level one and an object-level one.
Both versions achieve leading results against state-of-the-art methods while running at 30 FPS.
arXiv Detail & Related papers (2021-10-28T11:24:01Z)
- Modeling Discriminative Representations for Out-of-Domain Detection with Supervised Contrastive Learning [16.77134235390429]
A key challenge of OOD detection is to learn discriminative semantic features.
We propose a supervised contrastive learning objective to minimize intra-class variance.
We employ an adversarial augmentation mechanism to obtain pseudo diverse views of a sample.
arXiv Detail & Related papers (2021-05-29T12:54:22Z)
- Joint Visual and Temporal Consistency for Unsupervised Domain Adaptive Person Re-Identification [64.37745443119942]
This paper jointly enforces visual and temporal consistency through the combination of a local one-hot classification and a global multi-class classification.
Experimental results on three large-scale ReID datasets demonstrate the superiority of the proposed method in both purely unsupervised and unsupervised domain adaptive ReID tasks.
arXiv Detail & Related papers (2020-07-21T14:31:27Z)