Coarse to Fine: Domain Adaptive Crowd Counting via Adversarial Scoring
Network
- URL: http://arxiv.org/abs/2107.12858v1
- Date: Tue, 27 Jul 2021 14:47:24 GMT
- Authors: Zhikang Zou, Xiaoye Qu, Pan Zhou, Shuangjie Xu, Xiaoqing Ye, Wenhao
Wu, Jin Ye
- Abstract summary: This paper proposes a novel adversarial scoring network (ASNet) to bridge the gap across domains from coarse to fine granularity.
Three sets of migration experiments show that the proposed method achieves state-of-the-art counting performance.
- Score: 58.05473757538834
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent deep networks have convincingly demonstrated high capability in crowd
counting, which is a critical task attracting widespread attention due to its
various industrial applications. Despite such progress, trained data-dependent
models usually can not generalize well to unseen scenarios because of the
inherent domain shift. To address this issue, this paper proposes a novel
adversarial scoring network (ASNet) to gradually bridge the gap across domains
from coarse to fine granularity. Specifically, at the coarse-grained stage, we
design a dual-discriminator strategy to adapt the source domain to be close to the
target from the perspectives of both global and local feature space via
adversarial learning. The distributions between two domains can thus be aligned
roughly. At the fine-grained stage, we explore the transferability of source
characteristics by scoring how similar the source samples are to target ones
at multiple levels, based on the generative probability derived from the coarse stage.
Guided by these hierarchical scores, the transferable source features are
properly selected to enhance the knowledge transfer during the adaptation
process. With the coarse-to-fine design, the generalization bottleneck induced
from the domain discrepancy can be effectively alleviated. Three sets of
migration experiments show that the proposed method achieves state-of-the-art
counting performance compared with major unsupervised methods.
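The two stages described in the abstract can be sketched loosely in code. The following is a minimal numpy sketch of the idea, assuming a binary cross-entropy form for the dual-discriminator losses and a simple normalization for the transferability scores; the function names and the exact score computation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy of discriminator outputs p against domain labels y."""
    p = np.clip(np.asarray(p, dtype=float), 1e-7, 1 - 1e-7)
    y = np.asarray(y, dtype=float)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def coarse_adversarial_loss(global_probs, local_probs, domain_labels):
    """Coarse stage: sum of global- and local-discriminator adversarial losses,
    roughly aligning source and target feature distributions."""
    return bce(global_probs, domain_labels) + bce(local_probs, domain_labels)

def transferability_scores(target_probs_for_source):
    """Fine stage: score each source sample by the discriminator's belief that
    it belongs to the target domain, normalized to preserve the loss scale."""
    p = np.asarray(target_probs_for_source, dtype=float)
    return p * len(p) / p.sum()

def weighted_counting_loss(per_sample_losses, scores):
    """Down-weight source samples that look unlike the target domain."""
    return float(np.mean(np.asarray(per_sample_losses, dtype=float) * scores))
```

In this reading, source samples that the coarse-stage discriminator already mistakes for target samples receive higher scores and thus contribute more to the counting loss during adaptation, which is one plausible way to select transferable source features.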
Related papers
- Contrastive Adversarial Training for Unsupervised Domain Adaptation [2.432037584128226]
Domain adversarial training has been successfully adopted for various domain adaptation tasks.
Large models make adversarial training easily biased towards the source domain and hardly adapted to the target domain.
We propose a contrastive adversarial training (CAT) approach that leverages the labeled source domain samples to reinforce and regulate the feature generation for the target domain.
arXiv Detail & Related papers (2024-07-17T17:59:21Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Multi-Anchor Active Domain Adaptation for Semantic Segmentation [25.93409207335442]
Unsupervised domain adaptation has proven to be an effective approach for alleviating the intensive workload of manual annotation.
We propose to introduce a novel multi-anchor based active learning strategy to assist domain adaptation regarding the semantic segmentation task.
arXiv Detail & Related papers (2021-08-18T07:33:13Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
- Deep Residual Correction Network for Partial Domain Adaptation [79.27753273651747]
Deep domain adaptation methods have achieved appealing performance by learning transferable representations from a well-labeled source domain to a different but related unlabeled target domain.
This paper proposes an efficiently-implemented Deep Residual Correction Network (DRCN).
Comprehensive experiments on partial, traditional and fine-grained cross-domain visual recognition demonstrate that DRCN is superior to the competitive deep domain adaptation approaches.
arXiv Detail & Related papers (2020-04-10T06:07:16Z)
- Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)
- Contradictory Structure Learning for Semi-supervised Domain Adaptation [67.89665267469053]
Current adversarial adaptation methods attempt to align the cross-domain features.
Two challenges remain unsolved: 1) the conditional distribution mismatch and 2) the bias of the decision boundary towards the source domain.
We propose a novel framework for semi-supervised domain adaptation by unifying the learning of opposite structures.
arXiv Detail & Related papers (2020-02-06T22:58:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.