DAOT: Domain-Agnostically Aligned Optimal Transport for Domain-Adaptive
Crowd Counting
- URL: http://arxiv.org/abs/2308.05311v1
- Date: Thu, 10 Aug 2023 02:59:40 GMT
- Title: DAOT: Domain-Agnostically Aligned Optimal Transport for Domain-Adaptive
Crowd Counting
- Authors: Huilin Zhu, Jingling Yuan, Xian Zhong, Zhengwei Yang, Zheng Wang, and
Shengfeng He
- Abstract summary: Domain adaptation is commonly employed in crowd counting to bridge the domain gaps between different datasets.
Existing domain adaptation methods tend to focus on inter-dataset differences while overlooking intra-dataset differences.
We propose a Domain-agnostically Aligned Optimal Transport (DAOT) strategy that aligns domain-agnostic factors between domains.
- Score: 35.83485358725357
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Domain adaptation is commonly employed in crowd counting to bridge the domain
gaps between different datasets. However, existing domain adaptation methods
tend to focus on inter-dataset differences while overlooking intra-dataset
differences, leading to additional learning
ambiguities. These domain-agnostic factors, e.g., density, surveillance
perspective, and scale, can cause significant in-domain variations, and the
misalignment of these factors across domains can lead to a drop in performance
in cross-domain crowd counting. To address this issue, we propose a
Domain-agnostically Aligned Optimal Transport (DAOT) strategy that aligns
domain-agnostic factors between domains. The DAOT consists of three steps.
First, individual-level differences in domain-agnostic factors are measured
using structural similarity (SSIM). Second, the optimal transport (OT) strategy
is employed to smooth out these differences and find the optimal
domain-to-domain alignment, with outlier individuals removed via a virtual
"dustbin" column. Third, knowledge is transferred based on the aligned
domain-agnostic factors, and the model is retrained for domain adaptation to
bridge the gap across domains. We conduct extensive experiments on five
standard crowd-counting benchmarks and demonstrate that the proposed method has
strong generalizability across diverse datasets. Our code will be available at:
https://github.com/HopooLinZ/DAOT/.
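A minimal sketch of the first two steps described above (an SSIM-based cost between source and target density patches, then entropic OT with a virtual "dustbin" to absorb outliers) is given below. This is not the authors' released implementation (see the repository link above); the patch size, dustbin cost, regularization strength, and helper names (ssim_cost_matrix, sinkhorn_with_dustbin) are illustrative assumptions, and the third step (retraining on the aligned pairs) is omitted.
```python
import numpy as np
from skimage.metrics import structural_similarity as ssim


def ssim_cost_matrix(src_patches, tgt_patches):
    """Cost = 1 - SSIM for each (source, target) patch pair; higher similarity -> lower cost."""
    C = np.zeros((len(src_patches), len(tgt_patches)))
    for i, s in enumerate(src_patches):
        for j, t in enumerate(tgt_patches):
            drange = max(s.max() - s.min(), t.max() - t.min(), 1e-8)
            C[i, j] = 1.0 - ssim(s, t, data_range=drange)
    return C


def sinkhorn_with_dustbin(C, dustbin_cost=0.5, reg=0.1, n_iters=200):
    """Entropic OT on a cost matrix augmented with a virtual dustbin row/column,
    so outlier individuals can route to the dustbin instead of a poor cross-domain match."""
    n, m = C.shape
    C_aug = np.full((n + 1, m + 1), dustbin_cost)
    C_aug[:n, :m] = C
    # Marginals: each real patch carries unit mass; the dustbins absorb leftover mass
    # so that both sides sum to n + m.
    a = np.concatenate([np.ones(n), [float(m)]])
    b = np.concatenate([np.ones(m), [float(n)]])
    K = np.exp(-C_aug / reg)
    u, v = np.ones(n + 1), np.ones(m + 1)
    for _ in range(n_iters):                 # standard Sinkhorn iterations
        u = a / (K @ v + 1e-12)
        v = b / (K.T @ u + 1e-12)
    P = u[:, None] * K * v[None, :]          # transport plan, shape (n + 1, m + 1)
    return P[:n, :m]                         # soft matching between real patches only


if __name__ == "__main__":
    gen = np.random.default_rng(0)
    src = [gen.random((64, 64)) for _ in range(5)]   # stand-ins for source density patches
    tgt = [gen.random((64, 64)) for _ in range(6)]   # stand-ins for target density patches
    plan = sinkhorn_with_dustbin(ssim_cost_matrix(src, tgt))
    print(plan.shape)                                # (5, 6); mass concentrates on aligned pairs
```
In practice the inputs would be real crowd-density patches, and the resulting transport plan would select the aligned source-target pairs used to retrain the counting model.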
Related papers
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment
Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations.
A series of experiments show the great performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z) - Making the Best of Both Worlds: A Domain-Oriented Transformer for
Unsupervised Domain Adaptation [31.150256154504696]
Unsupervised Domain Adaptation (UDA) has propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains.
Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction.
We propose to simultaneously conduct feature alignment in two individual spaces focusing on different domains, and create for each space a domain-oriented classifier.
arXiv Detail & Related papers (2022-08-02T01:38:37Z) - Domain Generalization via Selective Consistency Regularization for Time
Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z) - Domain Adaptation for Sentiment Analysis Using Increased Intraclass
Separation [31.410122245232373]
Cross-domain sentiment analysis methods have received significant attention.
We introduce a new domain adaptation method which induces large margins between different classes in an embedding space.
This embedding space is trained to be domain-agnostic by matching the data distributions across the domains.
arXiv Detail & Related papers (2021-07-04T11:39:12Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - Cross-Domain Grouping and Alignment for Domain Adaptive Semantic
Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks across source and target domains in deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or the estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts the adaptation performance in semantic segmentation, outperforming state-of-the-art methods in various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z) - Improved Multi-Source Domain Adaptation by Preservation of Factors [0.0]
Domain Adaptation (DA) is a highly relevant research topic when it comes to image classification with deep neural networks.
In this paper, we describe, based on a theory of visual factors, how real-world scenes appear in images in general.
We show that different domains can be described by a set of so-called domain factors, whose values are consistent within a domain but can change across domains.
arXiv Detail & Related papers (2020-10-15T14:19:57Z) - Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation [56.94873619509414]
Conventional unsupervised domain adaptation studies the knowledge transfer between a limited number of domains.
We propose a novel Domain2Vec model to provide vectorial representations of visual domains based on joint learning of feature disentanglement and Gram matrix.
We demonstrate that our embedding is capable of predicting domain similarities that match our intuition about visual relations between different domains.
arXiv Detail & Related papers (2020-07-17T22:05:09Z) - Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z) - Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)