Domain-General Crowd Counting in Unseen Scenarios
- URL: http://arxiv.org/abs/2212.02573v2
- Date: Tue, 28 Mar 2023 10:05:20 GMT
- Title: Domain-General Crowd Counting in Unseen Scenarios
- Authors: Zhipeng Du, Jiankang Deng, Miaojing Shi
- Abstract summary: Domain shift across crowd data severely hinders crowd counting models from generalizing to unseen scenarios.
We introduce a dynamic sub-domain division scheme which divides the source domain into multiple sub-domains.
In order to disentangle domain-invariant information from domain-specific information in image features, we design the domain-invariant and -specific crowd memory modules.
- Score: 25.171343652312974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain shift across crowd data severely hinders crowd counting
models from generalizing to unseen scenarios. Although domain-adaptive crowd
counting approaches close this gap to a certain extent, they still depend on
target-domain data to adapt (e.g., fine-tune) their models to the specific
domain. In this paper, we aim to train a model based on a single source domain
which can generalize well on any unseen domain. This falls into the realm of
domain generalization, which remains unexplored in crowd counting. We first
introduce a dynamic sub-domain division scheme which divides the source domain
into multiple sub-domains such that we can initiate a meta-learning framework
for domain generalization. The sub-domain division is dynamically refined
during the meta-learning. Next, in order to disentangle domain-invariant
information from domain-specific information in image features, we design the
domain-invariant and -specific crowd memory modules to re-encode image
features. Two types of losses, i.e., feature reconstruction and orthogonal
losses, are devised to enable this disentanglement. Extensive experiments on
several standard crowd counting benchmarks, i.e., SHA, SHB, QNRF, and NWPU,
show the strong generalizability of our method.
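The abstract does not spell out how the dynamic sub-domain division works; the sketch below is one plausible reading, assuming sub-domains are obtained by k-means clustering of image-level features and refreshed periodically as the backbone evolves. All names are hypothetical and this is not the authors' code.

```python
# Hypothetical sketch of a dynamic sub-domain division step: the source domain
# is split into sub-domains so that meta-train / meta-test episodes can be
# built for meta-learning. Clustering-by-feature is an assumption here.
import torch

def divide_into_subdomains(features: torch.Tensor, k: int, iters: int = 10):
    """Cluster image-level features (N, D) into k sub-domains via plain k-means."""
    # Initialise centroids from k random samples.
    idx = torch.randperm(features.size(0))[:k]
    centroids = features[idx].clone()
    for _ in range(iters):
        # Assign each image to its nearest centroid (Euclidean distance).
        assign = torch.cdist(features, centroids).argmin(dim=1)
        for j in range(k):
            members = features[assign == j]
            if len(members) > 0:
                centroids[j] = members.mean(dim=0)
    return assign, centroids
```

During meta-learning, one sub-domain can be held out as the meta-test split while the rest form meta-train; re-running this division every few epochs on the current backbone features would give the "dynamic refinement" described in the abstract.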
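Likewise, the memory re-encoding and the two disentanglement losses can be illustrated generically. The sketch below assumes an attention-style read over learnable memory slots, an MSE reconstruction loss, and a squared-cosine orthogonality penalty; the paper's actual module design and loss weighting may differ.

```python
# Minimal sketch of the disentanglement idea: two memory modules re-encode an
# image feature into domain-invariant and domain-specific parts; module names,
# sizes, and exact loss forms are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrowdMemory(nn.Module):
    """Re-encodes a feature as a convex combination of learnable memory slots."""
    def __init__(self, num_slots: int, dim: int):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(num_slots, dim))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, D). Soft-address the memory, then read it back.
        attn = F.softmax(feat @ self.slots.t(), dim=-1)  # (B, num_slots)
        return attn @ self.slots                          # (B, D)

inv_memory = CrowdMemory(num_slots=64, dim=256)   # domain-invariant memory
spec_memory = CrowdMemory(num_slots=64, dim=256)  # domain-specific memory

def disentangle_losses(feat: torch.Tensor):
    f_inv = inv_memory(feat)
    f_spec = spec_memory(feat)
    # Feature reconstruction loss: the two re-encoded parts should jointly
    # recover the original feature.
    loss_rec = F.mse_loss(f_inv + f_spec, feat)
    # Orthogonal loss: the two parts should carry non-overlapping information,
    # enforced here by driving their squared cosine similarity to zero.
    cos = F.cosine_similarity(f_inv, f_spec, dim=-1)
    loss_orth = (cos ** 2).mean()
    return loss_rec, loss_orth
```

At test time on an unseen domain, the domain-invariant path alone would presumably drive the counting head, which is what makes the disentanglement useful for generalization.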
Related papers
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations.
A series of experiments shows the strong performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z)
- Virtual Classification: Modulating Domain-Specific Knowledge for Multidomain Crowd Counting [67.38137379297717]
Multidomain crowd counting aims to learn a general model for multiple diverse datasets.
Deep networks tend to model the distributions of the dominant domains rather than of all domains, an effect known as domain bias.
We propose a Modulating Domain-specific Knowledge Network (MDKNet) to handle the domain bias issue in multidomain crowd counting.
arXiv Detail & Related papers (2024-02-06T06:49:04Z)
- Multi-Domain Long-Tailed Learning by Augmenting Disentangled Representations [80.76164484820818]
There is an inescapable long-tailed class-imbalance issue in many real-world classification problems.
We study this multi-domain long-tailed learning problem and aim to produce a model that generalizes well across all classes and domains.
Built upon a proposed selective balanced sampling strategy, TALLY achieves this by mixing the semantic representation of one example with the domain-associated nuisances of another.
arXiv Detail & Related papers (2022-10-25T21:54:26Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- Adaptive Methods for Aggregated Domain Generalization [26.215904177457997]
In many settings, privacy concerns prohibit obtaining domain labels for the training data samples.
We propose a domain-adaptive approach to this problem, which operates in two steps.
Our approach achieves state-of-the-art performance on a variety of domain generalization benchmarks without using domain labels.
arXiv Detail & Related papers (2021-12-09T08:57:01Z)
- Domain-Class Correlation Decomposition for Generalizable Person Re-Identification [34.813965300584776]
In person re-identification, the domain and class are correlated.
We show that domain adversarial learning loses certain class-related information due to this domain-class correlation.
Our model outperforms the state-of-the-art methods on the large-scale domain generalization Re-ID benchmark.
arXiv Detail & Related papers (2021-06-29T09:45:03Z)
- Cluster, Split, Fuse, and Update: Meta-Learning for Open Compound Domain Adaptive Semantic Segmentation [102.42638795864178]
We propose a principled meta-learning based approach to OCDA for semantic segmentation.
We cluster the target domain into multiple sub-target domains by image style, extracted in an unsupervised manner.
A meta-learner is thereafter deployed to learn to fuse sub-target domain-specific predictions, conditioned upon the style code.
We learn to update the model online with the model-agnostic meta-learning (MAML) algorithm to further improve generalization; a generic sketch of such an update follows this list.
arXiv Detail & Related papers (2020-12-15T13:21:54Z)
- Batch Normalization Embeddings for Deep Domain Generalization [50.51405390150066]
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains.
We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks.
arXiv Detail & Related papers (2020-11-25T12:02:57Z)
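For the "Cluster, Split, Fuse, and Update" entry above, the online MAML update can be illustrated with a textbook first-order sketch; it is unrelated to that paper's actual code, and all names below are hypothetical.

```python
# Generic first-order MAML step: adapt a clone on support data, then apply the
# query-set gradients of the adapted clone back to the original model.
import copy
import torch

def maml_online_step(model, loss_fn, support, query,
                     inner_lr: float = 1e-2, outer_lr: float = 1e-3):
    # Inner loop: adapt a throwaway clone on the support (adaptation) batch.
    adapted = copy.deepcopy(model)
    inner_loss = loss_fn(adapted(support["x"]), support["y"])
    grads = torch.autograd.grad(inner_loss, adapted.parameters())
    with torch.no_grad():
        for p, g in zip(adapted.parameters(), grads):
            p -= inner_lr * g
    # Outer loop: evaluate the adapted clone on the query batch; its gradients
    # update the original parameters (first-order approximation).
    outer_loss = loss_fn(adapted(query["x"]), query["y"])
    outer_grads = torch.autograd.grad(outer_loss, adapted.parameters())
    with torch.no_grad():
        for p, g in zip(model.parameters(), outer_grads):
            p -= outer_lr * g
    return inner_loss.item(), outer_loss.item()
```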
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.