Neuron Linear Transformation: Modeling the Domain Shift for Crowd
Counting
- URL: http://arxiv.org/abs/2004.02133v2
- Date: Thu, 14 Jan 2021 05:51:00 GMT
- Title: Neuron Linear Transformation: Modeling the Domain Shift for Crowd
Counting
- Authors: Qi Wang, Tao Han, Junyu Gao, Yuan Yuan
- Abstract summary: Cross-domain crowd counting (CDCC) is a hot topic due to its importance in public safety.
We propose a Neuron Linear Transformation (NLT) method, exploiting domain factor and bias weights to learn the domain shift.
Extensive experiments and analysis on six real-world datasets validate that NLT achieves top performance.
- Score: 34.560447389853614
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-domain crowd counting (CDCC) is a hot topic due to its importance in
public safety. The purpose of CDCC is to alleviate the domain shift between the
source and target domain. Recently, typical methods attempt to extract
domain-invariant features via image translation and adversarial learning. When
it comes to specific tasks, we find that the domain shifts are reflected on
model parameters' differences. To describe the domain gap directly at the
parameter-level, we propose a Neuron Linear Transformation (NLT) method,
exploiting domain factor and bias weights to learn the domain shift.
Specifically, for each neuron of the source model, NLT exploits a few labeled
target samples to learn the domain-shift parameters. Finally, the target neuron is
generated via a linear transformation. Extensive experiments and analysis on
six real-world datasets validate that NLT achieves top performance compared
with other domain adaptation methods. An ablation study also shows that NLT
is robust and more effective than supervised training and fine-tuning. Code is
available at: \url{https://github.com/taohan10200/NLT}.
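The parameter-level idea can be sketched in a few lines: freeze a source neuron's weights and learn only a per-neuron scale (domain factor) and shift (bias) from a handful of labeled target samples. The following NumPy toy is a hedged illustration under assumed names and a simplified scalar scale-and-shift parameterization, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen source-model neuron: pre-activation s = w_s . x
w_s = rng.normal(size=8)
w_s /= np.linalg.norm(w_s)  # normalize for a well-conditioned toy problem

# Synthetic few-shot labeled target data whose responses differ from the
# source neuron by an unknown scale and shift -- the "domain shift".
X = rng.normal(size=(16, 8))
s = X @ w_s
y = 1.5 * s + 2.0  # toy target-domain ground truth

# NLT idea (simplified): keep w_s frozen and learn only a per-neuron domain
# factor `alpha` and bias `beta`; the target neuron is then the linear
# transformation  f_t(x) = alpha * (w_s . x) + beta.
alpha, beta = 1.0, 0.0  # identity initialization (no shift)
lr = 0.05
for _ in range(1000):
    err = alpha * s + beta - y
    alpha -= lr * 2 * np.mean(err * s)  # gradient of mean squared error
    beta -= lr * 2 * np.mean(err)

print(alpha, beta)  # recovered domain-shift parameters
```

Because only two scalars per neuron are fitted, a few labeled target samples suffice, which is the motivation for describing the domain gap at the parameter level rather than in feature space.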
Related papers
- Domain-Rectifying Adapter for Cross-Domain Few-Shot Segmentation [40.667166043101076]
We propose a small adapter for rectifying diverse target domain styles to the source domain.
The adapter is trained to rectify the image features from diverse synthesized target domains to align with the source domain.
Our method achieves promising results on cross-domain few-shot semantic segmentation tasks.
arXiv Detail & Related papers (2024-04-16T07:07:40Z) - Meta-causal Learning for Single Domain Generalization [102.53303707563612]
Single domain generalization aims to learn a model from a single training domain (source domain) and apply it to multiple unseen test domains (target domains).
Existing methods focus on expanding the distribution of the training domain to cover the target domains, but without estimating the domain shift between the source and target domains.
We propose a new learning paradigm, namely simulate-analyze-reduce, which first simulates the domain shift by building an auxiliary domain as the target domain, then learns to analyze the causes of domain shift, and finally learns to reduce the domain shift for model adaptation.
arXiv Detail & Related papers (2023-04-07T15:46:38Z) - Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D$3$G to learn domain-specific models.
Our results show that D$3$G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z) - Domain Adaptation for Time-Series Classification to Mitigate Covariate
Shift [3.071136270246468]
This paper proposes a novel supervised domain adaptation method based on two steps.
First, we search for an optimal class-dependent transformation from the source to the target domain from a few samples.
Second, we use embedding similarity techniques to select the corresponding transformation at inference.
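The second step — picking a pre-fitted, class-dependent transformation at inference time by embedding similarity — can be sketched as follows. The transformations, prototypes, and cosine criterion here are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-class affine transformations (A, b) from source to target
# feature space, assumed already fitted from a few labeled samples.
transforms = {c: (rng.normal(size=(4, 4)), rng.normal(size=4)) for c in range(3)}

# Class prototype embeddings, e.g. the mean feature per class.
prototypes = {c: rng.normal(size=4) for c in range(3)}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def select_and_apply(x):
    """Apply the transformation of the class whose prototype embedding
    is most similar (by cosine) to the input feature x."""
    c = max(prototypes, key=lambda k: cosine(x, prototypes[k]))
    A, b = transforms[c]
    return A @ x + b

z = select_and_apply(rng.normal(size=4))  # transformed feature for inference
```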
arXiv Detail & Related papers (2022-04-07T10:27:14Z) - Towards Unsupervised Domain Adaptation via Domain-Transformer [0.0]
We propose the Domain-Transformer (DoT) for Unsupervised Domain Adaptation (UDA).
DoT integrates the CNN-backbones and the core attention mechanism of Transformers from a new perspective.
It achieves the local semantic consistency across domains, where the domain-level attention and manifold regularization are explored.
arXiv Detail & Related papers (2022-02-24T02:30:15Z) - Domain-Invariant Proposals based on a Balanced Domain Classifier for
Object Detection [8.583307102907295]
Object recognition in images aims to automatically find objects of interest and return their category and location information.
Benefiting from research on deep learning, such as convolutional neural networks (CNNs) and generative adversarial networks, performance in this field has improved significantly.
However, mismatched distributions, i.e., domain shifts, lead to a significant performance drop.
arXiv Detail & Related papers (2022-02-12T00:21:27Z) - Dynamic Transfer for Multi-Source Domain Adaptation [82.54405157719641]
We present dynamic transfer to address domain conflicts, where the model parameters are adapted to samples.
It breaks down source domain barriers and turns multi-source domains into a single-source domain.
Experimental results show that, without using domain labels, our dynamic transfer outperforms the state-of-the-art method by more than 3%.
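"Model parameters adapted to samples" can be illustrated with a small dynamic layer: a bank of candidate weights is mixed per input, so each sample effectively sees its own parameters. The bank/router construction below is a generic sketch of sample-conditioned weights, not the paper's specific architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# A bank of K candidate weight matrices and a router that scores them per sample.
K, d_in, d_out = 3, 4, 2
bank = rng.normal(size=(K, d_out, d_in))
router = rng.normal(size=(K, d_in))  # produces K mixing logits from the input

def dynamic_forward(x):
    """Mix the weight bank with input-dependent softmax coefficients,
    yielding a sample-specific weight matrix."""
    logits = router @ x
    w = np.exp(logits - logits.max())
    w /= w.sum()                       # softmax mixing coefficients
    W = np.tensordot(w, bank, axes=1)  # (d_out, d_in) weights for this sample
    return W @ x

y = dynamic_forward(rng.normal(size=d_in))
```

Since the effective parameters change with each input, no explicit domain labels are needed: samples from different source domains simply route to different weight mixtures.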
arXiv Detail & Related papers (2021-03-19T01:22:12Z) - A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z) - Learning Domain-invariant Graph for Adaptive Semi-supervised Domain
Adaptation with Few Labeled Source Samples [65.55521019202557]
Domain adaptation aims to generalize a model from a source domain to tackle tasks in a related but different target domain.
Traditional domain adaptation algorithms assume that enough labeled data, treated as prior knowledge, are available in the source domain.
We propose a Domain-invariant Graph Learning (DGL) approach for domain adaptation with only a few labeled source samples.
arXiv Detail & Related papers (2020-08-21T08:13:25Z) - Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much less annotated data in the target domain than in the source domain.
Our parser benefits from a two-stage coarse-to-fine framework and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.