Domain Adaptation and Active Learning for Fine-Grained Recognition in
the Field of Biodiversity
- URL: http://arxiv.org/abs/2110.11778v1
- Date: Fri, 22 Oct 2021 13:34:13 GMT
- Title: Domain Adaptation and Active Learning for Fine-Grained Recognition in
the Field of Biodiversity
- Authors: Bernd Gruner, Matthias Körschens, Björn Barz and Joachim Denzler
- Abstract summary: Unsupervised domain adaptation can be used for fine-grained recognition in a biodiversity context.
Using domain adaptation and Transferable Normalization, the accuracy of the classifier could be increased by up to 12.35%.
Surprisingly, we found that more sophisticated strategies provide better results than the random selection baseline for only one of the two datasets.
- Score: 7.24935792316121
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep-learning methods offer unsurpassed recognition performance in a wide
range of domains, including fine-grained recognition tasks. However, in most
problem areas, annotated training samples are scarce. Transfer learning and
domain adaptation are therefore particularly important. In this work, we
investigate to what extent unsupervised domain
adaptation can be used for fine-grained recognition in a biodiversity context
to learn a real-world classifier based on idealized training data, e.g.
preserved butterflies and plants. Moreover, we investigate the influence of
different normalization layers, such as Group Normalization in combination with
Weight Standardization, on the classifier. We discovered that domain adaptation
works very well for fine-grained recognition and that the normalization methods
have a great influence on the results. Using domain adaptation and Transferable
Normalization, the accuracy of the classifier could be increased by up to
12.35% compared to the baseline. Furthermore, the domain adaptation system is
combined with an active learning component to improve the results. We compare
different active learning strategies with each other. Surprisingly, we found
that more sophisticated strategies provide better results than the random
selection baseline for only one of the two datasets. In this case, the distance
and diversity strategy performed best. Finally, we present a problem analysis
of the datasets.
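The normalization layers the abstract highlights can be sketched briefly. The NumPy sketch below is a minimal illustration of Group Normalization and Weight Standardization as commonly defined, not the authors' implementation; the function names and default hyperparameters are hypothetical.

```python
import numpy as np

def weight_standardize(w, eps=1e-5):
    # w: conv kernel of shape (out_channels, in_channels, kh, kw).
    # Re-parameterize each output filter to zero mean and unit variance
    # before it is used in the convolution.
    mean = w.mean(axis=(1, 2, 3), keepdims=True)
    std = w.std(axis=(1, 2, 3), keepdims=True)
    return (w - mean) / (std + eps)

def group_norm(x, num_groups=2, eps=1e-5):
    # x: activations of shape (n, c, h, w).
    # Normalize over each group of c // num_groups channels, independently
    # per sample, so statistics do not depend on the batch size.
    n, c, h, w = x.shape
    assert c % num_groups == 0
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)
```

In practice both would be wrapped in trainable layers (with learnable scale and shift for Group Normalization); the sketch keeps only the normalization math.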
Related papers
- Stratified Domain Adaptation: A Progressive Self-Training Approach for Scene Text Recognition [1.2878987353423252]
Unsupervised domain adaptation (UDA) has become increasingly prevalent in scene text recognition (STR).
We introduce the Stratified Domain Adaptation (StrDA) approach, which examines the gradual escalation of the domain gap for the learning process.
We propose a novel method for employing domain discriminators to estimate the out-of-distribution and domain discriminative levels of data samples.
arXiv Detail & Related papers (2024-10-13T16:40:48Z)
- Taxonomy Adaptive Cross-Domain Adaptation in Medical Imaging via Optimization Trajectory Distillation [73.83178465971552]
The success of automated medical image analysis depends on large-scale and expert-annotated training sets.
Unsupervised domain adaptation (UDA) has been proposed as a promising approach to alleviate the burden of labeled data collection.
We propose optimization trajectory distillation, a unified approach to address the two technical challenges from a new perspective.
arXiv Detail & Related papers (2023-07-27T08:58:05Z)
- NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
arXiv Detail & Related papers (2023-07-25T13:35:45Z)
- Domain Adaptation Principal Component Analysis: base linear method for learning with out-of-distribution data [55.41644538483948]
Domain adaptation is a popular paradigm in modern machine learning.
We present a method called Domain Adaptation Principal Component Analysis (DAPCA).
DAPCA finds a linear reduced data representation useful for solving the domain adaptation task.
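The summary does not spell out DAPCA's construction, so the sketch below is not the DAPCA algorithm itself: it shows only plain PCA fitted on pooled source and target samples, the kind of linear reduced representation DAPCA builds on. All names are hypothetical.

```python
import numpy as np

def pooled_pca(x_source, x_target, k):
    # x_source: (n_s, d), x_target: (n_t, d) feature matrices.
    # Fit PCA on the pooled data so both domains share one linear map.
    x = np.vstack([x_source, x_target])
    mean = x.mean(axis=0)
    # Right singular vectors of the centered pool are the principal axes.
    _, _, vt = np.linalg.svd(x - mean, full_matrices=False)
    components = vt[:k]  # top-k principal directions, shape (k, d)
    project = lambda z: (z - mean) @ components.T
    return project(x_source), project(x_target)
```

Projecting both domains with the same mean and components keeps them in a common k-dimensional space, which is the premise of any linear domain adaptation representation.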
arXiv Detail & Related papers (2022-08-28T21:10:56Z)
- Algorithms and Theory for Supervised Gradual Domain Adaptation [19.42476993856205]
We study the problem of supervised gradual domain adaptation, where labeled data from shifting distributions are available to the learner along the trajectory.
Under this setting, we provide the first generalization upper bound on the learning error under mild assumptions.
Our results are algorithm agnostic for a range of loss functions, and only depend linearly on the averaged learning error across the trajectory.
arXiv Detail & Related papers (2022-04-25T13:26:11Z)
- From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation targets knowledge acquisition and dissemination from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that deep pre-trained models of large scale endow rich knowledge to tackle diverse downstream tasks of small scale.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical class space assumption to that the source class space subsumes the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z)
- Deep learning based domain adaptation for mitochondria segmentation on EM volumes [5.682594415267948]
We present three unsupervised domain adaptation strategies to improve mitochondria segmentation in the target domain.
We propose a new training stopping criterion based on morphological priors obtained exclusively in the source domain.
In the absence of validation labels, monitoring our proposed morphology-based metric is an intuitive and effective way to stop the training process and select, on average, optimal models.
arXiv Detail & Related papers (2022-02-22T09:49:25Z)
- Domain Generalization in Biosignal Classification [37.70077538403524]
This study is the first to investigate domain generalization for biosignal data.
Our proposed method achieves accuracy gains of up to 16% for four completely unseen domains.
arXiv Detail & Related papers (2020-11-12T05:15:46Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Fisher Deep Domain Adaptation [41.50519723389471]
Deep domain adaptation models learn a neural network in an unlabeled target domain by leveraging the knowledge from a labeled source domain.
A Fisher loss is proposed to learn discriminative representations which are within-class compact and between-class separable.
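The Fisher criterion behind such a loss is standard: penalize within-class scatter relative to between-class scatter. The NumPy sketch below illustrates that ratio on a batch of embeddings; it is not the paper's exact loss, and the function name is hypothetical.

```python
import numpy as np

def fisher_ratio_loss(features, labels, eps=1e-8):
    # features: (n, d) batch embeddings; labels: (n,) integer class ids.
    # Minimizing this ratio makes classes compact (small within-class
    # scatter) and separable (large between-class scatter).
    overall_mean = features.mean(axis=0)
    sw = 0.0  # trace of within-class scatter
    sb = 0.0  # trace of between-class scatter
    for c in np.unique(labels):
        xc = features[labels == c]
        mc = xc.mean(axis=0)
        sw += ((xc - mc) ** 2).sum()
        sb += len(xc) * ((mc - overall_mean) ** 2).sum()
    return sw / (sb + eps)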
arXiv Detail & Related papers (2020-03-12T06:17:48Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.