Adversarial Consistent Learning on Partial Domain Adaptation of
PlantCLEF 2020 Challenge
- URL: http://arxiv.org/abs/2009.09289v1
- Date: Sat, 19 Sep 2020 19:57:41 GMT
- Title: Adversarial Consistent Learning on Partial Domain Adaptation of
PlantCLEF 2020 Challenge
- Authors: Youshan Zhang and Brian D. Davison
- Abstract summary: We develop adversarial consistent learning ($ACL$) in a unified deep architecture for partial domain adaptation.
It consists of source domain classification loss, adversarial learning loss, and feature consistency loss.
We find the shared categories of two domains via down-weighting the irrelevant categories in the source domain.
- Score: 26.016647703500883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation is one of the most crucial techniques to mitigate the
domain shift problem, which exists when transferring knowledge from an abundant
labeled source domain to a target domain with few or no labels. Partial domain
adaptation addresses the scenario when target categories are only a subset of
source categories. In this paper, to enable the efficient representation of
cross-domain plant images, we first extract deep features from pre-trained
models and then develop adversarial consistent learning ($ACL$) in a unified
deep architecture for partial domain adaptation. It consists of source domain
classification loss, adversarial learning loss, and feature consistency loss.
Adversarial learning loss can maintain domain-invariant features between the
source and target domains. Moreover, feature consistency loss can preserve the
fine-grained feature transition between two domains. We also find the shared
categories of two domains via down-weighting the irrelevant categories in the
source domain. Experimental results demonstrate that training features from
NASNetLarge model with proposed $ACL$ architecture yields promising results on
the PlantCLEF 2020 Challenge.
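For a concrete picture of the architecture described above, the sketch below shows one way the pieces could fit together in PyTorch: a classifier and a domain discriminator on top of deep features from a pre-trained backbone (e.g. NASNetLarge activations), a total objective combining the three loss terms, and a simple heuristic for down-weighting source categories that appear irrelevant to the target. The layer sizes, loss weights, the choice of consistency measure, and the weighting heuristic are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: an ACL-style objective with (1) a weighted source
# classification loss, (2) an adversarial domain loss, and (3) a feature
# consistency term. All architectural details below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ACLHead(nn.Module):
    """Classifier plus domain discriminator over pre-extracted deep features."""

    def __init__(self, feat_dim: int, num_source_classes: int):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_source_classes)
        self.discriminator = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),  # source (1) vs. target (0)
        )


def estimate_class_weights(head: ACLHead, feats_t: torch.Tensor) -> torch.Tensor:
    """Assumed down-weighting heuristic: average the target softmax predictions
    per source class, so categories the target rarely activates get low weight."""
    with torch.no_grad():
        probs = F.softmax(head.classifier(feats_t), dim=1).mean(dim=0)
    return probs / probs.max()


def acl_objective(head, feats_s, labels_s, feats_t, class_weights,
                  lam_adv=1.0, lam_con=1.0):
    # (1) Source classification loss, re-weighted to suppress irrelevant classes.
    cls_loss = F.cross_entropy(head.classifier(feats_s), labels_s,
                               weight=class_weights)

    # (2) Adversarial domain loss (discriminator view; during training the
    # feature path would be updated to fool it, e.g. via gradient reversal).
    d_s, d_t = head.discriminator(feats_s), head.discriminator(feats_t)
    adv_loss = (F.binary_cross_entropy_with_logits(d_s, torch.ones_like(d_s)) +
                F.binary_cross_entropy_with_logits(d_t, torch.zeros_like(d_t)))

    # (3) Feature consistency term; the distance between mean source and target
    # features is used here only as a stand-in for the paper's loss.
    con_loss = (feats_s.mean(dim=0) - feats_t.mean(dim=0)).pow(2).sum()

    return cls_loss + lam_adv * adv_loss + lam_con * con_loss
```

In use, one would extract source and target features with the pre-trained backbone, call `estimate_class_weights` on the target features, and minimize `acl_objective` over the head (and, with a gradient reversal layer, over the feature path as well).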
Related papers
- Data-Efficient CLIP-Powered Dual-Branch Networks for Source-Free Unsupervised Domain Adaptation [4.7589762171821715]
Source-free Unsupervised Domain Adaptation (SF-UDA) aims to transfer a model's performance from a labeled source domain to an unlabeled target domain without direct access to source samples.
We introduce a data-efficient, CLIP-powered dual-branch network (CDBN) to address the dual challenges of limited source data and privacy concerns.
CDBN achieves near state-of-the-art performance with far fewer source domain samples than existing methods across 31 transfer tasks on seven datasets.
arXiv Detail & Related papers (2024-10-21T09:25:49Z)
- Cross-domain Transfer of defect features in technical domains based on partial target data [0.0]
In many technical domains, it is only the defect or worn reject classes that are insufficiently represented.
The proposed classification approach addresses such conditions and is based on a CNN encoder.
It is benchmarked in a technical and a non-technical domain and shows convincing classification results.
arXiv Detail & Related papers (2022-11-24T15:23:58Z)
- Adversarial Bi-Regressor Network for Domain Adaptive Regression [52.5168835502987]
It is essential to learn a cross-domain regressor to mitigate the domain shift.
This paper proposes a novel method, Adversarial Bi-Regressor Network (ABRNet), to seek a more effective cross-domain regression model.
arXiv Detail & Related papers (2022-09-20T18:38:28Z)
- From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation targets knowledge acquisition and dissemination from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that deep pre-trained models of large scale endow rich knowledge to tackle diverse downstream tasks of small scale.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical class space assumption to the weaker assumption that the source class space subsumes the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z)
- Vicinal and categorical domain adaptation [43.707303372718336]
We propose novel losses of adversarial training at both domain and category levels.
We propose a concept of vicinal domains whose instances are produced by a convex combination of pairs of instances drawn respectively from the two domains; a brief illustrative sketch of this mixing appears after this list.
arXiv Detail & Related papers (2021-03-05T03:47:24Z)
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks across the source and target domains within deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or within each estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts adaptation performance in semantic segmentation, outperforming the state of the art on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes that of the target domain.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Deep Residual Correction Network for Partial Domain Adaptation [79.27753273651747]
Deep domain adaptation methods have achieved appealing performance by learning transferable representations from a well-labeled source domain to a different but related unlabeled target domain.
This paper proposes an efficiently implemented Deep Residual Correction Network (DRCN).
Comprehensive experiments on partial, traditional and fine-grained cross-domain visual recognition demonstrate that DRCN is superior to the competitive deep domain adaptation approaches.
arXiv Detail & Related papers (2020-04-10T06:07:16Z)
- Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)
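As noted in the "Vicinal and categorical domain adaptation" entry above, vicinal instances are formed as convex combinations of source/target pairs. The snippet below is a minimal sketch of that mixing step; sampling the coefficient from a Beta distribution and the simple truncated pairing are assumptions for illustration, not details taken from that paper.

```python
# Sketch: vicinal instances as convex combinations of a source and a target
# example. The Beta-distributed mixing coefficient is an assumed choice.
import torch


def vicinal_batch(x_src: torch.Tensor, x_tgt: torch.Tensor, alpha: float = 1.0):
    """Return mixed examples lam * x_src + (1 - lam) * x_tgt and the coefficients."""
    n = min(x_src.size(0), x_tgt.size(0))
    lam = torch.distributions.Beta(alpha, alpha).sample((n,))
    lam = lam.view(n, *([1] * (x_src.dim() - 1)))  # broadcast over feature dims
    mixed = lam * x_src[:n] + (1.0 - lam) * x_tgt[:n]
    return mixed, lam.flatten()
```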