Cross-domain Transfer of defect features in technical domains based on
partial target data
- URL: http://arxiv.org/abs/2211.13662v3
- Date: Wed, 24 May 2023 19:34:53 GMT
- Title: Cross-domain Transfer of defect features in technical domains based on
partial target data
- Authors: Tobias Schlagenhauf, Tim Scheurenbrand
- Abstract summary: In many technical domains, it is only the defect or worn reject classes that are insufficiently represented.
The proposed classification approach addresses such conditions and is based on a CNN encoder.
It is benchmarked in a technical and a non-technical domain and shows convincing classification results.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: A common challenge in real world classification scenarios with sequentially
appending target domain data is insufficient training datasets during the
training phase. Conventional deep learning and transfer learning classifiers
are therefore not applicable, especially when individual classes are
unrepresented or severely underrepresented at the outset. In many technical
domains, however, it is only the defect or worn reject classes that are
insufficiently represented, while the non-defect class is often available from
the beginning. The proposed classification approach addresses such conditions
and is based on a CNN encoder. Following a contrastive learning approach, it is
trained with a modified triplet loss function using two datasets: the first
contains the non-defective target domain class, while the second is a
state-of-the-art labeled source domain dataset that contains highly related
classes (e.g., a related manufacturing error or wear defect) but originates
from a highly different domain (e.g., a different product, material, or
appearance). The approach learns the classification features from the source
domain dataset while at the same time learning the differences between the
source and the target domain in a single training step, aiming to transfer the
relevant features to the target domain. The classifier becomes sensitive to
the classification features and, by architecture, robust against the highly
domain-specific context. The approach is benchmarked in a technical and a
non-technical domain and shows convincing classification results. In
particular, it is shown that the domain generalization capabilities and
classification results are improved by the proposed architecture, allowing for
larger domain shifts between source and target domains.
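The paper does not include code, and its exact modified triplet loss is not reproduced here. As a rough illustration of the training setup the abstract describes (a CNN encoder trained contrastively, with anchors and positives drawn from the non-defect target domain class and negatives from the labeled source domain defect class), the following PyTorch sketch uses a standard triplet margin loss; the encoder architecture, dimensions, and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoder(nn.Module):
    """Small CNN encoder producing a normalized embedding (illustrative stand-in)."""
    def __init__(self, emb_dim=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (N, 16, 1, 1)
        )
        self.head = nn.Linear(16, emb_dim)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return F.normalize(self.head(z), dim=1)  # unit-length embeddings

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss (the paper uses a modified variant)."""
    d_ap = (anchor - positive).pow(2).sum(1)
    d_an = (anchor - negative).pow(2).sum(1)
    return F.relu(d_ap - d_an + margin).mean()

# One training step on random stand-in data:
# anchors/positives come from the non-defect target domain class (dataset 1),
# negatives from the labeled source domain defect class (dataset 2).
encoder = ConvEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

target_ok_a = torch.randn(8, 1, 28, 28)     # target domain, non-defect
target_ok_b = torch.randn(8, 1, 28, 28)     # target domain, non-defect
source_defect = torch.randn(8, 1, 28, 28)   # source domain, defect class

loss = triplet_loss(encoder(target_ok_a),
                    encoder(target_ok_b),
                    encoder(source_defect))
opt.zero_grad()
loss.backward()
opt.step()
```

Because the negative class comes from a different domain, pulling same-class target samples together while pushing source-domain defects away is what, in the paper's framing, lets the encoder learn defect features and the source/target domain difference in a single training step.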
Related papers
- Data-Efficient CLIP-Powered Dual-Branch Networks for Source-Free Unsupervised Domain Adaptation [4.7589762171821715]
Source-free Unsupervised Domain Adaptation (SF-UDA) aims to transfer a model's performance from a labeled source domain to an unlabeled target domain without direct access to source samples.
We introduce a data-efficient, CLIP-powered dual-branch network (CDBN) to address the dual challenges of limited source data and privacy concerns.
CDBN achieves near state-of-the-art performance with far fewer source domain samples than existing methods across 31 transfer tasks on seven datasets.
arXiv Detail & Related papers (2024-10-21T09:25:49Z)
- Domain Adaptive Semantic Segmentation without Source Data [50.18389578589789]
We investigate domain adaptive semantic segmentation without source data, which assumes that the model is pre-trained on the source domain.
We propose an effective framework for this challenging problem with two components: positive learning and negative learning.
Our framework can be easily implemented and incorporated with other methods to further enhance the performance.
arXiv Detail & Related papers (2021-10-13T04:12:27Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Weak Adaptation Learning -- Addressing Cross-domain Data Insufficiency with Weak Annotator [2.8672054847109134]
In some target problem domains, there are not many data samples available, which could hinder the learning process.
We propose a weak adaptation learning (WAL) approach that leverages unlabeled data from a similar source domain.
Our experiments demonstrate the effectiveness of our approach in learning an accurate classifier with limited labeled data in the target domain.
arXiv Detail & Related papers (2021-02-15T06:19:25Z)
- Against Adversarial Learning: Naturally Distinguish Known and Unknown in Open Set Domain Adaptation [17.819949636876018]
Open set domain adaptation refers to the scenario that the target domain contains categories that do not exist in the source domain.
We propose an "against adversarial learning" method that can distinguish unknown target data and known data naturally.
Experimental results show that the proposed method can make significant improvement in performance compared with several state-of-the-art methods.
arXiv Detail & Related papers (2020-11-04T10:30:43Z)
- Adversarial Consistent Learning on Partial Domain Adaptation of PlantCLEF 2020 Challenge [26.016647703500883]
We develop adversarial consistent learning ($ACL$) in a unified deep architecture for partial domain adaptation.
It consists of source domain classification loss, adversarial learning loss, and feature consistency loss.
We find the shared categories of two domains via down-weighting the irrelevant categories in the source domain.
arXiv Detail & Related papers (2020-09-19T19:57:41Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts performance of target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) targets at adapting a model trained over the well-labeled source domain to the unlabeled target domain lying in different distributions.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.