Model-based Transfer Learning for Automatic Optical Inspection based on domain discrepancy
- URL: http://arxiv.org/abs/2301.05897v1
- Date: Sat, 14 Jan 2023 11:32:39 GMT
- Title: Model-based Transfer Learning for Automatic Optical Inspection based on domain discrepancy
- Authors: Erik Isai Valle Salgado, Haoxin Yan, Yue Hong, Peiyuan Zhu, Shidong Zhu, Chengwei Liao, Yanxiang Wen, Xiu Li, Xiang Qian, Xiaohao Wang, Xinghui Li
- Abstract summary: This research applies model-based TL via domain similarity to improve overall performance, combined with data augmentation in both the target and source domains.
Our results suggest increases of up to 20% in the F1 score and the PR curve compared with TL using benchmark datasets.
- Score: 9.039797705929363
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transfer learning (TL) is a promising method for automatic optical
inspection (AOI) applications since it can significantly shorten sample
collection time and improve efficiency in today's smart manufacturing. However,
related research has enhanced network models by applying TL without considering
the domain similarity among datasets or the long-tailed distribution of the
source data, and has relied mainly on linear transformations to mitigate the
lack of samples. This research applies model-based TL via domain similarity to
improve overall performance, together with data augmentation in both the source
and target domains to enrich data quality and reduce class imbalance. Given a
group of source datasets from similar industrial processes, we determine which
source is the most related to the target through a domain discrepancy score and
the number of samples each dataset contains. Then, we transfer the chosen
pre-trained backbone weights to train and fine-tune the target network. Our
results suggest increases of up to 20% in the F1 score and the PR curve
compared with TL using benchmark datasets.
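The abstract names a domain discrepancy score but does not define it here, so the sketch below is a minimal illustration of the selection step, assuming an RBF-kernel Maximum Mean Discrepancy (MMD) between backbone feature embeddings as the score, weighted by each candidate's sample count; all function and variable names (`rbf_mmd2`, `select_source`, `size_weight`) are hypothetical.

```python
# Minimal sketch of source-dataset selection via a domain discrepancy score.
# Assumption: an RBF-kernel MMD over feature embeddings stands in for the
# paper's unspecified score; `size_weight` is an invented tie-breaker that
# favors sources with more samples.
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between feature sets X and Y."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def select_source(target_feats, sources, size_weight=1e-4):
    """Pick the candidate source with the best combined score:
    low discrepancy to the target, many samples (illustrative rule)."""
    scores = {
        name: rbf_mmd2(target_feats, feats) - size_weight * len(feats)
        for name, feats in sources.items()
    }
    return min(scores, key=scores.get)

# Toy example: three candidate sources with pre-extracted features.
rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, (64, 16))
sources = {f"line_{i}": rng.normal(0.5 * i, 1.0, (200, 16)) for i in range(3)}
print(select_source(target, sources))  # least-shifted source: "line_0"
```

Once a source is chosen, its pre-trained backbone weights would be copied into the target network (e.g., loading the backbone's state dict in PyTorch) before training and fine-tuning on the target data, as the abstract describes.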
Related papers
- Learn from the Learnt: Source-Free Active Domain Adaptation via Contrastive Sampling and Visual Persistence [60.37934652213881]
Domain Adaptation (DA) facilitates knowledge transfer from a source domain to a related target domain.
This paper investigates a practical DA paradigm, namely Source data-Free Active Domain Adaptation (SFADA), where source data becomes inaccessible during adaptation.
We present Learn from the Learnt (LFTL), a novel paradigm for SFADA that leverages the knowledge learnt from the source-pretrained model and actively iterated models without extra overhead.
arXiv Detail & Related papers (2024-07-26T17:51:58Z)
- First-Order Manifold Data Augmentation for Regression Learning [4.910937238451485]
We introduce FOMA: a new data-driven domain-independent data augmentation method.
We evaluate FOMA on in-distribution generalization and out-of-distribution benchmarks, and we show that it improves the generalization of several neural architectures.
arXiv Detail & Related papers (2024-06-16T12:35:05Z)
- Selecting Subsets of Source Data for Transfer Learning with Applications in Metal Additive Manufacturing [1.9116784879310036]
This paper proposes a systematic method to find appropriate subsets of source data based on similarities between the source and target datasets for a given set of limited target domain data.
The proposed method can find a small subset of source data from the same domain with better TL performance in metal AM regression tasks involving different processes and machines.
arXiv Detail & Related papers (2024-01-16T00:14:37Z)
- Comparison of Transfer Learning based Additive Manufacturing Models via A Case Study [3.759936323189418]
This paper defines a case study based on an open-source dataset about metal AM products.
Five TL methods are integrated with decision tree regression (DTR) and artificial neural network (ANN) to construct six TL-based models.
The comparisons are used to quantify the performance of applied TL methods and are discussed from the perspective of similarity, training data size, and data preprocessing.
arXiv Detail & Related papers (2023-05-17T00:29:25Z)
- Explaining Cross-Domain Recognition with Interpretable Deep Classifier [100.63114424262234]
The Interpretable Deep Classifier (IDC) learns the nearest source samples of a target sample as evidence upon which the classifier makes its decision.
Our IDC leads to a more explainable model with almost no accuracy degradation and effectively calibrates classification for optimum reject options.
arXiv Detail & Related papers (2022-11-15T15:58:56Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch (a rough sketch of such a loss appears after this list).
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Domain Adaptation Principal Component Analysis: base linear method for learning with out-of-distribution data [55.41644538483948]
Domain adaptation is a popular paradigm in modern machine learning.
We present a method called Domain Adaptation Principal Component Analysis (DAPCA), which finds a linear reduced data representation useful for solving the domain adaptation task.
arXiv Detail & Related papers (2022-08-28T21:10:56Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Self-Supervised Pre-Training for Transformer-Based Person Re-Identification [54.55281692768765]
Transformer-based supervised pre-training achieves great performance in person re-identification (ReID).
However, due to the domain gap between ImageNet and ReID datasets, it usually needs a larger pre-training dataset to boost the performance.
This work aims to mitigate the gap between the pre-training and ReID datasets from the perspective of data and model structure.
arXiv Detail & Related papers (2021-11-23T18:59:08Z)
- Probing transfer learning with a model of synthetic correlated datasets [11.53207294639557]
Transfer learning can significantly improve the sample efficiency of neural networks.
We rethink a solvable model of synthetic data as a framework for modeling correlations between datasets.
We show that our model can capture a range of salient features of transfer learning with real data.
arXiv Detail & Related papers (2021-06-09T22:15:41Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
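As a rough illustration for the Divide and Contrast entry above: its summary mentions a memory bank-based MMD loss without implementation details, so the following PyTorch sketch is a generic guess at such a loss, not DaC's actual code; the class name, buffer scheme, and kernel choice are all assumptions.

```python
# Hypothetical memory bank-based MMD loss (illustrative, not DaC's code).
import torch

class MemoryBankMMD:
    """Keeps a FIFO bank of recent "source-like" features and measures the
    RBF-kernel MMD between the bank and a batch of "target-specific" ones."""

    def __init__(self, feat_dim, bank_size=1024, gamma=1.0):
        self.bank = torch.randn(bank_size, feat_dim)  # placeholder contents
        self.ptr = 0
        self.gamma = gamma

    @torch.no_grad()
    def update(self, source_like_feats):
        """Overwrite the oldest bank slots with new source-like features."""
        n = source_like_feats.size(0)
        idx = torch.arange(self.ptr, self.ptr + n) % self.bank.size(0)
        self.bank[idx] = source_like_feats.detach()
        self.ptr = int((self.ptr + n) % self.bank.size(0))

    def loss(self, target_specific_feats):
        """Squared MMD between the bank and the target-specific batch."""
        def k(a, b):
            return torch.exp(-self.gamma * torch.cdist(a, b).pow(2))
        xx = k(self.bank, self.bank).mean()
        yy = k(target_specific_feats, target_specific_feats).mean()
        xy = k(self.bank, target_specific_feats).mean()
        return xx + yy - 2.0 * xy

# Usage: bank = MemoryBankMMD(256); bank.update(f_src); mmd = bank.loss(f_tgt)
```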