Addressing materials' microstructure diversity using transfer learning
- URL: http://arxiv.org/abs/2107.13841v1
- Date: Thu, 29 Jul 2021 09:13:11 GMT
- Title: Addressing materials' microstructure diversity using transfer learning
- Authors: Aurèle Goetz, Ali Riza Durmaz, Martin Müller, Akhil Thomas,
Dominik Britz, Pierre Kerfriden and Chris Eberl
- Abstract summary: This study is conducted on a lath-shaped bainite segmentation task in complex phase steel micrographs.
We show that a state-of-the-art UDA approach surpasses the naïve application of source domain trained models on the target domain.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Materials' microstructures are signatures of their alloying composition and
processing history. Therefore, microstructures exist in a wide variety. As
materials become increasingly complex to comply with engineering demands,
advanced computer vision (CV) approaches such as deep learning (DL) inevitably
gain relevance for quantifying microstructures' constituents from micrographs.
While DL can outperform classical CV techniques for many tasks, shortcomings
are poor data efficiency and generalizability across datasets. This is
inherently at odds with the expense of expert annotation of materials data and
with the extensive diversity of materials. To tackle poor domain
generalizability and the lack of labeled data simultaneously, we propose to
apply a sub-class of transfer learning methods called unsupervised domain
adaptation (UDA). These algorithms address the task of finding domain-invariant
features when supplied with annotated source data and unannotated target data,
such that performance on the latter distribution is optimized despite the
absence of annotations. As an exemplary case, this study is conducted on a lath-shaped
bainite segmentation task in complex phase steel micrographs. Here, the domains
to bridge are selected to be different metallographic specimen preparations
(surface etchings) and distinct imaging modalities. We show that a
state-of-the-art UDA approach surpasses the naïve application of source
domain trained models on the target domain (generalization baseline) to a large
extent. This holds true independent of the domain shift, despite using little
data, and even when the baseline models were pre-trained or employed data
augmentation. Through UDA, mIoU was improved over generalization baselines from
82.2%, 61.0%, 49.7% to 84.7%, 67.3%, 73.3% on three target datasets,
respectively. This underlines the technique's potential to cope with materials
variance.
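The abstract describes UDA as finding domain-invariant features from annotated source data and unannotated target data. As a minimal illustration of that idea, the sketch below aligns second-order feature statistics between domains with a CORAL-style transform. This is not the adversarial UDA method evaluated in the paper; the function name, feature dimensions, and mock data are assumptions made for the example.

```python
import numpy as np

def coral_align(source, target, eps=1e-5):
    """Re-color source features so their covariance matches the target's
    (CORAL-style second-order alignment). `source`/`target` are (n, d)."""
    d = source.shape[1]
    # Covariances with a small ridge for numerical stability
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)
    # Inverse square root of the source covariance (whitening)...
    ws, vs = np.linalg.eigh(cs)
    cs_inv_sqrt = vs @ np.diag(ws ** -0.5) @ vs.T
    # ...and square root of the target covariance (re-coloring)
    wt, vt = np.linalg.eigh(ct)
    ct_sqrt = vt @ np.diag(np.sqrt(wt)) @ vt.T
    return source @ cs_inv_sqrt @ ct_sqrt

rng = np.random.default_rng(0)
src = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # mock source features
tgt = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # mock target features
aligned = coral_align(src, tgt)
```

After the transform, the covariance of the aligned source features matches the target covariance up to the ridge term, so a downstream model sees statistically similar inputs from both domains. Adversarial methods such as the one studied in the paper pursue the same invariance goal, but learn the alignment end-to-end inside the network instead of applying a closed-form transform.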
Related papers
- Semantics-Aware Generative Latent Data Augmentation for Learning in Low-Resource Domains [27.911250327145115]
We propose GeLDA, a semantics-aware generative latent data augmentation framework.
Because the latent space is low-dimensional and concentrates task-relevant information compared to the input space, GeLDA enables efficient, high-quality data generation.
We validate GeLDA in two large-scale recognition tasks: (a) in zero-shot language-specific speech emotion recognition, GeLDA improves the Whisper-large baseline's unweighted average recall by 6.13%; and (b) in long-tailed image classification, it achieves 74.7% tail-class accuracy on ImageNet-LT.
arXiv Detail & Related papers (2026-02-02T21:43:54Z) - Active transfer learning for structural health monitoring [0.0]
Population-based SHM aims to address this limitation by leveraging data from multiple structures.
Data from different structures will follow distinct distributions, potentially leading to large generalisation errors for models learnt via conventional machine learning methods.
This paper proposes a Bayesian framework for DA in PBSHM that can improve unsupervised DA mappings using a limited quantity of labelled target data.
arXiv Detail & Related papers (2025-10-31T14:54:40Z) - DIDS: Domain Impact-aware Data Sampling for Large Language Model Training [61.10643823069603]
We present Domain Impact-aware Data Sampling (DIDS) for large language models.
DIDS groups training data based on learning effects, employing a proxy language model and dimensionality reduction.
It achieves 3.4% higher average performance while maintaining comparable training efficiency.
arXiv Detail & Related papers (2025-04-17T13:09:38Z) - CMDA: Cross-Modal and Domain Adversarial Adaptation for LiDAR-Based 3D
Object Detection [14.063365469339812]
LiDAR-based 3D Object Detection methods often do not generalize well to target domains outside the source (or training) data distribution.
We introduce a novel unsupervised domain adaptation (UDA) method, called CMDA, which leverages visual semantic cues from an image modality.
We also introduce a self-training-based learning strategy, wherein a model is adversarially trained to generate domain-invariant features.
arXiv Detail & Related papers (2024-03-06T14:12:38Z) - Subject-Based Domain Adaptation for Facial Expression Recognition [51.10374151948157]
Adapting a deep learning model to a specific target individual is a challenging facial expression recognition task.
This paper introduces a new MSDA method for subject-based domain adaptation in FER.
It efficiently leverages information from multiple source subjects to adapt a deep FER model to a single target individual.
arXiv Detail & Related papers (2023-12-09T18:40:37Z) - DA-VEGAN: Differentiably Augmenting VAE-GAN for microstructure
reconstruction from extremely small data sets [110.60233593474796]
DA-VEGAN is a model with two central innovations.
A β-variational autoencoder is incorporated into a hybrid GAN architecture.
A custom differentiable data augmentation scheme is developed specifically for this architecture.
arXiv Detail & Related papers (2023-02-17T08:49:09Z) - Heterogeneous Domain Adaptation and Equipment Matching: DANN-based
Alignment with Cyclic Supervision (DBACS) [3.4519649635864584]
This work introduces the Domain Adaptation Neural Network with Cyclic Supervision (DBACS) approach.
DBACS addresses the issue of model generalization through domain adaptation, specifically for heterogeneous data.
This work also includes subspace alignment and multi-view learning to deal with heterogeneous representations.
arXiv Detail & Related papers (2023-01-03T10:56:25Z) - Synthetic-to-Real Domain Generalized Semantic Segmentation for 3D Indoor
Point Clouds [69.64240235315864]
This paper introduces the synthetic-to-real domain generalization setting to this task.
The domain gap between synthetic and real-world point cloud data mainly lies in the different layouts and point patterns.
Experiments on the synthetic-to-real benchmark demonstrate that both CINMix and multi-prototypes can narrow the distribution gap.
arXiv Detail & Related papers (2022-12-09T05:07:43Z) - Deep Unsupervised Domain Adaptation: A Review of Recent Advances and
Perspectives [16.68091981866261]
Unsupervised domain adaptation (UDA) is proposed to counter the performance drop on data in a target domain.
UDA has yielded promising results on natural image processing, video analysis, natural language processing, time-series data analysis, medical image analysis, etc.
arXiv Detail & Related papers (2022-08-15T20:05:07Z) - Learning Feature Decomposition for Domain Adaptive Monocular Depth
Estimation [51.15061013818216]
Supervised approaches have led to great success with the advance of deep learning, but they rely on large quantities of ground-truth depth annotations.
Unsupervised domain adaptation (UDA) transfers knowledge from labeled source data to unlabeled target data, so as to relax the constraint of supervised learning.
We propose a novel UDA method for MDE, referred to as Learning Feature Decomposition for Adaptation (LFDA), which learns to decompose the feature space into content and style components.
arXiv Detail & Related papers (2022-07-30T08:05:35Z) - CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z) - Deep Transfer Learning for Multi-source Entity Linkage via Domain
Adaptation [63.24594955429465]
Multi-source entity linkage is critical in high-impact applications such as data cleaning and user stitching.
AdaMEL is a deep transfer learning framework that learns generic high-level knowledge to perform multi-source entity linkage.
Our framework achieves state-of-the-art results with 8.21% improvement on average over methods based on supervised learning.
arXiv Detail & Related papers (2021-10-27T15:20:41Z) - Embracing the Disharmony in Heterogeneous Medical Data [12.739380441313022]
Heterogeneity in medical imaging data is often tackled, in the context of machine learning, using domain invariance.
This paper instead embraces the heterogeneity and treats it as a multi-task learning problem.
We show that this approach improves classification accuracy by 5-30 % across different datasets on the main classification tasks.
arXiv Detail & Related papers (2021-03-23T21:36:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.