Multi-Source Domain Adaptation for Cross-Domain Fault Diagnosis of
Chemical Processes
- URL: http://arxiv.org/abs/2308.11247v1
- Date: Tue, 22 Aug 2023 07:43:59 GMT
- Title: Multi-Source Domain Adaptation for Cross-Domain Fault Diagnosis of
Chemical Processes
- Authors: Eduardo Fernandes Montesuma, Michela Mulas, Fred Ngolè Mboula,
Francesco Corona, Antoine Souloumiac
- Abstract summary: We provide an extensive comparison of single- and multi-source unsupervised domain adaptation algorithms for Cross-Domain Fault Diagnosis (CDFD).
We show that using multiple domains during training has a positive effect, even when no adaptation is employed.
In addition, under the multi-source scenario, we improve the classification accuracy of the no-adaptation setting by 8.4% on average.
- Score: 5.119371135458389
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fault diagnosis is an essential component in process supervision. Indeed, it
determines which kind of fault has occurred, given that it has been previously
detected, allowing for appropriate intervention. Automatic fault diagnosis
systems use machine learning for predicting the fault type from sensor
readings. Nonetheless, these models are sensitive to changes in the data
distributions, which may be caused by changes in the monitored process, such as
changes in the mode of operation. This scenario is known as Cross-Domain Fault
Diagnosis (CDFD). We provide an extensive comparison of single-source and
multi-source unsupervised domain adaptation (SSDA and MSDA, respectively)
algorithms for
CDFD. We study these methods in the context of the Tennessee Eastman Process,
a widely used benchmark in the chemical industry. We show that using multiple
domains during training has a positive effect, even when no adaptation is
employed. As such, the MSDA baseline improves over the SSDA baseline
classification accuracy by 23% on average. In addition, under the
multi-source scenario, we improve the classification accuracy of the
no-adaptation setting by 8.4% on average.
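The multi-source no-adaptation baseline can be illustrated with a toy sketch (synthetic data and names of my own choosing throughout; this is not the paper's Tennessee Eastman setup): pooling several source operating modes, with no adaptation at all, already places the classifier's decision boundary closer to a shifted target domain than training on a single source does.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(shift, n=200):
    # Synthetic stand-in for sensor readings: two fault classes whose
    # means move together when the operating mode ("domain") shifts.
    X0 = rng.normal(0.0 + shift, 1.0, size=(n, 5))
    X1 = rng.normal(2.0 + shift, 1.0, size=(n, 5))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

def fit_centroids(X, y):
    # Nearest-centroid classifier: one prototype per fault class.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.argmin(axis=1) == y).mean())

# Two source operating modes and a more strongly shifted target mode.
Xs1, ys1 = make_domain(0.0)
Xs2, ys2 = make_domain(0.5)
Xt, yt = make_domain(1.0)

# Single-source baseline: train on one source domain only.
acc_single = accuracy(fit_centroids(Xs1, ys1), Xt, yt)

# Multi-source baseline: pool both source domains -- still no adaptation.
Xp, yp = np.vstack([Xs1, Xs2]), np.concatenate([ys1, ys2])
acc_multi = accuracy(fit_centroids(Xp, yp), Xt, yt)

print(f"single-source target accuracy: {acc_single:.3f}")
print(f"multi-source  target accuracy: {acc_multi:.3f}")
```

Because the pooled sources span more of the shift direction, their class prototypes land nearer the target's, which is the intuition behind the multi-source baseline gain reported above.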
Related papers
- SelectiveFinetuning: Enhancing Transfer Learning in Sleep Staging through Selective Domain Alignment [3.5833494449195293]
In practical sleep stage classification, a key challenge is the variability of EEG data across different subjects and environments.
Our method utilizes a pretrained Multi Resolution Convolutional Neural Network (MRCNN) to extract EEG features.
By finetuning the model with selective source data, our SelectiveFinetuning enhances the model's performance on the target domain.
arXiv Detail & Related papers (2025-01-07T13:08:54Z) - Recursive Gaussian Process State Space Model [4.572915072234487]
We propose a new online GPSSM method with adaptive capabilities for both operating domains and GP hyperparameters.
An online selection algorithm for inducing points is developed based on informative criteria to achieve lightweight learning.
Comprehensive evaluations on both synthetic and real-world datasets demonstrate the superior accuracy, computational efficiency, and adaptability of our method.
arXiv Detail & Related papers (2024-11-22T02:22:59Z) - Optimal Transport for Domain Adaptation through Gaussian Mixture Models [7.292229955481438]
Machine learning systems operate under the assumption that training and test data are sampled from a fixed probability distribution.
In this work, we explore optimal transport between Gaussian Mixture Models (GMMs), which is conveniently written in terms of the components of source and target GMMs.
We experiment with 9 benchmarks, with a total of 85 adaptation tasks, showing that our methods are more efficient than previous shallow domain adaptation methods.
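The component-wise formulation mentioned in that summary can be sketched as follows (a standard mixture-Wasserstein construction; the notation is mine, not taken from the paper): restricting transport plans to mixtures of the source and target components reduces optimal transport between GMMs to a small discrete problem whose pairwise costs are closed-form Wasserstein distances between Gaussians.

```latex
% Discrete OT over mixture components: \pi^0, \pi^1 are the mixture
% weights of the source and target GMMs.
MW_2^2(\mu^0, \mu^1)
  = \min_{w \in \Pi(\pi^0, \pi^1)} \sum_{k,\ell} w_{k\ell}\,
    W_2^2\!\big(\mathcal{N}(m^0_k, \Sigma^0_k),\,
                \mathcal{N}(m^1_\ell, \Sigma^1_\ell)\big)

% Each pairwise cost is the closed-form (Bures-)Wasserstein distance
% between two Gaussians:
W_2^2\big(\mathcal{N}(m^0, \Sigma^0), \mathcal{N}(m^1, \Sigma^1)\big)
  = \lVert m^0 - m^1 \rVert^2
  + \operatorname{tr}\!\Big(\Sigma^0 + \Sigma^1
  - 2\big((\Sigma^0)^{1/2}\,\Sigma^1\,(\Sigma^0)^{1/2}\big)^{1/2}\Big)
```

The outer problem is a linear program over a matrix with one entry per (source component, target component) pair, which is what makes the approach cheap relative to sample-level OT.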
arXiv Detail & Related papers (2024-03-18T09:32:33Z) - Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
arXiv Detail & Related papers (2023-09-25T15:56:01Z) - Consistency Regularization for Generalizable Source-free Domain
Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - Benchmarking Test-Time Adaptation against Distribution Shifts in Image
Classification [77.0114672086012]
Test-time adaptation (TTA) is a technique aimed at enhancing the generalization performance of models by leveraging unlabeled samples solely during prediction.
We present a benchmark that systematically evaluates 13 prominent TTA methods and their variants on five widely used image classification datasets.
arXiv Detail & Related papers (2023-07-06T16:59:53Z) - SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z) - Learning Neural Models for Natural Language Processing in the Face of
Distributional Shift [10.990447273771592]
The dominating NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications.
It builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time.
This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information.
It is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime.
arXiv Detail & Related papers (2021-09-03T14:29:20Z) - VisDA-2021 Competition Universal Domain Adaptation to Improve
Performance on Out-of-Distribution Data [64.91713686654805]
The Visual Domain Adaptation (VisDA) 2021 competition tests models' ability to adapt to novel test distributions.
We will evaluate adaptation to novel viewpoints, backgrounds, modalities and degradation in quality.
Performance will be measured using a rigorous protocol, comparing to state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-07-23T03:21:51Z) - A Brief Review of Domain Adaptation [1.2043574473965317]
This paper focuses on unsupervised domain adaptation, where the labels are only available in the source domain.
It presents some successful shallow and deep domain adaptation approaches that aim to deal with domain adaptation problems.
arXiv Detail & Related papers (2020-10-07T07:05:32Z) - Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
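The "learning to adapt" idea can be caricatured in a few lines (a heavily simplified sketch on synthetic data with my own names, not the paper's actual meta-learning procedure): fix a label-free adaptation step, here batch re-centering, and fit the classifier on the post-adaptation features of each training domain, so the model is optimized for the representation it will actually see after adapting at test time.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_domain(shift, n=100):
    # Two classes whose means move together under domain shift.
    X0 = rng.normal(shift, 1.0, size=(n, 3))
    X1 = rng.normal(shift + 2.0, 1.0, size=(n, 3))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

def adapt(X):
    # Unsupervised adaptation step: re-center the batch (uses no labels).
    return X - X.mean(axis=0)

# "Learning to adapt": fit the classifier on *post-adaptation* features
# of each training domain, i.e. minimize the post-adaptation loss.
train_domains = [make_domain(s) for s in (0.0, 1.0, 2.0)]
Xtr = np.vstack([adapt(X) for X, _ in train_domains])
ytr = np.concatenate([y for _, y in train_domains])

# Nearest-centroid classifier on adapted features.
centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])

# Test domain with a large, previously unseen shift.
Xte, yte = make_domain(5.0)
d = np.linalg.norm(adapt(Xte)[:, None, :] - centroids[None], axis=2)
acc = float((d.argmin(axis=1) == yte).mean())
print(f"post-adaptation test accuracy: {acc:.3f}")
```

Because re-centering removes the domain-wide offset before classification, the same classifier generalizes to shifts far outside the training range, which is the spirit of optimizing for adaptation rather than for a fixed representation.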
arXiv Detail & Related papers (2020-07-06T17:59:30Z) - Incremental Unsupervised Domain-Adversarial Training of Neural Networks [17.91571291302582]
In the context of supervised statistical learning, it is typically assumed that the training set comes from the same distribution from which the test samples are drawn.
Here we take a different avenue and approach the problem from an incremental point of view, where the model is adapted to the new domain iteratively.
Our results report a clear improvement with respect to the non-incremental case in several datasets, also outperforming other state-of-the-art domain adaptation algorithms.
arXiv Detail & Related papers (2020-01-13T09:54:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.