Probabilistic transfer learning methodology to expedite high fidelity simulation of reactive flows
- URL: http://arxiv.org/abs/2405.10944v1
- Date: Fri, 17 May 2024 17:58:52 GMT
- Title: Probabilistic transfer learning methodology to expedite high fidelity simulation of reactive flows
- Authors: Bruno S. Soriano, Ki Sung Jung, Tarek Echekki, Jacqueline H. Chen, Mohammad Khalil
- Abstract summary: This study presents a novel probabilistic transfer learning (TL) framework to enhance the trust in Machine Learning (ML) models.
The framework uses Bayesian neural networks and autoencoders to reduce the dimensionality of the state space.
The results reveal that there is an optimal amount of knowledge to be transferred, which depends on the amount of data available in the target domain.
- Score: 0.3495246564946556
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Reduced order models based on the transport of a lower dimensional manifold representation of the thermochemical state, such as Principal Component (PC) transport and Machine Learning (ML) techniques, have been developed to reduce the computational cost associated with Direct Numerical Simulations (DNS) of reactive flows. Both PC transport and ML normally require an abundance of data to exhibit sufficient predictive accuracy, which might not be available due to the prohibitive cost of DNS or experimental data acquisition. To alleviate such difficulties, similar data from an existing dataset or domain (source domain) can be used to train ML models, potentially resulting in adequate predictions in the domain of interest (target domain). This study presents a novel probabilistic transfer learning (TL) framework to enhance the trust in ML models in correctly predicting the thermochemical state in a lower dimensional manifold and a sparse data setting. The framework uses Bayesian neural networks and autoencoders to reduce the dimensionality of the state space and diffuse the knowledge from the source to the target domain. The new framework is applied to one-dimensional freely propagating flame solutions under different data sparsity scenarios. The results reveal that there is an optimal amount of knowledge to be transferred, which depends on the amount of data available in the target domain and the similarity between the domains. TL can reduce the reconstruction error by one order of magnitude for cases with large sparsity. The new framework requires 10 times less data in the target domain to reproduce the same error as in the abundant data scenario. Furthermore, comparisons with a state-of-the-art deterministic TL strategy show that the probabilistic method can require four times less data to achieve the same reconstruction error.
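The paper ships no code, but the transfer mechanism it describes can be caricatured in a few lines: an autoencoder compresses the thermochemical state, and target-domain training is pulled toward the source-domain weights by a tunable penalty. The sketch below is a MAP-style stand-in for the paper's Bayesian neural networks; the network sizes, the Gaussian weight prior, the synthetic data, and the `transfer_strength` knob are all illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch of autoencoder-based transfer learning. NOT the authors'
# code: a penalty toward source weights stands in for the full Bayesian
# treatment described in the paper.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_state=10, n_latent=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_state, 32), nn.Tanh(),
                                     nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.Tanh(),
                                     nn.Linear(32, n_state))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, data, epochs=500, prior=None, transfer_strength=0.0):
    """Fit reconstruction; optionally penalize distance to source weights."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(data), data)
        if prior is not None:
            # transfer_strength plays the role of the "amount of knowledge
            # transferred"; the paper finds its optimum depends on how much
            # target data exists and how similar the domains are.
            reg = sum(((p - p0) ** 2).sum()
                      for p, p0 in zip(model.parameters(), prior))
            loss = loss + transfer_strength * reg
        loss.backward()
        opt.step()
    return model

# Synthetic stand-ins: abundant source states, sparse shifted target states.
source = torch.randn(2000, 10)
target = torch.randn(20, 10) + 0.5

src_model = train(AutoEncoder(), source)
prior = [p.detach().clone() for p in src_model.parameters()]

tgt_model = AutoEncoder()
tgt_model.load_state_dict(src_model.state_dict())  # warm start from source
train(tgt_model, target, prior=prior, transfer_strength=1e-3)
```
Sweeping `transfer_strength` on held-out target data would be the crude analogue of the paper's finding that the optimal amount of transferred knowledge depends on target-data availability and domain similarity.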
Related papers
- Fusing CFD and measurement data using transfer learning [49.1574468325115]
We introduce a non-linear method based on neural networks that combines simulation and measurement data via transfer learning.
In a first step, the neural network is trained on simulation data to learn spatial features of the distributed quantities.
The second step involves transfer learning on the measurement data, correcting for systematic errors between simulation and measurement by re-training only a small subset of the entire neural network model.
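A minimal sketch of this two-step recipe, assuming a generic fully connected network and synthetic stand-ins for the simulation and measurement data (the paper's actual architecture and data are not reproduced here):
```python
# Step 1: train on simulation data. Step 2: freeze the feature extractor
# and re-train only the final layer on scarce measurements, correcting
# systematic sim-to-measurement offsets without overfitting.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 64))

def fit(model, x, y, params, epochs=200):
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        opt.step()

# Step 1: learn spatial features from abundant simulation data.
x_sim, y_sim = torch.randn(5000, 64), torch.randn(5000, 64)
fit(net, x_sim, y_sim, net.parameters())

# Step 2: transfer learning on few measurements; only the head trains.
for p in net[:-1].parameters():
    p.requires_grad = False
x_meas, y_meas = torch.randn(50, 64), torch.randn(50, 64)
fit(net, x_meas, y_meas, net[-1].parameters())
```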
arXiv Detail & Related papers (2025-07-28T07:21:46Z)
- Data-Efficient Machine Learning Potentials via Difference Vectors Based on Local Atomic Environments [13.307935336307475]
We propose a new method, Difference Vectors based on Local Atomic Environments (DV-LAE).
DV-LAE encodes structural differences via histogram-based descriptors and enables visual analysis through t-SNE dimensionality reduction.
We demonstrate that DV-LAE significantly reduces dataset size and training time across various materials systems.
arXiv Detail & Related papers (2025-01-27T04:08:37Z)
- An Investigation on Machine Learning Predictive Accuracy Improvement and Uncertainty Reduction using VAE-based Data Augmentation [2.517043342442487]
Deep generative learning uses certain ML models to learn the underlying distribution of existing data and generate synthetic samples that resemble the real data.
In this study, our objective is to evaluate the effectiveness of data augmentation using variational autoencoder (VAE)-based deep generative models.
We investigated whether the data augmentation leads to improved accuracy in the predictions of a deep neural network (DNN) model trained using the augmented data.
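A compact sketch of the augmentation loop, assuming a toy fully connected VAE and synthetic data in place of the study's actual dataset:
```python
# Sketch of VAE-based augmentation: fit a toy VAE to the available data,
# sample synthetic points from the decoder, and hand the augmented set to
# the downstream DNN. Dimensions and data are assumptions for illustration.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_in=8, n_z=2):
        super().__init__()
        self.enc = nn.Linear(n_in, 16)
        self.mu = nn.Linear(16, n_z)
        self.logvar = nn.Linear(16, n_z)
        self.dec = nn.Sequential(nn.Linear(n_z, 16), nn.ReLU(),
                                 nn.Linear(16, n_in))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparam trick
        return self.dec(z), mu, logvar

real = torch.randn(200, 8)                   # scarce real dataset
vae = VAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
for _ in range(1000):                        # maximize the ELBO
    opt.zero_grad()
    xr, mu, logvar = vae(real)
    recon = nn.functional.mse_loss(xr, real, reduction="sum")
    kl = -0.5 * (1 + logvar - mu**2 - logvar.exp()).sum()
    (recon + kl).backward()
    opt.step()

with torch.no_grad():                        # draw synthetic samples
    synthetic = vae.dec(torch.randn(1000, 2))
augmented = torch.cat([real, synthetic])     # training set for the DNN
```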
arXiv Detail & Related papers (2024-10-24T18:15:48Z)
- Optimal Transport for Domain Adaptation through Gaussian Mixture Models [7.292229955481438]
Machine learning systems operate under the assumption that training and test data are sampled from a fixed probability distribution.
In this work, we explore optimal transport between Gaussian Mixture Models (GMMs), which is conveniently written in terms of the components of source and target GMMs.
We experiment with 9 benchmarks comprising a total of 85 adaptation tasks, showing that our methods are more efficient than previous shallow domain adaptation methods.
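What makes the GMM formulation convenient is that the transport cost between individual Gaussian components is available in closed form; a plain NumPy sketch of this building block (illustrative, not the paper's code):
```python
# Closed-form squared 2-Wasserstein distance between two Gaussian
# components, the building block of OT between GMMs:
#   W2^2 = ||m1 - m2||^2 + Tr(S1 + S2 - 2 (S1^{1/2} S2 S1^{1/2})^{1/2})
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m1, S1, m2, S2):
    rS1 = sqrtm(S1)
    cross = np.real(sqrtm(rS1 @ S2 @ rS1))  # discard tiny imaginary noise
    return float(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * cross))

m1, S1 = np.zeros(2), np.eye(2)
m2, S2 = np.ones(2), 2 * np.eye(2)
print(w2_gaussian(m1, S1, m2, S2))  # squared distance between components
```
Stacking these pairwise costs into a matrix and weighting by the mixture weights yields a small discrete OT problem between the source and target mixtures.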
arXiv Detail & Related papers (2024-03-18T09:32:33Z)
- Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
Our proposed method first trains SOMs on unlabeled data; a minimal number of available labeled data points are then assigned to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
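To make the BMU mechanics concrete, here is a deliberately crude sketch (winner-only updates instead of a proper neighborhood function; the data, grid size, and label-propagation rule are assumptions, not the paper's procedure):
```python
# Crude illustration of the BMU idea: after a SOM organizes unlabeled
# data, the few labels are attached to their best matching units and
# inherited by anything that maps to the same unit.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))           # unlabeled data
W = rng.normal(size=(25, 3))            # 5x5 SOM codebook, flattened

for t in range(2000):                   # organize the map (winner only)
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))
    W[bmu] += 0.1 * (1 - t / 2000) * (x - W[bmu])

labeled_x = X[:10]                      # the few labeled points
labels = rng.integers(0, 2, size=10)
unit_label = {}
for x, y in zip(labeled_x, labels):     # attach each label to its BMU
    unit_label[int(np.argmin(((W - x) ** 2).sum(axis=1)))] = int(y)

def predict(x):                         # a query inherits its BMU's label
    return unit_label.get(int(np.argmin(((W - x) ** 2).sum(axis=1))))
```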
arXiv Detail & Related papers (2024-01-12T22:51:48Z)
- Self-Supervised Dataset Distillation for Transfer Learning [77.4714995131992]
We propose a novel problem of distilling an unlabeled dataset into a set of small synthetic samples for efficient self-supervised learning (SSL).
We first prove that a gradient of synthetic samples with respect to an SSL objective in naive bilevel optimization is biased due to randomness originating from data augmentations or masking.
We empirically validate the effectiveness of our method on various applications involving transfer learning.
arXiv Detail & Related papers (2023-10-10T10:48:52Z)
- Machine Learning Force Fields with Data Cost Aware Training [94.78998399180519]
Machine learning force fields (MLFF) have been proposed to accelerate molecular dynamics (MD) simulation.
Even for the most data-efficient MLFFs, reaching chemical accuracy can require hundreds of frames of force and energy labels.
We propose ASTEROID, a multi-stage computational framework that lowers the data cost of MLFFs by combining cheap inaccurate data with expensive accurate data.
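A toy two-stage loop in the same spirit, pretraining on plentiful low-fidelity labels and then fine-tuning on scarce accurate ones while keeping a down-weighted low-fidelity term; an illustrative stand-in, not ASTEROID's actual algorithm:
```python
# Toy multi-fidelity training: stage 1 learns from cheap, biased labels;
# stage 2 corrects the bias on few accurate labels, with a small residual
# low-fidelity term to avoid forgetting. Data and weights are assumed.
import torch
import torch.nn as nn

mlff = nn.Sequential(nn.Linear(30, 64), nn.SiLU(), nn.Linear(64, 1))
x_lo, y_lo = torch.randn(5000, 30), torch.randn(5000, 1)  # cheap, biased
x_hi, y_hi = torch.randn(100, 30), torch.randn(100, 1)    # costly, accurate

opt = torch.optim.Adam(mlff.parameters(), lr=1e-3)
for _ in range(300):                     # stage 1: pretrain on cheap data
    opt.zero_grad()
    nn.functional.mse_loss(mlff(x_lo), y_lo).backward()
    opt.step()

for _ in range(300):                     # stage 2: fine-tune on accurate data
    opt.zero_grad()
    loss = (nn.functional.mse_loss(mlff(x_hi), y_hi)
            + 0.1 * nn.functional.mse_loss(mlff(x_lo), y_lo))
    loss.backward()
    opt.step()
```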
arXiv Detail & Related papers (2023-06-05T04:34:54Z)
- An unfolding method based on conditional Invertible Neural Networks (cINN) using iterative training [0.0]
Generative networks such as invertible neural networks (INNs) enable a probabilistic unfolding.
We introduce the iterative conditional INN (IcINN) for unfolding, which adjusts for deviations between simulated training samples and data.
arXiv Detail & Related papers (2022-12-16T19:00:05Z)
- Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation [151.70234052015948]
We propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.
We show that the weights trained on synthetic data are robust against accumulated error perturbations thanks to the regularization towards the flat trajectory.
Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7%.
arXiv Detail & Related papers (2022-11-20T15:49:11Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Variational Inference with NoFAS: Normalizing Flow with Adaptive Surrogate for Computationally Expensive Models [7.217783736464403]
Use of sampling-based approaches such as Markov chain Monte Carlo may become intractable when each likelihood evaluation is computationally expensive.
New approaches combining variational inference with normalizing flow are characterized by a computational cost that grows only linearly with the dimensionality of the latent variable space.
We propose Normalizing Flow with Adaptive Surrogate (NoFAS), an optimization strategy that alternately updates the normalizing flow parameters and the weights of a neural network surrogate model.
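The alternating structure can be caricatured as follows; the "flow" here is a single affine transform of a standard normal and the "expensive model" a cheap stand-in, so only the update schedule resembles NoFAS, and every detail is assumed:
```python
# Caricature of NoFAS-style alternation: (a) cheap flow updates against
# the current surrogate, (b) occasional paid evaluations of the true
# model to refit the surrogate.
import torch
import torch.nn as nn

def expensive_model(z):                      # stand-in for a costly simulator
    return torch.sin(3 * z) + z

surrogate = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
mu = torch.zeros(1, requires_grad=True)      # affine "flow" parameters
log_s = torch.zeros(1, requires_grad=True)
y_obs = torch.tensor([[0.7]])                # observed data

opt_f = torch.optim.Adam([mu, log_s], lr=1e-2)
opt_s = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
bank_z, bank_y = [], []                      # bank of true model solutions

for it in range(500):
    # (a) update the flow against the current surrogate (cheap)
    z = mu + log_s.exp() * torch.randn(64, 1)        # reparameterized draw
    nll = ((surrogate(z) - y_obs) ** 2).mean()       # pseudo-likelihood
    opt_f.zero_grad()
    (nll - 0.01 * log_s.sum()).backward()            # entropy keeps spread
    opt_f.step()

    # (b) occasionally pay for true evaluations and refit the surrogate
    if it % 25 == 0:
        zq = z.detach()[:4]
        bank_z.append(zq)
        bank_y.append(expensive_model(zq))
        Z, Y = torch.cat(bank_z), torch.cat(bank_y)
        for _ in range(50):
            opt_s.zero_grad()
            nn.functional.mse_loss(surrogate(Z), Y).backward()
            opt_s.step()
```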
arXiv Detail & Related papers (2021-08-28T14:31:45Z)
- Transfer learning suppresses simulation bias in predictive models built from sparse, multi-modal data [15.587831925516957]
Many problems in science, engineering, and business require making predictions based on very few observations.
To build a robust predictive model, these sparse data may need to be augmented with simulated data, especially when the design space is multidimensional.
We combine recent developments in deep learning to build more robust predictive models from multimodal data.
arXiv Detail & Related papers (2021-04-19T23:28:32Z)
- Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
- A probabilistic generative model for semi-supervised training of coarse-grained surrogates and enforcing physical constraints through virtual observables [3.8073142980733]
This paper provides a flexible, probabilistic framework that accounts for physical structure and information both in the training objectives and in the surrogate model itself.
We advocate a probabilistic model in which equalities that are available from the physics can be introduced as virtual observables and can provide additional information through the likelihood.
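The virtual-observable device amounts to treating a known physics equality g(u) = 0 as if it were a noisy measurement of zero, so the physics enters the log-likelihood as an extra Gaussian term. A schematic with a made-up constraint (not the paper's model):
```python
# Schematic of a virtual observable: the equality g(u) = 0 is treated as
# a measurement of zero with small noise sigma_v, adding
# -g(u)^2 / (2 sigma_v^2) to the log-likelihood. g here is made up.
import torch

def g(u):                          # assumed constraint: states sum to one
    return u.sum(dim=1) - 1.0

def log_likelihood(u, x_obs, sigma=0.1, sigma_v=1e-2):
    data_term = -((u - x_obs) ** 2).sum() / (2 * sigma ** 2)
    virtual_term = -(g(u) ** 2).sum() / (2 * sigma_v ** 2)
    return data_term + virtual_term       # physics enters like extra data

u = torch.full((4, 3), 1 / 3, requires_grad=True)  # candidate states
x_obs = torch.rand(4, 3)                           # noisy observations
(-log_likelihood(u, x_obs)).backward()   # gradients see data AND physics
```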
arXiv Detail & Related papers (2020-06-02T17:14:36Z)