Generalizable Prediction Model of Molten Salt Mixture Density with Chemistry-Informed Transfer Learning
- URL: http://arxiv.org/abs/2410.15120v1
- Date: Sat, 19 Oct 2024 14:28:46 GMT
- Title: Generalizable Prediction Model of Molten Salt Mixture Density with Chemistry-Informed Transfer Learning
- Authors: Julian Barra, Shayan Shahbazi, Anthony Birri, Rajni Chahal, Ibrahim Isah, Muhammad Nouman Anwar, Tyler Starkus, Prasanna Balaprakash, Stephen Lam
- Abstract summary: Optimally designing molten salt applications requires knowledge of their thermophysical properties.
A transfer learning approach using deep neural networks (DNNs) is proposed, combining Redlich-Kister models, experimental data, and ab initio properties.
The approach predicts molten salt density with high accuracy ($r^{2}$ > 0.99, MAPE < 1%), outperforming the alternatives.
- Score: 2.251726366940184
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Optimally designing molten salt applications requires knowledge of their thermophysical properties, but existing databases are incomplete, and experiments are challenging. Ideal mixing and Redlich-Kister models are computationally cheap but lack either accuracy or generality. To address this, a transfer learning approach using deep neural networks (DNNs) is proposed, combining Redlich-Kister models, experimental data, and ab initio properties. The approach predicts molten salt density with high accuracy ($r^{2}$ > 0.99, MAPE < 1%), outperforming the alternatives.
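For context, the Redlich-Kister baseline the paper compares against expresses a binary mixture's excess molar volume as a polynomial in the mole-fraction difference, with density following from ideal mixing plus that correction. A minimal sketch under that standard formulation follows; the molar masses, pure-melt densities, and expansion coefficients are illustrative placeholders, not values from the paper.

```python
import numpy as np

def ideal_molar_volume(x, molar_masses, densities):
    """Ideal-mixing molar volume: mole-fraction-weighted sum of
    the pure-component molar volumes V_i = M_i / rho_i."""
    return np.sum(x * molar_masses / densities)

def redlich_kister_excess(x1, coeffs):
    """Binary Redlich-Kister excess molar volume:
    V_E = x1 * x2 * sum_k A_k * (x1 - x2)**k."""
    x2 = 1.0 - x1
    return x1 * x2 * sum(a * (x1 - x2) ** k for k, a in enumerate(coeffs))

def mixture_density(x1, molar_masses, densities, rk_coeffs):
    """Density = total molar mass / (ideal + excess) molar volume."""
    x = np.array([x1, 1.0 - x1])
    v_m = ideal_molar_volume(x, molar_masses, densities)
    v_m += redlich_kister_excess(x1, rk_coeffs)
    return np.sum(x * molar_masses) / v_m

# Hypothetical LiF/KF-like inputs (g/mol, g/cm^3) and RK coefficients.
m = np.array([25.94, 58.10])   # molar masses (placeholder values)
rho = np.array([1.81, 1.91])   # pure-melt densities at some fixed T
print(mixture_density(0.5, m, rho, rk_coeffs=[0.8, -0.2]))
```

Per the abstract, the paper's contribution is to replace such per-system fits with a DNN trained by transfer learning across Redlich-Kister models, experimental data, and ab initio properties, so that a single model generalizes across salt chemistries.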
Related papers
- Foundation Models for Discovery and Exploration in Chemical Space [57.97784111110166]
MIST is a family of molecular foundation models trained on large unlabeled datasets. We demonstrate the ability of these models to solve real-world problems across chemical space.
arXiv Detail & Related papers (2025-10-20T17:56:01Z)
- Property prediction for ionic liquids without prior structural knowledge using limited experimental data: A data-driven neural recommender system leveraging transfer learning [0.34410212782758043]
Ionic liquids (ILs) have emerged as versatile replacements for traditional solvents. Accurately predicting their key thermophysical properties remains challenging due to the vast chemical design space. We present a data-driven transfer learning framework that enables reliable property prediction for ILs using sparse experimental datasets; a schematic of the pattern follows the entry.
arXiv Detail & Related papers (2025-09-12T14:13:31Z)
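Both this entry and the main paper rely on the same transfer-learning pattern: pretrain a network on plentiful low-fidelity data, then fine-tune on scarce experimental measurements. A minimal PyTorch sketch of that generic pattern, with random tensors standing in for the real datasets:

```python
import torch
import torch.nn as nn

def make_mlp(d_in=8, d_hidden=64, d_out=1):
    return nn.Sequential(
        nn.Linear(d_in, d_hidden), nn.ReLU(),
        nn.Linear(d_hidden, d_hidden), nn.ReLU(),
        nn.Linear(d_hidden, d_out),
    )

def fit(model, x, y, params, epochs=500, lr=1e-3):
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

model = make_mlp()

# Stage 1: pretrain on abundant surrogate data (e.g., Redlich-Kister
# fits or ab initio estimates). Random tensors as stand-ins here.
x_cheap, y_cheap = torch.randn(2000, 8), torch.randn(2000, 1)
fit(model, x_cheap, y_cheap, model.parameters())

# Stage 2: fine-tune only the output layer on sparse experimental
# data, leaving the pretrained feature extractor's weights untouched.
x_exp, y_exp = torch.randn(50, 8), torch.randn(50, 1)
head = model[-1]
fit(model, x_exp, y_exp, head.parameters(), epochs=200, lr=1e-4)
```

Which layers to freeze and how many fine-tuning epochs to run are design choices the papers tune to their own datasets; the split shown here is only illustrative.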
- Bridging Equilibrium and Kinetics Prediction with a Data-Weighted Neural Network Model of Methane Steam Reforming [0.0]
We present a surrogate model capable of unifying both kinetic and equilibrium regimes. An artificial neural network is trained on a comprehensive dataset that includes experimental data from kinetic and equilibrium experiments. The network's ability to provide continuous derivatives of its predictions makes it particularly useful for process modeling and optimization.
arXiv Detail & Related papers (2025-04-15T14:55:06Z)
- Modeling of Core Loss Based on Machine Learning and Deep Learning [0.0]
This article proposes a Mix Neural Network (MNN) based on a CNN-FCNN architecture for predicting the magnetic core loss of different materials.
It is found that a single model is sufficient to make predictions for at least four different materials under varying temperatures, frequencies, and waveforms.
A hybrid model combining the MNN and XGBoost through weighted averaging of their predictions is also proposed and further improves accuracy; see the sketch after this entry.
arXiv Detail & Related papers (2025-02-08T08:07:58Z)
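The hybrid scheme described above amounts to a convex combination of two regressors' outputs. A toy sketch of such weighted blending, with scikit-learn models as stand-ins for the MNN and XGBoost components and an arbitrary placeholder weight:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor

# Toy data standing in for (temperature, frequency, waveform) features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

# Stand-ins for the two components of the hybrid.
nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)
gb = GradientBoostingRegressor().fit(X, y)

# Blend predictions; the weight would be chosen on held-out data in
# practice, and w = 0.6 here is purely a placeholder.
w = 0.6
y_blend = w * nn.predict(X) + (1.0 - w) * gb.predict(X)
print(np.mean((y_blend - y) ** 2))
```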
- Predicting ionic conductivity in solids from the machine-learned potential energy landscape [68.25662704255433]
Superionic materials are essential for advancing solid-state batteries, which offer improved energy density and safety.
Conventional computational methods for identifying such materials are resource-intensive and not easily scalable.
We propose an approach for the quick and reliable evaluation of ionic conductivity through the analysis of a universal interatomic potential.
arXiv Detail & Related papers (2024-11-11T09:01:36Z)
- chemtrain: Learning Deep Potential Models via Automatic Differentiation and Statistical Physics [0.0]
Neural Networks (NNs) are promising models for refining the accuracy of molecular dynamics.
Chemtrain is a framework to learn sophisticated NN potential models through customizable training routines and advanced training algorithms.
arXiv Detail & Related papers (2024-08-28T15:14:58Z)
- Accurate machine learning force fields via experimental and simulation data fusion [0.0]
Machine Learning (ML)-based force fields are attracting ever-increasing interest due to their capacity to reach the scales of classical interatomic potentials while retaining quantum-level accuracy.
Here we leverage both Density Functional Theory (DFT) calculations and experimentally measured mechanical properties and lattice parameters to train an ML potential of titanium.
We demonstrate that the fused-data learning strategy can concurrently satisfy all target objectives, yielding a molecular model of higher accuracy than models trained on a single data source; a sketch of such a combined loss follows the entry.
arXiv Detail & Related papers (2023-08-17T18:22:19Z)
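Data fusion of this kind is often implemented as a weighted multi-objective loss over heterogeneous data sources; whether this paper does exactly that is an assumption here. A schematic sketch under that assumption, with hypothetical weights, tensors, and loss terms:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 1))

# Stand-in batches: abundant DFT targets and a handful of experimental
# measurements (e.g., mechanical properties) mapped through one model.
x_dft, e_dft = torch.randn(256, 16), torch.randn(256, 1)
x_exp, y_exp = torch.randn(32, 16), torch.randn(32, 1)

mse = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Hypothetical fusion weights; in practice tuned so that neither data
# source dominates the gradient signal.
w_dft, w_exp = 1.0, 10.0

for _ in range(1000):
    opt.zero_grad()
    loss = w_dft * mse(model(x_dft), e_dft) + w_exp * mse(model(x_exp), y_exp)
    loss.backward()
    opt.step()
```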
- Accurate melting point prediction through autonomous physics-informed learning [52.217497897835344]
We present an algorithm for computing melting points by autonomously learning from coexistence simulations in the NPT ensemble.
We demonstrate how incorporating physical models of the solid-liquid coexistence evolution enhances the algorithm's accuracy and enables optimal decision-making.
arXiv Detail & Related papers (2023-06-23T07:53:09Z)
- Electronic-structure properties from atom-centered predictions of the electron density [0.0]
The electron density of a molecule or material has recently received major attention as a target quantity of machine-learning models.
We propose a gradient-based approach to minimize the loss function of the regression problem in an optimized and highly sparse feature space.
We show that starting from the predicted density a single Kohn-Sham diagonalization step can be performed to access total energy components that carry an error of just 0.1 meV/atom.
arXiv Detail & Related papers (2022-06-28T15:35:55Z)
- Physics-informed machine learning with differentiable programming for heterogeneous underground reservoir pressure management [64.17887333976593]
Avoiding over-pressurization in subsurface reservoirs is critical for applications like CO2 sequestration and wastewater injection.
Managing these pressures by controlling injection/extraction rates is challenging because of complex heterogeneity in the subsurface.
We use differentiable programming with a full-physics model and machine learning to determine the fluid extraction rates that prevent over-pressurization; the sketch after this entry illustrates the pattern.
arXiv Detail & Related papers (2022-06-21T20:38:13Z)
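Differentiable programming here means the simulator itself is written in an autodiff framework, so controls can be optimized by gradient descent through the physics. A toy illustration with a fabricated one-cell pressure model, nothing from the paper:

```python
import torch

def pressure_after_injection(extraction_rate, injection=1.0, k=0.3, steps=50):
    """Toy single-cell reservoir: pressure accumulates from injection
    and is relieved by extraction; entirely schematic physics."""
    p = torch.tensor(0.0)
    for _ in range(steps):
        p = p + injection - k * extraction_rate * torch.relu(p)
    return p

p_max = 20.0                                   # safety threshold (made up)
rate = torch.tensor(0.5, requires_grad=True)   # control variable
opt = torch.optim.Adam([rate], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    # Penalize over-pressurization plus a small cost on extraction effort.
    p = pressure_after_injection(rate)
    loss = torch.relu(p - p_max) ** 2 + 0.01 * rate ** 2
    loss.backward()
    opt.step()

print(float(rate), float(pressure_after_injection(rate)))
```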
- Prediction of liquid fuel properties using machine learning models with Gaussian processes and probabilistic conditional generative learning [56.67751936864119]
The present work aims to construct cheap-to-compute machine learning (ML) models to act as closure equations for predicting the physical properties of alternative fuels.
Those models can be trained using the database from MD simulations and/or experimental measurements in a data-fusion-fidelity approach.
The results show that ML models can accurately predict fuel properties across a wide range of pressure and temperature conditions.
arXiv Detail & Related papers (2021-10-18T14:43:50Z)
- Efficient training of lightweight neural networks using Online Self-Acquired Knowledge Distillation [51.66271681532262]
Online Self-Acquired Knowledge Distillation (OSAKD) is proposed, aiming to improve the performance of any deep neural model in an online manner.
We utilize the k-NN non-parametric density estimation technique to estimate the unknown probability distributions of the data samples in the output feature space; see the sketch after this entry.
arXiv Detail & Related papers (2021-08-26T14:01:04Z)
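k-NN density estimation assigns each point a density inversely proportional to the volume of the ball reaching its k-th nearest neighbor. A generic sketch of the technique (not the paper's code), using scikit-learn for the neighbor search:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from scipy.special import gamma

def knn_density(points, k=5):
    """Estimate p(x) ~ k / (n * V_d(r_k)), where r_k is the distance
    to the k-th nearest neighbor and V_d is the d-ball volume."""
    n, d = points.shape
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(points)  # +1 skips self
    dist, _ = nbrs.kneighbors(points)
    r_k = dist[:, -1]
    unit_ball = np.pi ** (d / 2) / gamma(d / 2 + 1)
    return k / (n * unit_ball * r_k ** d)

# Toy feature vectors standing in for network outputs.
feats = np.random.default_rng(1).normal(size=(1000, 2))
print(knn_density(feats, k=10)[:5])
```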
- Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion, where the data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation.
arXiv Detail & Related papers (2021-05-18T15:13:00Z)
- Learning the exchange-correlation functional from nature with fully differentiable density functional theory [0.0]
We train a neural network to replace the exchange-correlation functional within a fully-differentiable three-dimensional Kohn-Sham density functional theory framework.
Our trained exchange-correlation network provided improved prediction of atomization and ionization energies across a collection of 110 molecules.
arXiv Detail & Related papers (2021-02-08T14:25:10Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs); a minimal PINN sketch follows the entry.
We discuss the accuracy of GatedPINN with respect to analytical solutions, as well as state-of-the-art numerical solvers such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
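As background for the PINN entry above, a minimal sketch of the idea: the network takes coordinates as input and is trained so that the PDE residual, evaluated via automatic differentiation, vanishes. This toy solves u'(x) = -u(x) with u(0) = 1, not an example from the paper:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    opt.zero_grad()
    x = torch.rand(128, 1, requires_grad=True)   # collocation points in [0, 1]
    u = net(x)
    # du/dx via autodiff; the PDE residual for u' = -u is du/dx + u.
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    pde_loss = ((du + u) ** 2).mean()
    # Boundary condition u(0) = 1, enforced as a soft penalty.
    bc_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()
    (pde_loss + bc_loss).backward()
    opt.step()

print(float(net(torch.tensor([[1.0]]))))  # should approach exp(-1) ≈ 0.368
```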