Gibbs-Duhem-Informed Neural Networks for Binary Activity Coefficient Prediction
- URL: http://arxiv.org/abs/2306.07937v2
- Date: Thu, 14 Sep 2023 08:23:11 GMT
- Title: Gibbs-Duhem-Informed Neural Networks for Binary Activity Coefficient Prediction
- Authors: Jan G. Rittig, Kobi C. Felton, Alexei A. Lapkin, Alexander Mitsos
- Abstract summary: We propose Gibbs-Duhem-informed neural networks for the prediction of binary activity coefficients at varying compositions.
We include the Gibbs-Duhem equation explicitly in the loss function for training neural networks.
- Score: 45.84205238554709
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose Gibbs-Duhem-informed neural networks for the prediction of binary
activity coefficients at varying compositions. That is, we include the
Gibbs-Duhem equation explicitly in the loss function for training neural
networks, which is straightforward in standard machine learning (ML) frameworks
enabling automatic differentiation. In contrast to recent hybrid ML approaches,
our approach does not rely on embedding a specific thermodynamic model inside
the neural network, and thus avoids the corresponding prediction limitations. Rather,
Gibbs-Duhem consistency serves as regularization, with the flexibility of ML
models being preserved. Our results show increased thermodynamic consistency
and generalization capabilities for activity coefficient predictions by
Gibbs-Duhem-informed graph neural networks and matrix completion methods. We
also find that the model architecture, particularly the activation function,
can have a strong influence on the prediction quality. The approach can be
easily extended to account for other thermodynamic consistency conditions.
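For a binary mixture at constant temperature and pressure, the Gibbs-Duhem equation requires x1 * d(ln gamma_1)/dx1 + x2 * d(ln gamma_2)/dx1 = 0 with x2 = 1 - x1. A minimal sketch of adding this residual to a training loss via automatic differentiation, assuming a plain feed-forward surrogate and illustrative names (`model`, `lambda_gd`, placeholder data), not the authors' implementation:

```python
import torch

# Hypothetical model: maps mole fraction x1 to (ln gamma_1, ln gamma_2).
model = torch.nn.Sequential(
    torch.nn.Linear(1, 64),
    torch.nn.Softplus(),    # smooth activation; the paper reports that the
                            # activation choice strongly affects quality
    torch.nn.Linear(64, 2),
)

def gibbs_duhem_residual_loss(x1: torch.Tensor) -> torch.Tensor:
    """Mean squared residual x1*d(ln g1)/dx1 + (1 - x1)*d(ln g2)/dx1."""
    x1 = x1.clone().requires_grad_(True)
    ln_g = model(x1.unsqueeze(-1))                        # shape (N, 2)
    d_ln_g1, = torch.autograd.grad(ln_g[:, 0].sum(), x1, create_graph=True)
    d_ln_g2, = torch.autograd.grad(ln_g[:, 1].sum(), x1, create_graph=True)
    residual = x1 * d_ln_g1 + (1.0 - x1) * d_ln_g2
    return (residual ** 2).mean()

# Combined objective: data fit plus Gibbs-Duhem consistency term.
x1_data = torch.rand(256)                 # labeled compositions (placeholder)
ln_g_target = torch.zeros(256, 2)         # placeholder labels
lambda_gd = 1.0                           # assumed weighting factor
data_loss = torch.nn.functional.mse_loss(model(x1_data.unsqueeze(-1)), ln_g_target)
loss = data_loss + lambda_gd * gibbs_duhem_residual_loss(torch.rand(128))
loss.backward()
```

Because the residual only needs compositions, it can be evaluated on unlabeled collocation points sampled across the composition range, which is how it acts as a regularizer rather than a hard constraint.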
Related papers
- HANNA: Hard-constraint Neural Network for Consistent Activity Coefficient Prediction [16.024570580558954]
We present the first hard-constraint neural network for predicting activity coefficients (HANNA)
Activity coefficients are thermodynamic mixture properties that are the basis for many applications in science and engineering.
The model was trained and evaluated on 317,421 data points for activity coefficients in binary mixtures from the Dortmund Data Bank.
arXiv Detail & Related papers (2024-07-25T13:05:00Z)
- Thermodynamics-Consistent Graph Neural Networks [50.0791489606211]
We propose excess Gibbs free energy graph neural networks (GE-GNNs) for predicting composition-dependent activity coefficients of binary mixtures.
The GE-GNN architecture ensures thermodynamic consistency by predicting the molar excess Gibbs free energy.
We demonstrate high accuracy and thermodynamic consistency of the activity coefficient predictions.
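A minimal sketch of this construction, assuming a plain feed-forward surrogate `ge_net` in place of the graph-based encoder (names are illustrative, not the paper's code): predicting the reduced molar excess Gibbs energy gE/RT and differentiating it yields both activity coefficients, so Gibbs-Duhem consistency holds by construction.

```python
import torch

# Hypothetical scalar surrogate for the reduced molar excess Gibbs energy
# gE/RT as a function of mole fraction x1 (stands in for the graph-based
# encoder of a GE-GNN; illustrative only).
ge_net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)

def activity_coefficients(x1: torch.Tensor) -> torch.Tensor:
    """(ln gamma_1, ln gamma_2) from gE/RT; Gibbs-Duhem holds by construction."""
    x1 = x1.clone().requires_grad_(True)
    ge = ge_net(x1.unsqueeze(-1)).squeeze(-1)             # gE/RT, shape (N,)
    dge_dx1, = torch.autograd.grad(ge.sum(), x1, create_graph=True)
    ln_g1 = ge + (1.0 - x1) * dge_dx1                     # ln g1 = g + x2*dg/dx1
    ln_g2 = ge - x1 * dge_dx1                             # ln g2 = g - x1*dg/dx1
    return torch.stack([ln_g1, ln_g2], dim=-1)
```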
arXiv Detail & Related papers (2024-07-08T06:58:56Z)
- CGNSDE: Conditional Gaussian Neural Stochastic Differential Equation for Modeling Complex Systems and Data Assimilation [1.4322470793889193]
A new knowledge-based and machine learning hybrid modeling approach, called the conditional Gaussian neural stochastic differential equation (CGNSDE), is developed.
In contrast to the standard neural network predictive models, the CGNSDE is designed to effectively tackle both forward prediction tasks and inverse state estimation problems.
arXiv Detail & Related papers (2024-04-10T05:32:03Z)
- Physics-Informed Neural Networks with Hard Linear Equality Constraints [9.101849365688905]
This work proposes a novel physics-informed neural network, KKT-hPINN, which rigorously guarantees hard linear equality constraints.
Experiments on Aspen models of a stirred-tank reactor unit, an extractive distillation subsystem, and a chemical plant demonstrate that this model can further enhance the prediction accuracy.
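One common way to realize hard linear equality constraints A y = b is a non-trainable orthogonal projection of the raw network outputs onto the constraint set. A minimal sketch under that assumption (illustrative of the projection idea, not necessarily the exact KKT-hPINN layers):

```python
import torch

def project_onto_constraints(y: torch.Tensor, A: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Orthogonal projection of raw outputs y onto {y : A y = b}.

    y: (N, d) raw network outputs; A: (m, d) with full row rank; b: (m,).
    """
    AAt_inv = torch.linalg.inv(A @ A.T)              # (m, m)
    correction = A.T @ AAt_inv @ (y @ A.T - b).T     # (d, N)
    return y - correction.T                          # satisfies A y = b exactly

# Example: enforce a mass balance y1 + y2 + y3 = 1 on every prediction.
A = torch.ones(1, 3)
b = torch.ones(1)
raw = torch.randn(5, 3)
constrained = project_onto_constraints(raw, A, b)
print(constrained.sum(dim=1))                        # 1.0 for each row
```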
arXiv Detail & Related papers (2024-02-11T17:40:26Z)
- Efficient Implementation of Non-linear Flow Law Using Neural Network into the Abaqus Explicit FEM code [0.0]
An Artificial Neural Network (ANN) model is used in a finite element formulation to define the flow law of a metallic material.
The results obtained show a very high capability of the ANN to replace the analytical formulation of a Johnson-Cook behavior law in a finite element code.
arXiv Detail & Related papers (2022-09-07T14:37:09Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Sobolev training of thermodynamic-informed neural networks for smoothed elasto-plasticity models with level set hardening [0.0]
We introduce a deep learning framework designed to train smoothed elastoplasticity models with interpretable components.
By recasting the yield function as an evolving level set, we introduce a machine learning approach to predict the solutions of the Hamilton-Jacobi equation.
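Sobolev training fits a network to both target values and target derivatives. A minimal one-dimensional sketch of such a loss, with all names illustrative (the paper's setting is a level-set yield function, not this toy problem):

```python
import torch

# Toy Sobolev loss: penalize errors in predicted values AND in the
# input-gradients of the prediction, computed by autodiff.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)

def sobolev_loss(x: torch.Tensor, y_true: torch.Tensor, dy_true: torch.Tensor) -> torch.Tensor:
    x = x.clone().requires_grad_(True)
    y = net(x.unsqueeze(-1)).squeeze(-1)
    dy, = torch.autograd.grad(y.sum(), x, create_graph=True)
    return torch.mean((y - y_true) ** 2) + torch.mean((dy - dy_true) ** 2)

# Example: learn sin(x) from both values and cosine derivatives.
x = torch.linspace(-3.0, 3.0, 128)
loss = sobolev_loss(x, torch.sin(x), torch.cos(x))
loss.backward()
```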
arXiv Detail & Related papers (2020-10-15T22:43:32Z)
- Combining Differentiable PDE Solvers and Graph Neural Networks for Fluid Flow Prediction [79.81193813215872]
We develop a hybrid (graph) neural network that combines a traditional graph convolutional network with an embedded differentiable fluid dynamics simulator inside the network itself.
We show that we can both generalize well to new situations and benefit from the substantial speedup of neural network CFD predictions.
arXiv Detail & Related papers (2020-07-08T21:23:19Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical gains from stochastic regularization, making the performance difference between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.