Graph Neural Networks embedded into Margules model for vapor-liquid equilibria prediction
- URL: http://arxiv.org/abs/2502.18998v1
- Date: Wed, 26 Feb 2025 10:03:47 GMT
- Title: Graph Neural Networks embedded into Margules model for vapor-liquid equilibria prediction
- Authors: Edgar Ivan Sanchez Medina, Kai Sundmacher
- Abstract summary: The performance of Graph Neural Networks (GNNs) embedded into a relatively simple excess Gibbs energy model is analyzed. The findings establish a baseline for the predictive accuracy that simple excess Gibbs energy models combined with GNNs trained solely on infinite dilution data can achieve.
- Score: 0.8287206589886881
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Predictive thermodynamic models are crucial for the early stages of product and process design. In this paper, the performance of Graph Neural Networks (GNNs) embedded into a relatively simple excess Gibbs energy model, the extended Margules model, for predicting vapor-liquid equilibrium is analyzed. A comparison against the established UNIFAC-Dortmund model shows that the GNN-embedded Margules model achieves an overall lower accuracy. However, it reaches higher accuracy for several types of binary mixtures. Moreover, since group contribution methods like UNIFAC are limited by the feasibility of molecular fragmentation and the availability of parameters, the GNN-embedded Margules model offers an alternative for VLE estimation. The findings establish a baseline for the predictive accuracy that simple excess Gibbs energy models combined with GNNs trained solely on infinite dilution data can achieve.
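The modeling idea is compact enough to sketch. Below is a minimal, illustrative implementation assuming the standard two-parameter Margules form, in which the two parameters equal the logarithmic infinite-dilution activity coefficients (exactly the quantities a GNN trained on infinite dilution data would supply; fixed stand-in values are used here), combined with modified Raoult's law for an isothermal bubble-point calculation. The paper's extended Margules variant may differ in detail.

```python
import numpy as np

def margules_gammas(x1, A12, A21):
    """Two-parameter Margules activity coefficients for a binary mixture.
    With this parameterization, A12 = ln(gamma_1^inf) and A21 = ln(gamma_2^inf),
    so the model is fully specified by infinite-dilution data."""
    x2 = 1.0 - x1
    ln_g1 = x2**2 * (A12 + 2.0 * (A21 - A12) * x1)
    ln_g2 = x1**2 * (A21 + 2.0 * (A12 - A21) * x2)
    return np.exp(ln_g1), np.exp(ln_g2)

def bubble_point(x1, A12, A21, p1_sat, p2_sat):
    """Isothermal bubble-point pressure and vapor composition from
    modified Raoult's law: y_i * P = x_i * gamma_i * P_i^sat."""
    g1, g2 = margules_gammas(x1, A12, A21)
    p = x1 * g1 * p1_sat + (1.0 - x1) * g2 * p2_sat
    y1 = x1 * g1 * p1_sat / p
    return p, y1

# A12, A21 would come from the GNN's predicted ln(gamma^inf) values for the
# mixture at hand; the numbers below are purely illustrative.
p, y1 = bubble_point(x1=0.4, A12=1.2, A21=0.8, p1_sat=55.0, p2_sat=30.0)  # kPa
```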
Related papers
- Adaptive Fuzzy C-Means with Graph Embedding [84.47075244116782]
Fuzzy clustering algorithms can be roughly categorized into two main groups: Fuzzy C-Means (FCM) based methods and mixture model based methods.
We propose a novel FCM-based clustering model that is capable of automatically learning an appropriate membership-degree hyperparameter value.
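For context, a bare-bones classical FCM loop is sketched below; it uses a fixed fuzzifier m and no graph embedding, i.e., it omits exactly the two ingredients the paper adds (automatic learning of the membership-degree hyperparameter and graph regularization).

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Classical Fuzzy C-Means: alternate membership and center updates."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))      # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m
        V = Um.T @ X / Um.sum(axis=0)[:, None]      # fuzzily weighted cluster centers
        D = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U_new = D ** (-2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)   # standard FCM membership update
        if np.abs(U_new - U).max() < tol:
            return U_new, V
        U = U_new
    return U, V
```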
arXiv Detail & Related papers (2024-05-22T08:15:50Z)
- Learning CO$_2$ plume migration in faulted reservoirs with Graph Neural Networks [0.3914676152740142]
We develop a graph-based neural model for capturing the impact of faults on CO$_2$ plume migration.
We demonstrate that our approach can accurately predict the temporal evolution of gas saturation and pore pressure in a synthetic reservoir with faults.
This work highlights the potential of GNN-based methods to accurately and rapidly model subsurface flow with complex faults and fractures.
arXiv Detail & Related papers (2023-06-16T06:47:47Z)
- Gibbs-Duhem-Informed Neural Networks for Binary Activity Coefficient Prediction [45.84205238554709]
We propose Gibbs-Duhem-informed neural networks for the prediction of binary activity coefficients at varying compositions.
We include the Gibbs-Duhem equation explicitly in the loss function for training neural networks.
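A minimal sketch of such a consistency penalty follows, assuming a hypothetical `model` that maps the liquid mole fraction x1 to the pair (ln gamma_1, ln gamma_2); the residual of the isothermal, isobaric binary Gibbs-Duhem equation is obtained by automatic differentiation and would be added to the data loss with some weight.

```python
import torch

def gibbs_duhem_residual(model, x1):
    """Residual of the binary Gibbs-Duhem equation at constant T and P:
    x1 * d(ln gamma_1)/dx1 + (1 - x1) * d(ln gamma_2)/dx1 = 0."""
    x1 = x1.clone().requires_grad_(True)
    ln_g1, ln_g2 = model(x1)                        # hypothetical predictor
    d1 = torch.autograd.grad(ln_g1.sum(), x1, create_graph=True)[0]
    d2 = torch.autograd.grad(ln_g2.sum(), x1, create_graph=True)[0]
    return x1 * d1 + (1.0 - x1) * d2

# Combined objective over collocation points x1_col (lam is a tunable weight):
# loss = data_mse + lam * gibbs_duhem_residual(model, x1_col).pow(2).mean()
```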
arXiv Detail & Related papers (2023-05-31T07:36:45Z)
- Denoise Pretraining on Nonequilibrium Molecules for Accurate and Transferable Neural Potentials [8.048439531116367]
We propose denoise pretraining on nonequilibrium molecular conformations to achieve more accurate and transferable GNN potential predictions.
Our models pretrained on small molecules demonstrate remarkable transferability, improving performance when fine-tuned on diverse molecular systems.
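The common form of this objective is to perturb conformer coordinates with Gaussian noise and train the network to recover the noise; a minimal, hypothetical training step in that style is sketched below (the paper's exact target and noise model may differ).

```python
import torch

def denoise_step(model, coords, optimizer, sigma=0.1):
    """One denoising pretraining step on a batch of (nonequilibrium)
    atomic coordinates: inject Gaussian noise, predict it, regress it."""
    noise = sigma * torch.randn_like(coords)
    pred = model(coords + noise)    # hypothetical GNN returning per-atom vectors
    loss = torch.nn.functional.mse_loss(pred, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```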
arXiv Detail & Related papers (2023-03-03T21:15:22Z)
- Gibbs-Helmholtz Graph Neural Network: capturing the temperature dependency of activity coefficients at infinite dilution [1.290382979353427]
We develop the Gibbs-Helmholtz Graph Neural Network (GH-GNN) model for predicting $\ln \gamma_{ij}^{\infty}$ of molecular systems at different temperatures.
We analyze the performance of GH-GNN for continuous and discrete inter/extrapolation and give indications for the model's applicability domain and expected accuracy.
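The temperature dependence that gives the model its name can be sketched directly: integrating the Gibbs-Helmholtz relation under the assumption of a temperature-independent partial molar excess enthalpy yields a two-parameter form, with the system-specific parameters predicted by the GNN (they are plain inputs in this illustrative snippet).

```python
import numpy as np

def ln_gamma_inf(K1, K2, T):
    """Gibbs-Helmholtz-consistent temperature dependence:
        ln gamma_ij^inf(T) = K1 + K2 / T,
    which follows from d(ln gamma^inf)/d(1/T) = h_E^inf / R when the
    partial molar excess enthalpy h_E^inf is treated as constant."""
    return K1 + K2 / np.asarray(T, dtype=float)

# Inter-/extrapolation across temperature for one (illustrative) system:
print(ln_gamma_inf(K1=-1.0, K2=900.0, T=[298.15, 323.15, 373.15]))
```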
arXiv Detail & Related papers (2022-12-02T14:25:58Z)
- On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes to factorize the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero- and few-shot adaptation in low-data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z)
- Prediction of liquid fuel properties using machine learning models with Gaussian processes and probabilistic conditional generative learning [56.67751936864119]
The present work aims to construct cheap-to-compute machine learning (ML) models to act as closure equations for predicting the physical properties of alternative fuels.
Those models can be trained using the database from MD simulations and/or experimental measurements in a data-fusion-fidelity approach.
The results show that the ML models can accurately predict fuel properties over a wide range of pressure and temperature conditions.
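As a flavor of the Gaussian-process component, the snippet below fits a GP to a handful of made-up (temperature, pressure) -> property points and returns a prediction with uncertainty; the kernel choice, inputs, and values are illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Training data: (T [K], P [bar]) -> property value (e.g., density); made-up numbers.
X = np.array([[300.0, 1.0], [350.0, 1.0], [400.0, 10.0], [450.0, 10.0]])
y = np.array([0.78, 0.74, 0.70, 0.65])

gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[50.0, 5.0]),  # anisotropic in T and P
    normalize_y=True,
)
gp.fit(X, y)
mean, std = gp.predict(np.array([[375.0, 5.0]]), return_std=True)  # prediction + uncertainty
```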
arXiv Detail & Related papers (2021-10-18T14:43:50Z)
- Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox: "scale" metrics perform well overall but poorly on subpartitions of the data.
We present two novel shape metrics, one data-independent and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
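As a rough illustration of a data-independent shape metric, the snippet below computes a Hill-type estimate of the tail exponent of a weight matrix's squared-singular-value spectrum; the paper's actual estimators may differ.

```python
import numpy as np

def tail_exponent(W, k=50):
    """Hill estimate of the power-law tail exponent of the eigenvalue
    spectrum of W^T W, using the k largest eigenvalues above the (k+1)-th
    as threshold; it needs no data, just the trained weights."""
    eigs = np.linalg.svd(W, compute_uv=False) ** 2
    tail = np.sort(eigs)[-(k + 1):]
    return 1.0 + k / np.sum(np.log(tail[1:] / tail[0]))
```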
arXiv Detail & Related papers (2021-06-01T19:19:49Z)
- Scaling Hidden Markov Language Models [118.55908381553056]
This work revisits the challenge of scaling HMMs to language modeling datasets.
We propose methods for scaling HMMs to massive state spaces while maintaining efficient exact inference, a compact parameterization, and effective regularization.
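The "exact inference" at issue is the forward algorithm; a compact log-space version is sketched below, where each step is a single (S x S) matrix operation and thus scales to large state spaces on accelerators. Tensor layouts are assumptions, and the paper's compact parameterization of the transition matrix is not reproduced.

```python
import torch

def hmm_log_likelihood(log_pi, log_A, log_emit):
    """Exact HMM log-likelihood via the forward algorithm in log space.
    log_pi: (S,) initial state log-probs; log_A: (S, S) with log_A[i, j]
    = log P(s_t = j | s_{t-1} = i); log_emit: (T, S) per-step emission
    log-probs of the observed sequence under each state."""
    alpha = log_pi + log_emit[0]
    for t in range(1, log_emit.shape[0]):
        alpha = torch.logsumexp(alpha.unsqueeze(1) + log_A, dim=0) + log_emit[t]
    return torch.logsumexp(alpha, dim=0)
```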
arXiv Detail & Related papers (2020-11-09T18:51:55Z)
- A Rigorous Link Between Self-Organizing Maps and Gaussian Mixture Models [78.6363825307044]
This work presents a mathematical treatment of the relation between Self-Organizing Maps (SOMs) and Gaussian Mixture Models (GMMs).
We show that energy-based SOM models can be interpreted as performing gradient descent.
This link allows SOMs to be treated as generative probabilistic models, giving a formal justification for using SOMs to detect outliers or for sampling.
arXiv Detail & Related papers (2020-09-24T14:09:04Z)
- Learning CHARME models with neural networks [1.5362025549031046]
We consider a model called CHARME (Conditional Heteroscedastic Autoregressive Mixture of Experts).
As an application, we develop a learning theory for the NN-based autoregressive functions of the model.
arXiv Detail & Related papers (2020-02-08T21:51:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.