Clapeyron Neural Networks for Single-Species Vapor-Liquid Equilibria
- URL: http://arxiv.org/abs/2602.18313v1
- Date: Fri, 20 Feb 2026 16:11:42 GMT
- Title: Clapeyron Neural Networks for Single-Species Vapor-Liquid Equilibria
- Authors: Jan Pavšek, Alexander Mitsos, Elvis J. Sim, Jan G. Rittig
- Abstract summary: Machine learning approaches have shown promising results for predicting molecular properties relevant for chemical process design. We propose thermodynamics-informed ML, incorporating thermodynamic relations into the loss function as a regularization term for training. We find improved prediction accuracy of the Clapeyron-GNN compared to the single-task learning setting, and improved approximation of the Clapeyron equation compared to the purely data-driven multi-task learning setting.
- Score: 39.691553958657764
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) approaches have shown promising results for predicting molecular properties relevant for chemical process design. However, they are often limited by scarce experimental property data and a lack of thermodynamic consistency. As such, thermodynamics-informed ML, i.e., incorporating thermodynamic relations into the loss function as a regularization term for training, has been proposed. We herein transfer the concept of thermodynamics-informed graph neural networks (GNNs) from the Gibbs-Duhem to the Clapeyron equation, predicting several pure-component properties in a multi-task manner, namely: vapor pressure, liquid molar volume, vapor molar volume, and enthalpy of vaporization. We find improved prediction accuracy of the Clapeyron-GNN compared to the single-task learning setting, and improved approximation of the Clapeyron equation compared to the purely data-driven multi-task learning setting. In fact, we observe the largest improvement in prediction accuracy for the properties with the lowest availability of data, making our model promising for practical application in data-scarce scenarios of chemical engineering practice.
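The regularization idea described in the abstract can be sketched as a Clapeyron residual penalty. The following is a minimal illustration, not the paper's implementation: the analytic toy functions `psat`, `dh_vap`, `v_vap`, and `v_liq` are hypothetical stand-ins for the GNN's four property heads, and central finite differences stand in for automatic differentiation of the network output.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def psat(T):
    """Toy Antoine-like vapor pressure in Pa (stand-in for the ML prediction)."""
    return np.exp(23.0 - 3800.0 / (T - 46.0))

def dh_vap(T):
    """Toy enthalpy of vaporization in J/mol (assumed constant)."""
    return 4.0e4 * np.ones_like(T)

def v_vap(T):
    """Toy vapor molar volume in m^3/mol (ideal gas at the toy psat)."""
    return R * T / psat(T)

def v_liq(T):
    """Toy liquid molar volume in m^3/mol (assumed constant)."""
    return 9.0e-5 * np.ones_like(T)

def clapeyron_residual(T, eps=1e-2):
    """Residual of dp/dT = dh_vap / (T * (v_vap - v_liq)).

    dp/dT is approximated by a central finite difference; in the
    thermodynamics-informed setting it would come from autodiff.
    """
    dpdT = (psat(T + eps) - psat(T - eps)) / (2.0 * eps)
    rhs = dh_vap(T) / (T * (v_vap(T) - v_liq(T)))
    return dpdT - rhs

def clapeyron_loss(T):
    """Mean squared Clapeyron residual, usable as a regularization term."""
    return float(np.mean(clapeyron_residual(T) ** 2))

T = np.linspace(300.0, 400.0, 5)
loss = clapeyron_loss(T)
```

In training, a penalty like `clapeyron_loss` would be weighted and added to the data-fitting loss across the four property heads, pushing the multi-task predictions toward mutual thermodynamic consistency.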
Related papers
- Foundation Models for Discovery and Exploration in Chemical Space [57.97784111110166]
MIST is a family of molecular foundation models trained on large unlabeled datasets. We demonstrate the ability of these models to solve real-world problems across chemical space.
arXiv Detail & Related papers (2025-10-20T17:56:01Z) - A machine-learned expression for the excess Gibbs energy [15.799043135621565]
The excess Gibbs energy plays a central role in chemical engineering and chemistry, providing a basis for modeling the thermodynamic properties of liquid mixtures. In this work, we address this challenge by integrating physical laws as hard constraints within a flexible neural network. The resulting model, HANNA, was trained end-to-end on an extensive experimental dataset for binary mixtures from the Dortmund Data Bank.
arXiv Detail & Related papers (2025-09-08T09:47:03Z) - Why Knowledge Distillation Works in Generative Models: A Minimal Working Explanation [50.784080714897776]
Knowledge distillation (KD) is a core component in the training and deployment of modern generative models. We show that KD induces a trade-off between precision and recall in the student model. Our analysis provides a simple and general explanation for the effectiveness of KD in generative modeling.
arXiv Detail & Related papers (2025-05-19T13:39:47Z) - ChemKANs for Combustion Chemistry Modeling and Acceleration [0.0]
ChemKAN is a novel neural network framework for model inference and simulation acceleration for combustion chemistry. ChemKAN's structure augments the generic Kolmogorov-Arnold Network Ordinary Differential Equations (KAN-ODEs) with knowledge of the information flow through the relevant kinetic and thermodynamic laws. We benchmark the robustness of ChemKANs to sparse data containing up to 15% added noise, and to superfluously large network parameterizations.
arXiv Detail & Related papers (2025-04-17T01:53:28Z) - Bridging Equilibrium and Kinetics Prediction with a Data-Weighted Neural Network Model of Methane Steam Reforming [0.0]
We show a surrogate model capable of unifying both kinetic and equilibrium regimes. An artificial neural network is trained on a comprehensive dataset that includes experimental data from kinetic and equilibrium experiments. The network's ability to provide continuous derivatives of its predictions makes it particularly useful for process modeling and optimization.
arXiv Detail & Related papers (2025-04-15T14:55:06Z) - Enhancing the Scalability and Applicability of Kohn-Sham Hamiltonians for Molecular Systems [11.085215676429858]
We create a scalable model for Density Functional Theory calculations with physical accuracy. We show it achieves a reduction in total energy prediction error by a factor of 1347 and an SCF calculation speed-up of 18%.
arXiv Detail & Related papers (2025-02-26T15:36:25Z) - Pre-trained Molecular Language Models with Random Functional Group Masking [54.900360309677794]
We propose a SMILES-based Molecular Language Model, which randomly masks SMILES subsequences corresponding to specific molecular atoms.
This technique aims to compel the model to better infer molecular structures and properties, thus enhancing its predictive capabilities.
arXiv Detail & Related papers (2024-11-03T01:56:15Z) - Gibbs-Duhem-Informed Neural Networks for Binary Activity Coefficient Prediction [45.84205238554709]
We propose Gibbs-Duhem-informed neural networks for the prediction of binary activity coefficients at varying compositions.
We include the Gibbs-Duhem equation explicitly in the loss function for training neural networks.
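The Gibbs-Duhem constraint described above can be sketched as a loss-function penalty. This is a hypothetical minimal example, not the paper's code: a two-parameter Margules model (with assumed parameters `A12`, `A21`) stands in for the neural network, and finite differences stand in for autodiff. Because the Margules model satisfies the Gibbs-Duhem relation x1 d(ln gamma1)/dx1 + x2 d(ln gamma2)/dx1 = 0 exactly, the penalty evaluates to approximately zero.

```python
import numpy as np

A12, A21 = 1.5, 0.8  # assumed Margules parameters (toy values)

def ln_gamma1(x1):
    """ln(activity coefficient) of component 1, two-parameter Margules."""
    x2 = 1.0 - x1
    return x2**2 * (A12 + 2.0 * (A21 - A12) * x1)

def ln_gamma2(x1):
    """ln(activity coefficient) of component 2, two-parameter Margules."""
    x2 = 1.0 - x1
    return x1**2 * (A21 + 2.0 * (A12 - A21) * x2)

def gibbs_duhem_residual(x1, eps=1e-5):
    """x1 * d(ln g1)/dx1 + x2 * d(ln g2)/dx1, via central differences."""
    d1 = (ln_gamma1(x1 + eps) - ln_gamma1(x1 - eps)) / (2.0 * eps)
    d2 = (ln_gamma2(x1 + eps) - ln_gamma2(x1 - eps)) / (2.0 * eps)
    return x1 * d1 + (1.0 - x1) * d2

x = np.linspace(0.1, 0.9, 9)
penalty = float(np.mean(gibbs_duhem_residual(x) ** 2))  # ~0 for a consistent model
```

For a purely data-driven network the residual would generally be nonzero, and adding the squared residual to the training loss steers the model toward thermodynamic consistency.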
arXiv Detail & Related papers (2023-05-31T07:36:45Z) - SPT-NRTL: A physics-guided machine learning model to predict thermodynamically consistent activity coefficients [0.12352483741564477]
We introduce SPT-NRTL, a machine learning model to predict thermodynamically consistent activity coefficients.
SPT-NRTL achieves higher accuracy than UNIFAC in the prediction of activity coefficients across all functional groups.
arXiv Detail & Related papers (2022-09-09T06:21:05Z) - Graph Neural Networks for Temperature-Dependent Activity Coefficient Prediction of Solutes in Ionic Liquids [58.720142291102135]
We present a GNN to predict temperature-dependent infinite dilution ACs of solutes in ILs.
We train the GNN on a database including more than 40,000 AC values and compare it to a state-of-the-art MCM.
The GNN and MCM achieve similar high prediction performance, with the GNN additionally enabling high-quality predictions for ACs of solutions that contain ILs and solutes not considered during training.
arXiv Detail & Related papers (2022-06-23T15:27:29Z) - Prediction of liquid fuel properties using machine learning models with Gaussian processes and probabilistic conditional generative learning [56.67751936864119]
The present work aims to construct cheap-to-compute machine learning (ML) models to act as closure equations for predicting the physical properties of alternative fuels.
Those models can be trained using the database from MD simulations and/or experimental measurements in a data-fusion-fidelity approach.
The results show that ML models can accurately predict the fuel properties over a wide range of pressure and temperature conditions.
arXiv Detail & Related papers (2021-10-18T14:43:50Z) - Parsimonious neural networks learn interpretable physical laws [77.34726150561087]
We propose parsimonious neural networks (PNNs) that combine neural networks with evolutionary optimization to find models that balance accuracy with parsimony.
The power and versatility of the approach is demonstrated by developing models for classical mechanics and for predicting the melting temperature of materials from fundamental properties.
arXiv Detail & Related papers (2020-05-08T16:15:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.