SPIN-ODE: Stiff Physics-Informed Neural ODE for Chemical Reaction Rate Estimation
- URL: http://arxiv.org/abs/2505.05625v2
- Date: Wed, 18 Jun 2025 18:09:35 GMT
- Title: SPIN-ODE: Stiff Physics-Informed Neural ODE for Chemical Reaction Rate Estimation
- Authors: Wenqing Peng, Zhi-Song Liu, Michael Boy
- Abstract summary: Estimating rate coefficients from complex chemical reactions is essential for advancing detailed chemistry. We propose a Stiff Physics-Informed Neural ODE framework (SPIN-ODE) for chemical reaction modelling. Our method introduces a three-stage optimisation process: first, a latent neural ODE learns the trajectory between chemical concentrations and their time derivatives; second, an explicit Chemical Reaction Neural Network (CRNN) extracts the underlying rate coefficients based on the learned dynamics; and third, the CRNN is fine-tuned using a neural ODE solver to further improve rate coefficient estimation.
- Score: 6.84242299603086
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Estimating rate coefficients from complex chemical reactions is essential for advancing detailed chemistry. However, the stiffness inherent in real-world atmospheric chemistry systems poses severe challenges, leading to training instability and poor convergence that hinder effective rate coefficient estimation using learning-based approaches. To address this, we propose a Stiff Physics-Informed Neural ODE framework (SPIN-ODE) for chemical reaction modelling. Our method introduces a three-stage optimisation process: first, a latent neural ODE learns the continuous and differentiable trajectory between chemical concentrations and their time derivatives; second, an explicit Chemical Reaction Neural Network (CRNN) extracts the underlying rate coefficients based on the learned dynamics; and third, the CRNN is fine-tuned using a neural ODE solver to further improve rate coefficient estimation. Extensive experiments on both synthetic and newly proposed real-world datasets validate the effectiveness and robustness of our approach. As the first work on stiff Neural ODEs for chemical rate coefficient discovery, our study opens promising directions for integrating neural networks with detailed chemistry.
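As a rough illustration of what the CRNN stage in the second step parameterises, the sketch below evaluates a mass-action rate law of the form dc/dt = Sᵀ(k ⊙ exp(V ln c)), where a CRNN would make the exponents and log-rate coefficients learnable. The toy reaction system, stoichiometry, and rate constants are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Illustrative two-reaction system: A -> B and 2B -> C.
V = np.array([[1, 0, 0],    # reactant orders, reaction 1 (A -> B)
              [0, 2, 0]])   # reactant orders, reaction 2 (2B -> C)
S = np.array([[-1, 1, 0],   # net stoichiometry, reaction 1
              [0, -2, 1]])  # net stoichiometry, reaction 2
k = np.array([0.5, 0.1])    # rate coefficients (the estimation target)

def dcdt(c):
    # exp(V @ ln c) equals prod_i c_i^{V_ji}: the mass-action rate law
    # that a CRNN encodes with learnable exponents and log-coefficients.
    rates = k * np.exp(V @ np.log(c))
    return S.T @ rates

c0 = np.array([1.0, 0.5, 0.2])  # concentrations of A, B, C
print(dcdt(c0))                 # ≈ [-0.5, 0.45, 0.025]
```

Given trajectories of c(t) and their derivatives (stage one), stage two fits k (and optionally V, S) so that dcdt(c) matches the observed derivatives.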
Related papers
- Fractional Spike Differential Equations Neural Network with Efficient Adjoint Parameters Training [63.3991315762955]
Spiking Neural Networks (SNNs) draw inspiration from biological neurons to create realistic models for brain-like computation. Most existing SNNs assume a single time constant for neuronal membrane voltage dynamics, modeled by first-order ordinary differential equations (ODEs) with Markovian characteristics. We propose the Fractional SPIKE Differential Equation neural network (fspikeDE), which captures long-term dependencies in membrane voltage and spike trains through fractional-order dynamics.
arXiv Detail & Related papers (2025-07-22T18:20:56Z) - NOBLE -- Neural Operator with Biologically-informed Latent Embeddings to Capture Experimental Variability in Biological Neuron Models [68.89389652724378]
NOBLE is a neural operator framework that learns a mapping from a continuous frequency-modulated embedding of interpretable neuron features to the somatic voltage response induced by current injection. It predicts distributions of neural dynamics accounting for the intrinsic experimental variability. NOBLE is the first scaled-up deep learning framework validated on real experimental data.
arXiv Detail & Related papers (2025-06-05T01:01:18Z) - ChemKANs for Combustion Chemistry Modeling and Acceleration [0.0]
Machine learning techniques have been proposed to streamline chemical kinetic model inference. ChemKAN can accurately represent hydrogen combustion chemistry, providing a 2x acceleration over the detailed chemistry in a solver. These demonstrations indicate potential for ChemKANs in combustion physics and chemical kinetics.
arXiv Detail & Related papers (2025-04-17T01:53:28Z) - Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval. A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed. The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z) - Neural Network Emulator for Atmospheric Chemical ODE [6.84242299603086]
We propose a Neural Network Emulator for fast chemical concentration modeling. To extract the hidden correlations between initial states and future time evolution, we propose ChemNNE. Our approach achieves state-of-the-art performance in modeling accuracy and computational speed.
arXiv Detail & Related papers (2024-08-03T17:43:10Z) - Speeding up astrochemical reaction networks with autoencoders and neural ODEs [0.0]
In astrophysics, solving complex chemical reaction networks is essential but computationally demanding.
Traditional approaches for reducing computational load are often specialized to specific chemical networks and require expert knowledge.
This paper introduces a machine learning-based solution employing autoencoders for dimensionality reduction and a latent space neural ODE solver to accelerate astrochemical reaction network computations.
arXiv Detail & Related papers (2023-12-10T22:04:18Z) - A Posteriori Evaluation of a Physics-Constrained Neural Ordinary Differential Equations Approach Coupled with CFD Solver for Modeling Stiff Chemical Kinetics [4.125745341349071]
We extend the NeuralODE framework for stiff chemical kinetics by incorporating mass conservation constraints directly into the loss function during training.
This ensures that the total mass and the elemental mass are conserved, a critical requirement for reliable downstream integration with CFD solvers.
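The mass-conservation idea described above can be sketched as a penalty term: for an elemental composition matrix E (elements x species), mass-action dynamics satisfy E @ dc/dt = 0, so any violation of that identity by a predicted derivative can be added to the training loss. The H2/O2/H2O system, matrix, and weight below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Elemental composition matrix: rows are elements (H, O),
# columns are species (H2, O2, H2O).
E = np.array([[2, 0, 2],   # H atoms per molecule
              [0, 2, 1]])  # O atoms per molecule

def conservation_penalty(dcdt_pred, weight=1.0):
    # Squared norm of the elemental-mass violation E @ dc/dt,
    # added to the data-fitting loss during training.
    return weight * np.sum((E @ dcdt_pred) ** 2)

# 2 H2 + O2 -> 2 H2O at rate r gives dc/dt = r * [-2, -1, 2],
# which conserves both elements, so the penalty vanishes.
dcdt = 0.3 * np.array([-2.0, -1.0, 2.0])
print(conservation_penalty(dcdt))  # ≈ 0.0
```

A derivative that created or destroyed atoms (e.g. [-2, -1, 3]) would yield a positive penalty, steering training back toward physically consistent dynamics.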
arXiv Detail & Related papers (2023-11-22T22:40:49Z) - Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
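The Stein's-identity trick mentioned above can be illustrated with a one-dimensional Monte Carlo estimator: for the Gaussian-smoothed model f_sigma(x) = E[f(x + sigma*eps)], the second derivative can be estimated without back-propagation as E[f(x + sigma*eps) * (eps^2 - 1)] / sigma^2. The test function, smoothing scale, and sample count below are illustrative choices, not the paper's setup.

```python
import numpy as np

def second_derivative_stein(f, x, sigma=0.5, n=100_000, seed=0):
    # Stein's identity for a Gaussian: the smoothed model's second
    # derivative equals E[f(x + sigma*eps) * (eps^2 - 1)] / sigma^2,
    # so only forward evaluations of f are needed (no back-propagation).
    eps = np.random.default_rng(seed).standard_normal(n)
    return np.mean(f(x + sigma * eps) * (eps**2 - 1)) / sigma**2

f = lambda x: x**2            # analytic second derivative is 2 everywhere
est = second_derivative_stein(f, x=1.0)
print(est)                    # close to 2, up to Monte Carlo noise
```

The same forward-evaluation-only principle is what lets PINN residual terms involving second derivatives be computed without stacked back-propagation through the network.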
arXiv Detail & Related papers (2022-02-18T18:07:54Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z) - Kinetics-Informed Neural Networks [0.0]
We use feed-forward artificial neural networks as basis functions for the construction of surrogate models to solve ordinary differential equations.
We show that the simultaneous training of neural nets and kinetic model parameters in a regularized multiobjective optimization setting leads to the solution of the inverse problem.
This surrogate approach to inverse kinetic ODEs can assist in the elucidation of reaction mechanisms based on transient data.
arXiv Detail & Related papers (2020-11-30T00:07:09Z) - Retro*: Learning Retrosynthetic Planning with Neural Guided A* Search [83.22850633478302]
Retrosynthetic planning identifies a series of reactions that can lead to the synthesis of a target product.
Existing methods either require expensive return estimation by rollout with high variance, or optimize for search speed rather than solution quality.
We propose Retro*, a neural-based A*-like algorithm that finds high-quality synthetic routes efficiently.
arXiv Detail & Related papers (2020-06-29T05:53:33Z) - Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both deterministic and stochastic versions of the same model.
However, the improvements obtained by the data augmentation completely eliminate the empirical regularization gains, making the performance gap between neural ODE and neural SDE negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.