Fast and Generalizable parameter-embedded Neural Operators for Lithium-Ion Battery Simulation
- URL: http://arxiv.org/abs/2508.08087v1
- Date: Mon, 11 Aug 2025 15:31:23 GMT
- Title: Fast and Generalizable parameter-embedded Neural Operators for Lithium-Ion Battery Simulation
- Authors: Amir Ali Panahi, Daniel Luder, Billy Wu, Gregory Offer, Dirk Uwe Sauer, Weihan Li
- Abstract summary: We benchmark three operator-learning surrogates for the Single Particle Model (SPM): Deep Operator Networks (DeepONets), Fourier Neural Operators (FNOs), and a newly proposed parameter-embedded Fourier Neural Operator (PE-FNO). DeepONet accurately replicates constant-current behaviour but struggles with more dynamic loads. FNO maintains mesh invariance and keeps concentration errors below 1 %, with voltage mean-absolute errors under 1.7 mV across all load types. PE-FNO executes approximately 200 times faster than a 16-thread SPM solver.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reliable digital twins of lithium-ion batteries must achieve high physical fidelity with sub-millisecond speed. In this work, we benchmark three operator-learning surrogates for the Single Particle Model (SPM): Deep Operator Networks (DeepONets), Fourier Neural Operators (FNOs), and a newly proposed parameter-embedded Fourier Neural Operator (PE-FNO), which conditions each spectral layer on particle radius and solid-phase diffusivity. Models are trained on simulated trajectories spanning four current families (constant, triangular, pulse-train, and Gaussian-random-field) and the full State-of-Charge (SOC) range (0 % to 100 %). DeepONet accurately replicates constant-current behaviour but struggles with more dynamic loads. The basic FNO maintains mesh invariance and keeps concentration errors below 1 %, with voltage mean-absolute errors under 1.7 mV across all load types. Introducing parameter embedding marginally increases error, but enables generalisation to varying radii and diffusivities. PE-FNO executes approximately 200 times faster than a 16-thread SPM solver. We therefore explore PE-FNO's capabilities in inverse tasks via a Bayesian-optimisation parameter estimation study, recovering anode and cathode diffusivities with 1.14 % and 8.4 % mean absolute percentage error, respectively, only 0.5918 percentage points higher than classical methods. These results pave the way for neural operators to meet the accuracy, speed and parametric flexibility demands of real-time battery management, design-of-experiments and large-scale inference. PE-FNO outperforms conventional neural surrogates, offering a practical path towards high-speed and high-fidelity electrochemical digital twins.
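The parameter-embedding idea described in the abstract, conditioning each spectral layer on particle radius and solid-phase diffusivity, can be pictured as a Fourier layer whose spectral weights are modulated by a small function of the physical parameters. The sketch below is a minimal NumPy illustration under invented stand-in weights and an assumed tanh gating form, not the paper's architecture; it also hints at the mesh invariance noted in the abstract, since the same fixed number of modes applies at any grid resolution.

```python
import numpy as np

rng = np.random.default_rng(0)

N_MODES = 16  # number of retained Fourier modes (illustrative choice)

# Hypothetical "trained" weights -- random stand-ins for illustration only.
R = rng.standard_normal(N_MODES) + 1j * rng.standard_normal(N_MODES)  # spectral weights
A = 0.1 * rng.standard_normal((2, N_MODES))                           # parameter-embedding matrix
b = np.zeros(N_MODES)                                                 # embedding bias
w_skip = 0.5                                                          # pointwise skip weight

def pe_fourier_layer(u, theta):
    """One parameter-embedded Fourier layer (sketch).

    u     : field sampled on a uniform grid, shape (n,)
    theta : physical parameters (log particle radius, log diffusivity), shape (2,)
    """
    n = u.shape[0]
    u_hat = np.fft.rfft(u)                    # to spectral space
    g = 1.0 + np.tanh(theta @ A + b)          # parameter-dependent gain per mode (assumed form)
    out_hat = np.zeros_like(u_hat)
    k = min(N_MODES, u_hat.shape[0])
    out_hat[:k] = u_hat[:k] * R[:k] * g[:k]   # modulated spectral multiplication
    v = np.fft.irfft(out_hat, n=n)            # back to physical space
    return np.tanh(w_skip * u + v)            # skip connection + nonlinearity

# Mesh invariance: the same layer evaluates on any grid resolution.
theta = np.array([np.log(5e-6), np.log(1e-14)])  # e.g. radius [m], diffusivity [m^2/s]
coarse = pe_fourier_layer(np.sin(np.linspace(0, 2 * np.pi, 64)), theta)
fine = pe_fourier_layer(np.sin(np.linspace(0, 2 * np.pi, 256)), theta)
```

Because only the first `N_MODES` coefficients carry learned weights, the layer is discretisation-independent: evaluating on 64 or 256 grid points reuses exactly the same parameters.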
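The four training-load families named in the abstract (constant, triangular, pulse-train, and Gaussian-random-field currents) can be mocked up as simple signal generators. The amplitudes, periods, and the smoothed-noise stand-in for a Gaussian random field below are illustrative assumptions, not the paper's data-generation protocol.

```python
import numpy as np

def current_profile(kind, t, amp=1.0, period=200.0, rng=None):
    """Generate one load profile from the four families in the abstract (sketch)."""
    if rng is None:
        rng = np.random.default_rng(0)
    if kind == "constant":
        return np.full_like(t, amp)
    if kind == "triangular":
        phase = (t / period) % 1.0
        return amp * (2.0 * np.abs(2.0 * phase - 1.0) - 1.0)  # symmetric triangle wave
    if kind == "pulse":
        return amp * np.where((t / period) % 1.0 < 0.5, 1.0, 0.0)  # 50% duty-cycle pulses
    if kind == "grf":
        # Crude Gaussian-random-field stand-in: Gaussian-smoothed white noise.
        noise = rng.standard_normal(t.size)
        width = max(1, t.size // 50)
        kernel = np.exp(-0.5 * (np.arange(-3 * width, 3 * width + 1) / width) ** 2)
        smooth = np.convolve(noise, kernel / kernel.sum(), mode="same")
        return amp * smooth / np.abs(smooth).max()
    raise ValueError(f"unknown profile kind: {kind}")

t = np.linspace(0.0, 1000.0, 2001)  # time grid in seconds (illustrative)
profiles = {k: current_profile(k, t) for k in ("constant", "triangular", "pulse", "grf")}
```

Sampling many such profiles per family, paired with a numerical SPM solve, would produce training trajectories of the kind the abstract describes.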
Related papers
- Surrogate Modeling for Neutron Transport: A Neural Operator Approach [7.289597749952393]
This work introduces a neural-operator-based surrogate modeling framework for neutron transport computation. Two architectures, the Deep Operator Network (DeepONet) and the Fourier Neural Operator (FNO), were trained for fixed-source problems. Both neural operator solvers reproduced reference eigenvalues with deviations of up to 135 pcm for DeepONet and 112 pcm for FNO.
arXiv Detail & Related papers (2026-02-07T00:56:07Z) - Physics Enhanced Deep Surrogates for the Phonon Boltzmann Transport Equation [0.0]
The Physics-Enhanced Deep Surrogate (PEDS) network learns geometry-dependent corrections and a mixing coefficient that interpolates between macroscopic and nano-scale behavior. PEDS reduces training-data requirements by up to 70% compared with purely data-driven baselines.
arXiv Detail & Related papers (2025-11-25T16:25:24Z) - Physics-informed Neural Operator Learning for Nonlinear Grad-Shafranov Equation [18.564353542797946]
In magnetic-confinement nuclear fusion, rapid and accurate solution of the Grad-Shafranov equation (GSE) is essential for real-time plasma control and analysis. Traditional numerical solvers achieve high precision but are computationally prohibitive, while data-driven surrogates infer quickly but fail to enforce physical laws and generalize poorly beyond training distributions. We present a Physics-Informed Neural Operator (PINO) that directly learns the GSE solution operator, mapping shape parameters of the last closed flux surface to equilibrium solutions for realistic nonlinear current profiles.
arXiv Detail & Related papers (2025-11-24T13:46:38Z) - Fixed Point Neural Acceleration and Inverse Surrogate Model for Battery Parameter Identification [5.094264803596951]
We propose a deep-learning-based framework for parameter identification of electrochemical battery models. The proposed framework combines a neural surrogate model of the single particle model with electrolyte (NeuralSPMe) and a deep-learning-based fixed-point iteration method.
arXiv Detail & Related papers (2025-10-28T07:20:38Z) - Accurate Ab-initio Neural-network Solutions to Large-Scale Electronic Structure Problems [52.19558333652367]
We present finite-range embeddings (FiRE) for accurate large-scale ab-initio electronic structure calculations. FiRE reduces the complexity of neural-network variational Monte Carlo (NN-VMC) by a factor of $\sim n_\text{el}$, the number of electrons. We validate our method's accuracy on various challenging systems, including biochemical compounds and organometallic compounds.
arXiv Detail & Related papers (2025-04-08T14:28:54Z) - A Multi-Grained Symmetric Differential Equation Model for Learning Protein-Ligand Binding Dynamics [73.35846234413611]
In drug discovery, molecular dynamics (MD) simulation provides a powerful tool for predicting binding affinities, estimating transport properties, and exploring pocket sites.
We propose NeuralMD, the first machine learning (ML) surrogate that can facilitate numerical MD and provide accurate simulations in protein-ligand binding dynamics.
We demonstrate the efficiency and effectiveness of NeuralMD, achieving over 1K$\times$ speedup compared to standard numerical MD simulations.
arXiv Detail & Related papers (2024-01-26T09:35:17Z) - Fast Cell Library Characterization for Design Technology Co-Optimization Based on Graph Neural Networks [0.1752969190744922]
Design technology co-optimization (DTCO) plays a critical role in achieving optimal power, performance, and area.
We propose a graph neural network (GNN)-based machine learning model for rapid and accurate cell library characterization.
arXiv Detail & Related papers (2023-12-20T06:10:27Z) - Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
arXiv Detail & Related papers (2023-07-27T17:42:06Z) - Speed Limits for Deep Learning [67.69149326107103]
Recent advancement in thermodynamics allows bounding the speed at which one can go from the initial weight distribution to the final distribution of the fully trained network.
We provide analytical expressions for these speed limits for linear and linearizable neural networks.
Remarkably, given plausible scaling assumptions on the NTK spectra and the spectral decomposition of the labels, learning is optimal in a scaling sense.
arXiv Detail & Related papers (2023-07-27T06:59:46Z) - Enhanced physics-constrained deep neural networks for modeling vanadium redox flow battery [62.997667081978825]
We propose an enhanced version of the physics-constrained deep neural network (PCDNN) approach to provide high-accuracy voltage predictions.
The ePCDNN can accurately capture the voltage response throughout the charge-discharge cycle, including the tail region of the voltage discharge curve.
arXiv Detail & Related papers (2022-03-03T19:56:24Z) - Physics-constrained deep neural network method for estimating parameters in a redox flow battery [68.8204255655161]
We present a physics-constrained deep neural network (PCDNN) method for parameter estimation in the zero-dimensional (0D) model of the vanadium redox flow battery (VRFB).
We show that the PCDNN method can estimate model parameters for a range of operating conditions and improve the 0D model prediction of voltage.
We also demonstrate that the PCDNN approach has improved generalization, estimating parameter values for operating conditions not seen during training.
arXiv Detail & Related papers (2021-06-21T23:42:58Z) - Physics-informed CoKriging model of a redox flow battery [68.8204255655161]
Redox flow batteries (RFBs) offer the capability to store large amounts of energy cheaply and efficiently.
There is a need for fast and accurate models of the charge-discharge curve of a RFB to potentially improve the battery capacity and performance.
We develop a multifidelity model for predicting the charge-discharge curve of a RFB.
arXiv Detail & Related papers (2021-06-17T00:49:55Z) - Physics-aware deep neural networks for surrogate modeling of turbulent natural convection [0.0]
We investigate the use of PINN surrogate modeling for turbulent Rayleigh-Bénard convection flows.
We show how it acts as a regularization close to the training boundaries, which are zones of poor accuracy for standard PINNs.
The predictive accuracy of the surrogate over the entire half-billion DNS coordinates yields errors for all flow variables ranging between 0.3% and 4% in the relative $L^2$ norm.
arXiv Detail & Related papers (2021-03-05T09:48:57Z)