Enhanced physics-constrained deep neural networks for modeling vanadium
redox flow battery
- URL: http://arxiv.org/abs/2203.01985v1
- Date: Thu, 3 Mar 2022 19:56:24 GMT
- Title: Enhanced physics-constrained deep neural networks for modeling vanadium
redox flow battery
- Authors: QiZhi He, Yucheng Fu, Panos Stinis, Alexandre Tartakovsky
- Abstract summary: We propose an enhanced version of the physics-constrained deep neural network (PCDNN) approach to provide high-accuracy voltage predictions.
The ePCDNN can accurately capture the voltage response throughout the charge-discharge cycle, including the tail region of the voltage discharge curve.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerical modeling and simulation have become indispensable tools for
advancing a comprehensive understanding of the underlying mechanisms and
cost-effective process optimization and control of flow batteries. In this
study, we propose an enhanced version of the physics-constrained deep neural
network (PCDNN) approach [1] to provide high-accuracy voltage predictions in
the vanadium redox flow batteries (VRFBs). The purpose of the PCDNN approach is
to enforce the physics-based zero-dimensional (0D) VRFB model in a neural
network to assure model generalization for various battery operation
conditions. Limited by the simplifications of the 0D model, the PCDNN cannot
capture sharp voltage changes in the extreme state-of-charge (SOC) regions. To improve the
accuracy of voltage prediction at extreme ranges, we introduce a second
(enhanced) DNN to mitigate the prediction errors carried from the 0D model
itself and call the resulting approach enhanced PCDNN (ePCDNN). By comparing
the model prediction with experimental data, we demonstrate that the ePCDNN
approach can accurately capture the voltage response throughout the
charge-discharge cycle, including the tail region of the voltage discharge
curve. Compared to the standard PCDNN, the prediction accuracy of the ePCDNN is
significantly improved. The loss function for training the ePCDNN is designed
to be flexible by adjusting the weights of the physics-constrained DNN and the
enhanced DNN. This allows the ePCDNN framework to be transferable to battery
systems with variable physical model fidelity.
Related papers
- Online model error correction with neural networks: application to the
Integrated Forecasting System [0.27930367518472443]
We develop a model error correction for the European Centre for Medium-Range Weather Forecasts' Integrated Forecasting System (IFS) using a neural network.
The network is pre-trained offline using a large dataset of operational analyses and analysis increments.
It is then integrated into the IFS within the Object-Oriented Prediction System (OOPS) so as to be used in data assimilation and forecast experiments.
arXiv Detail & Related papers (2024-03-06T13:36:31Z)
- Flexible Parallel Neural Network Architecture Model for Early Prediction of Lithium Battery Life [0.8530934084017966]
The early prediction of battery life (EPBL) is vital for enhancing the efficiency and extending the lifespan of lithium batteries.
Traditional models with fixed architectures often encounter underfitting or overfitting issues due to the diverse data distributions in different EPBL tasks.
An interpretable deep learning model, the flexible parallel neural network (FPNN), is proposed; it includes an InceptionBlock, a 3D convolutional neural network (CNN), a 2D CNN, and a dual-stream network.
The proposed model effectively extracts electrochemical features from video-like formatted data using the 3D CNN and achieves advanced multi-scale feature abstraction through …
arXiv Detail & Related papers (2024-01-29T12:20:17Z)
- Fast Cell Library Characterization for Design Technology Co-Optimization Based on Graph Neural Networks [0.1752969190744922]
Design technology co-optimization (DTCO) plays a critical role in achieving optimal power, performance, and area.
We propose a graph neural network (GNN)-based machine learning model for rapid and accurate cell library characterization.
arXiv Detail & Related papers (2023-12-20T06:10:27Z)
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
- Fault-Aware Design and Training to Enhance DNNs Reliability with Zero-Overhead [67.87678914831477]
Deep Neural Networks (DNNs) enable a wide series of technological advancements.
Recent findings indicate that transient hardware faults may dramatically corrupt the model's predictions.
In this work, we propose to tackle the reliability issue both at training and model design time.
arXiv Detail & Related papers (2022-05-28T13:09:30Z)
- Physics-constrained deep neural network method for estimating parameters in a redox flow battery [68.8204255655161]
We present a physics-constrained deep neural network (PCDNN) method for parameter estimation in the zero-dimensional (0D) model of the vanadium redox flow battery (VRFB).
We show that the PCDNN method can estimate model parameters for a range of operating conditions and improve the 0D model prediction of voltage.
We also demonstrate that the PCDNN approach has an improved generalization ability for estimating parameter values for operating conditions not used in the training.
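The PCDNN pattern summarized above — a network that maps operating conditions to 0D-model parameters, which are then fed to the physics model — can be sketched as follows. This is a minimal illustrative stand-in: the tiny MLP, the Nernst-style voltage form, and the two-parameter layout are all assumptions, not the paper's architecture.

```python
import numpy as np

def estimate_params(conditions, W1, b1, W2, b2):
    """Tiny MLP mapping operating conditions (e.g. flow rate, current
    density) to 0D-model parameters; shapes here are hypothetical."""
    h = np.tanh(conditions @ W1 + b1)  # hidden layer
    return h @ W2 + b2                 # one parameter vector per condition row

def voltage_from_conditions(soc, conditions, weights):
    """Evaluate a Nernst-style 0D voltage with NN-estimated parameters
    (an illustrative stand-in for the actual VRFB model)."""
    params = estimate_params(conditions, *weights)
    E0, k = params[:, 0], params[:, 1]
    soc = np.clip(soc, 1e-6, 1 - 1e-6)  # keep log argument finite
    return E0 + k * np.log(soc / (1.0 - soc))
```

Because the parameters are predicted from operating conditions rather than fitted per experiment, the same trained network can be evaluated at unseen conditions, which is the generalization property the summary highlights.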
arXiv Detail & Related papers (2021-06-21T23:42:58Z)
- Physics-informed CoKriging model of a redox flow battery [68.8204255655161]
Redox flow batteries (RFBs) offer the capability to store large amounts of energy cheaply and efficiently.
There is a need for fast and accurate models of the charge-discharge curve of an RFB to potentially improve battery capacity and performance.
We develop a multifidelity model for predicting the charge-discharge curve of an RFB.
arXiv Detail & Related papers (2021-06-17T00:49:55Z)
- Alleviation of Temperature Variation Induced Accuracy Degradation in Ferroelectric FinFET Based Neural Network [0.0]
We adopt a pre-trained artificial neural network with 96.4% inference accuracy on the MNIST dataset as the baseline.
We observe a significant inference accuracy degradation in the analog neural network at 233 K for an NN trained at 300 K.
We deploy binary neural networks with "read voltage" optimization to ensure immunity of NN to accuracy degradation under temperature variation.
arXiv Detail & Related papers (2021-03-03T16:06:03Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information (including all content) and is not responsible for any consequences of its use.