Flexible Parallel Neural Network Architecture Model for Early Prediction
of Lithium Battery Life
- URL: http://arxiv.org/abs/2401.16102v1
- Date: Mon, 29 Jan 2024 12:20:17 GMT
- Title: Flexible Parallel Neural Network Architecture Model for Early Prediction
of Lithium Battery Life
- Authors: Lidang Jiang, Zhuoxiang Li, Changyan Hu, Qingsong Huang, Ge He
- Abstract summary: The early prediction of battery life (EPBL) is vital for enhancing the efficiency and extending the lifespan of lithium batteries.
Traditional models with fixed architectures often encounter underfitting or overfitting issues due to the diverse data distributions in different EPBL tasks.
An interpretable deep learning model of flexible parallel neural network (FPNN) is proposed, which includes an InceptionBlock, a 3D convolutional neural network (CNN), a 2D CNN, and a dual-stream network.
The proposed model effectively extracts electrochemical features from video-like formatted data using the 3D CNN and achieves advanced multi-scale feature abstraction through the InceptionBlock.
- Score: 0.8530934084017966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The early prediction of battery life (EPBL) is vital for enhancing the
efficiency and extending the lifespan of lithium batteries. Traditional models
with fixed architectures often encounter underfitting or overfitting issues due
to the diverse data distributions in different EPBL tasks. An interpretable
deep learning model of flexible parallel neural network (FPNN) is proposed,
which includes an InceptionBlock, a 3D convolutional neural network (CNN), a 2D
CNN, and a dual-stream network. The proposed model effectively extracts
electrochemical features from video-like formatted data using the 3D CNN and
achieves advanced multi-scale feature abstraction through the InceptionBlock.
The FPNN can adaptively adjust the number of InceptionBlocks to flexibly handle
tasks of varying complexity in EPBL. The test on the MIT dataset shows that the
FPNN model achieves outstanding predictive accuracy in EPBL tasks, with MAPEs
of 2.47%, 1.29%, 1.08%, and 0.88% when the input cyclic data volumes are 10,
20, 30, and 40, respectively. The interpretability of the FPNN is mainly
reflected in its flexible unit structure and parameter selection: its diverse
branching structure enables the model to capture features at different scales,
thus enabling the model to learn informative features. The approach presented
herein provides an accurate, adaptable, and comprehensible solution for early
life prediction of lithium batteries, opening new possibilities in the field of
battery health monitoring.
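As a concrete reference for the error metric quoted above, the MAPE figures (2.47% down to 0.88%) can be computed as follows; this is a generic sketch of the standard metric, and the cycle-life numbers below are hypothetical, not from the paper.

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error (%), the metric reported for the FPNN."""
    assert len(y_true) == len(y_pred) and y_true
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical cycle-life values (in cycles), for illustration only:
true_life = [1100, 820, 950]
pred_life = [1078, 830, 960]
print(f"MAPE = {mape(true_life, pred_life):.2f}%")  # → MAPE = 1.42%
```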
Related papers
- Power Flow Analysis Using Deep Neural Networks in Three-Phase Unbalanced
Smart Distribution Grids [0.7037008937757394]
Three deep neural networks (DNNs) are proposed in this paper to predict power flow (PF) solutions.
The training and testing data are generated through the OpenDSS-MATLAB COM interface.
The novelty of the proposed methodology is that the models can accurately predict the PF solutions for the unbalanced distribution grids.
arXiv Detail & Related papers (2024-01-15T04:43:37Z)
- Predicting Infant Brain Connectivity with Federated Multi-Trajectory GNNs using Scarce Data [54.55126643084341]
Existing deep learning solutions suffer from three major limitations.
We introduce FedGmTE-Net++, a federated graph-based multi-trajectory evolution network.
Using the power of federation, we aggregate local learnings among diverse hospitals with limited datasets.
arXiv Detail & Related papers (2024-01-01T10:20:01Z)
- Physics-Informed Neural Networks for Prognostics and Health Management of Lithium-Ion Batteries [8.929862063890974]
We propose a model fusion scheme based on Physics-Informed Neural Network (PINN)
It is implemented by developing a semi-empirical semi-physical Partial Differential Equation (PDE) to model the degradation dynamics of Li-ion batteries.
The uncovered dynamics information is then fused with that mined by the surrogate neural network in the PINN framework.
arXiv Detail & Related papers (2023-01-02T17:51:23Z)
- Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs) represented by long short term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z)
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
- Enhanced physics-constrained deep neural networks for modeling vanadium redox flow battery [62.997667081978825]
We propose an enhanced version of the physics-constrained deep neural network (PCDNN) approach to provide high-accuracy voltage predictions.
The ePCDNN can accurately capture the voltage response throughout the charge--discharge cycle, including the tail region of the voltage discharge curve.
arXiv Detail & Related papers (2022-03-03T19:56:24Z)
- Opportunistic Emulation of Computationally Expensive Simulations via Deep Learning [9.13837510233406]
We investigate the use of deep neural networks for opportunistic model emulation of APSIM models.
We focus on emulating four important outputs of the APSIM model: runoff, soil_loss, DINrunoff, Nleached.
arXiv Detail & Related papers (2021-08-25T05:57:16Z)
- Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to solve the AC optimal power flow (AC-OPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z)
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the MP model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
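For context, the "MP model" neuron that the Flexible Transmitter abstract above contrasts against (an activation function applied to the real-valued weighted aggregation of incoming signals) can be sketched as follows; this is a generic illustration of the MP formulation, not the paper's FT neuron.

```python
import math

def mp_neuron(inputs, weights, bias=0.0, activation=math.tanh):
    """MP-style neuron: activation of the real-valued weighted sum of inputs."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

# The weighted inputs cancel here (0.5*1.0 - 0.25*2.0 = 0), so the output is tanh(0) = 0.
print(mp_neuron([1.0, 2.0], [0.5, -0.25]))  # → 0.0
```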
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.