Forecasting Lithium-Ion Battery Longevity with Limited Data
Availability: Benchmarking Different Machine Learning Algorithms
- URL: http://arxiv.org/abs/2312.05717v1
- Date: Sun, 10 Dec 2023 00:51:50 GMT
- Title: Forecasting Lithium-Ion Battery Longevity with Limited Data
Availability: Benchmarking Different Machine Learning Algorithms
- Authors: Hudson Hilal and Pramit Saha
- Abstract summary: This work aims to compare the relative performance of different machine learning algorithms, both traditional machine learning and deep learning.
We investigated 14 different machine learning models that were fed handcrafted features based on statistical data.
Deep learning models were observed to perform particularly poorly on raw, limited data.
- Score: 3.4628430044380973
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the use of Lithium-ion batteries continues to grow, it becomes
increasingly important to be able to predict their remaining useful life. This
work aims to compare the relative performance of different machine learning
algorithms, both traditional machine learning and deep learning, in order to
determine the best-performing algorithms for battery cycle life prediction
based on minimal data. We investigated 14 different machine learning models
that were fed handcrafted features based on statistical data and split into 3
feature groups for testing. For deep learning models, we tested a variety of
neural network models including different configurations of standard Recurrent
Neural Networks, Gated Recurrent Units, and Long Short-Term Memory networks,
with and without an attention mechanism. Deep learning models were fed multivariate time
series signals based on the raw data for each battery across the first 100
cycles. Our experiments revealed that the machine learning algorithms on
handcrafted features performed particularly well, achieving average mean
absolute percentage errors of 10-20%. The best-performing algorithm was the
Random Forest Regressor, which achieved a minimum mean absolute percentage error of 9.8%.
Traditional machine learning models excelled thanks to their ability to
capture general dataset trends. In comparison, deep learning models performed
particularly poorly on raw, limited data. Architectures such as GRUs and RNNs,
which focus on capturing medium-range dependencies, were less adept at
recognizing the slow, gradual trends critical for this task. Our investigation
shows that machine learning models built on handcrafted features are more
effective than advanced deep learning models for predicting the remaining
useful life of Lithium-ion batteries when data availability is limited.
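As a concrete illustration of the winning approach, here is a minimal sketch, assuming per-cell arrays of early-cycle discharge capacities: simple summary statistics stand in for the handcrafted features (the paper's exact feature set and its 3 feature groups are not reproduced here), a scikit-learn Random Forest performs the regression, and MAPE is the reported metric. The synthetic data, feature choices, and hyperparameters are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: handcrafted statistical features from the first 100
# cycles -> Random Forest -> MAPE. Features and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def handcrafted_features(capacity_curves):
    """Summary statistics over each cell's first-100-cycle capacity curve."""
    feats = []
    for q in capacity_curves:  # q: per-cycle discharge capacities, length 100
        slope = np.polyfit(np.arange(len(q)), q, 1)[0]  # linear fade slope
        feats.append([q.mean(), q.std(), q.min(), q.max(), q[-1] - q[0], slope])
    return np.asarray(feats)

def mape(y_true, y_pred):
    """Mean absolute percentage error: 100 * mean(|y - yhat| / |y|)."""
    return float(100.0 * np.mean(np.abs((y_true - y_pred) / y_true)))

# Toy stand-in for real cycling data: (n_cells, 100) capacities and a
# cycle-life target loosely tied to the early fade behaviour.
rng = np.random.default_rng(0)
capacity_curves = 1.1 - np.cumsum(rng.uniform(0, 1e-3, (64, 100)), axis=1)
cycle_life = 2000.0 * capacity_curves[:, -1] + rng.normal(0.0, 20.0, 64)

X = handcrafted_features(capacity_curves)
X_tr, X_te, y_tr, y_te = train_test_split(X, cycle_life, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"test MAPE: {mape(y_te, model.predict(X_te)):.1f}%")
```

For contrast with the recurrent baselines described above, an equally minimal sketch of a GRU regressor over raw multivariate per-cycle signals follows; the three input channels, layer sizes, and single training step are illustrative assumptions rather than any of the paper's exact configurations.

```python
# Hypothetical sketch of a deep baseline: a GRU consumes the raw signals from
# the first 100 cycles of each cell and regresses its total cycle life.
import torch
import torch.nn as nn

class GRURegressor(nn.Module):
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):       # x: (batch, 100 cycles, n_features)
        _, h = self.gru(x)      # h: (1, batch, hidden) final hidden state
        return self.head(h[-1]).squeeze(-1)

model = GRURegressor()
x = torch.randn(8, 100, 3)      # toy batch: 8 cells, 100 cycles, 3 signals
loss = nn.functional.mse_loss(model(x), 2000.0 * torch.rand(8))
loss.backward()                 # gradients for one illustrative training step
```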
Related papers
- Multi-Scale Convolutional LSTM with Transfer Learning for Anomaly Detection in Cellular Networks [1.1432909951914676]
This study introduces a novel approach, Multi-Scale Convolutional LSTM with Transfer Learning (TL), to detect anomalies in cellular networks.
The model is initially trained from scratch using a publicly available dataset to learn typical network behavior.
We compare the performance of the model trained from scratch with that of the fine-tuned model using TL.
arXiv Detail & Related papers (2024-09-30T17:51:54Z)
- Robust Learning with Progressive Data Expansion Against Spurious Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for a better worst-group performance.
arXiv Detail & Related papers (2023-06-08T05:44:06Z)
- Enhanced Gaussian Process Dynamical Models with Knowledge Transfer for Long-term Battery Degradation Forecasting [0.9208007322096533]
Predicting the end-of-life or remaining useful life of batteries in electric vehicles is a critical and challenging problem.
A number of algorithms have incorporated features available from data collected by battery management systems.
We develop a highly accurate method that overcomes the limitations of these approaches.
arXiv Detail & Related papers (2022-12-03T12:59:51Z)
- Benchmarking Learning Efficiency in Deep Reservoir Computing [23.753943709362794]
We introduce a benchmark of increasingly difficult tasks together with a data efficiency metric to measure how quickly machine learning models learn from training data.
We compare the learning speed of established sequential supervised models, such as RNNs, LSTMs, and Transformers, with lesser-known alternative models based on reservoir computing.
arXiv Detail & Related papers (2022-09-29T08:16:52Z)
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- PROMISSING: Pruning Missing Values in Neural Networks [0.0]
We propose a simple and intuitive yet effective method for pruning missing values (PROMISSING) during learning and inference steps in neural networks.
Our experiments show that PROMISSING results in similar prediction performance compared to various imputation techniques.
arXiv Detail & Related papers (2022-06-03T15:37:27Z)
- LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time [57.52251547365967]
We propose a method for training a "compressible subspace" of neural networks that contains a fine-grained spectrum of models.
We present results for achieving arbitrarily fine-grained accuracy-efficiency trade-offs at inference time for structured and unstructured sparsity.
Our algorithm extends to quantization at variable bit widths, achieving accuracy on par with individually trained networks.
arXiv Detail & Related papers (2021-10-08T17:03:34Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Statistical learning for accurate and interpretable battery lifetime prediction [1.738360170201861]
We develop simple, accurate, and interpretable data-driven models for battery lifetime prediction.
Our approaches can be used both to quickly train models for a new dataset and to benchmark the performance of more advanced machine learning methods.
arXiv Detail & Related papers (2021-01-06T06:05:24Z)
- Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation [97.42894942391575]
We propose FAST-DAD to distill arbitrarily complex ensemble predictors into individual models like boosted trees, random forests, and deep networks.
Our individual distilled models are over 10x faster and more accurate than ensemble predictors produced by AutoML tools like H2O/AutoSklearn.
arXiv Detail & Related papers (2020-06-25T09:57:47Z)