Statistical learning for accurate and interpretable battery lifetime prediction
- URL: http://arxiv.org/abs/2101.01885v1
- Date: Wed, 6 Jan 2021 06:05:24 GMT
- Title: Statistical learning for accurate and interpretable battery lifetime prediction
- Authors: Peter M. Attia, Kristen A. Severson, Jeremy D. Witmer
- Abstract summary: We develop simple, accurate, and interpretable data-driven models for battery lifetime prediction.
Our approaches can be used both to quickly train models for a new dataset and to benchmark the performance of more advanced machine learning methods.
- Score: 1.738360170201861
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data-driven methods for battery lifetime prediction are attracting increasing
attention for applications in which the degradation mechanisms are poorly
understood and suitable training sets are available. However, while advanced
machine learning and deep learning methods offer high performance with minimal
feature engineering, simpler "statistical learning" methods often achieve
comparable performance, especially for small training sets, while also
providing physical and statistical interpretability. In this work, we use a
previously published dataset to develop simple, accurate, and interpretable
data-driven models for battery lifetime prediction. We first present the
"capacity matrix" concept as a compact representation of battery
electrochemical cycling data, along with a series of feature representations.
We then create a number of univariate and multivariate models, many of which
achieve comparable performance to the highest-performing models previously
published for this dataset. These models also provide insights into the
degradation of these cells. Our approaches can be used both to quickly train
models for a new dataset and to benchmark the performance of more advanced
machine learning methods.
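To make the abstract's workflow concrete, below is a minimal sketch of the kind of pipeline it describes: assemble a capacity matrix from cycling data, derive a ΔQ-style feature in the spirit of the previously published Severson et al. dataset the paper builds on, and fit an interpretable regularized linear model. The array shapes, synthetic data, and specific feature and model choices are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.metrics import mean_absolute_percentage_error

# Hypothetical inputs: for each cell, discharge capacity interpolated onto a
# fixed voltage grid at each cycle. Shapes and values are stand-ins, not the
# paper's data.
n_cells, n_cycles, n_voltages = 124, 100, 1000
rng = np.random.default_rng(0)
Q = rng.normal(1.07, 0.01, size=(n_cells, n_cycles, n_voltages))  # stand-in cycling data
lifetimes = rng.integers(300, 2000, size=n_cells)                 # stand-in cycle lives

# "Capacity matrix": one row per cell, one column per cycle, summarizing the
# discharge capacity curve at each cycle (here simply by its mean over voltage).
capacity_matrix = Q.mean(axis=2)                         # shape (n_cells, n_cycles)

# Example feature inspired by the earlier work on this dataset: log variance of
# the difference between a late and an early capacity-vs-voltage curve.
delta_Q = Q[:, 99, :] - Q[:, 9, :]                       # ΔQ between cycle 100 and cycle 10
var_delta_Q = np.log10(np.var(delta_Q, axis=1))          # univariate feature

# Multivariate feature set: early-cycle capacity columns plus the ΔQ feature.
X = np.column_stack([capacity_matrix[:, :10], var_delta_Q])
y = np.log10(lifetimes)                                   # predict log cycle life

# Simple, interpretable statistical-learning model: cross-validated elastic net.
model = ElasticNetCV(cv=5).fit(X, y)
pred = 10 ** model.predict(X)
print("train MAPE:", mean_absolute_percentage_error(lifetimes, pred))
print("coefficients:", model.coef_)                       # weights are directly inspectable
```

The coefficient vector of such a model can be read directly, which is the interpretability argument the abstract makes relative to more advanced machine learning methods.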
Related papers
- Learning Augmentation Policies from A Model Zoo for Time Series Forecasting [58.66211334969299]
We introduce AutoTSAug, a learnable data augmentation method based on reinforcement learning.
By augmenting the marginal samples with a learnable policy, AutoTSAug substantially improves forecasting performance.
arXiv Detail & Related papers (2024-09-10T07:34:19Z)
- Machine Learning for predicting chaotic systems [0.0]
We show that well-tuned simple methods, as well as untuned baseline methods, often outperform state-of-the-art deep learning models.
These findings underscore the importance of matching prediction methods to data characteristics and available computational resources.
arXiv Detail & Related papers (2024-07-29T16:34:47Z)
- Data Shapley in One Training Run [88.59484417202454]
Data Shapley provides a principled framework for attributing data's contribution within machine learning contexts.
Existing approaches require re-training models on different data subsets, which is computationally intensive.
This paper introduces In-Run Data Shapley, which addresses these limitations by offering scalable data attribution for a target model of interest.
arXiv Detail & Related papers (2024-06-16T17:09:24Z)
- Forecasting Lithium-Ion Battery Longevity with Limited Data Availability: Benchmarking Different Machine Learning Algorithms [3.4628430044380973]
This work aims to compare the relative performance of different machine learning algorithms, both traditional machine learning and deep learning.
We investigated 14 different machine learning models that were fed handcrafted features based on statistical data.
Deep learning models were observed to perform particularly poorly on raw, limited data.
arXiv Detail & Related papers (2023-12-10T00:51:50Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- EAMDrift: An interpretable self retrain model for time series [0.0]
We present EAMDrift, a novel method that combines forecasts from multiple individual predictors by weighting each prediction according to a performance metric.
EAMDrift is designed to automatically adapt to out-of-distribution patterns in data and identify the most appropriate models to use at each moment.
Our study on real-world datasets shows that EAMDrift outperforms individual baseline models by 20% and achieves comparable accuracy results to non-interpretable ensemble models.
arXiv Detail & Related papers (2023-05-31T13:25:26Z)
- Enhanced Gaussian Process Dynamical Models with Knowledge Transfer for Long-term Battery Degradation Forecasting [0.9208007322096533]
Predicting the end-of-life or remaining useful life of batteries in electric vehicles is a critical and challenging problem.
A number of algorithms have incorporated features that are available from data collected by battery management systems.
We develop a highly accurate method that can overcome this limitation.
arXiv Detail & Related papers (2022-12-03T12:59:51Z)
- On Measuring the Intrinsic Few-Shot Hardness of Datasets [49.37562545777455]
We show that few-shot hardness may be intrinsic to datasets, for a given pre-trained model.
We propose a simple and lightweight metric called "Spread" that captures the intuition of what makes few-shot learning possible.
Our metric better accounts for few-shot hardness compared to existing notions of hardness, and is 8-100x faster to compute.
arXiv Detail & Related papers (2022-11-16T18:53:52Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- BERT WEAVER: Using WEight AVERaging to enable lifelong learning for transformer-based models in biomedical semantic search engines [49.75878234192369]
We present WEAVER, a simple, yet efficient post-processing method that infuses old knowledge into the new model.
We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once.
arXiv Detail & Related papers (2022-02-21T10:34:41Z)
- Battery Model Calibration with Deep Reinforcement Learning [5.004835203025507]
We implement a Reinforcement Learning-based framework for reliably and efficiently inferring calibration parameters of battery models.
The framework enables real-time inference of the computational model parameters in order to compensate for the reality gap relative to the observations.
arXiv Detail & Related papers (2020-12-07T19:26:08Z)