Statistical learning for accurate and interpretable battery lifetime
prediction
- URL: http://arxiv.org/abs/2101.01885v1
- Date: Wed, 6 Jan 2021 06:05:24 GMT
- Title: Statistical learning for accurate and interpretable battery lifetime
prediction
- Authors: Peter M. Attia, Kristen A. Severson, Jeremy D. Witmer
- Abstract summary: We develop simple, accurate, and interpretable data-driven models for battery lifetime prediction.
Our approaches can be used both to quickly train models for a new dataset and to benchmark the performance of more advanced machine learning methods.
- Score: 1.738360170201861
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data-driven methods for battery lifetime prediction are attracting increasing
attention for applications in which the degradation mechanisms are poorly
understood and suitable training sets are available. However, while advanced
machine learning and deep learning methods offer high performance with minimal
feature engineering, simpler "statistical learning" methods often achieve
comparable performance, especially for small training sets, while also
providing physical and statistical interpretability. In this work, we use a
previously published dataset to develop simple, accurate, and interpretable
data-driven models for battery lifetime prediction. We first present the
"capacity matrix" concept as a compact representation of battery
electrochemical cycling data, along with a series of feature representations.
We then create a number of univariate and multivariate models, many of which
achieve comparable performance to the highest-performing models previously
published for this dataset. These models also provide insights into the
degradation of these cells. Our approaches can be used both to quickly train
models for a new dataset and to benchmark the performance of more advanced
machine learning methods.
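The recipe the abstract describes (a compact capacity-based feature plus a simple, interpretable regression) can be sketched on synthetic data. Everything below is an illustrative assumption, not the paper's actual pipeline: the log-variance feature is merely inspired by features commonly used with this dataset, and the synthetic capacity differences, cell count, and least-squares fit are stand-ins.

```python
import numpy as np

# Synthetic stand-in for a "capacity matrix" feature: for each cell,
# the per-voltage difference between two capacity curves (e.g. cycle
# 100 minus cycle 10). All sizes, names, and values are illustrative.
rng = np.random.default_rng(0)
n_cells, n_points = 40, 100
true_log_life = rng.uniform(2.5, 3.2, n_cells)   # log10(cycle life)

# Assume faster-fading cells show a wider spread in the difference curve.
spread = 10.0 ** (1.0 - true_log_life)
dQ = spread[:, None] * rng.standard_normal((n_cells, n_points))

# Univariate feature: log10 of the variance of dQ across voltage points.
x = np.log10(dQ.var(axis=1))

# Simple, interpretable model: ordinary least squares on one feature.
A = np.column_stack([np.ones(n_cells), x])
coef, *_ = np.linalg.lstsq(A, true_log_life, rcond=None)
rmse = np.sqrt(np.mean((A @ coef - true_log_life) ** 2))
print(f"slope={coef[1]:.3f}, RMSE(log10 cycles)={rmse:.3f}")
```

By construction the variance scales as the square of the spread, so the fitted slope lands near -0.5; the point is that a one-feature linear model can already be predictive while remaining directly inspectable.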
Related papers
- Meta-Statistical Learning: Supervised Learning of Statistical Inference [59.463430294611626]
This work demonstrates that the tools and principles driving the success of large language models (LLMs) can be repurposed to tackle distribution-level tasks.
We propose meta-statistical learning, a framework inspired by multi-instance learning that reformulates statistical inference tasks as supervised learning problems.
arXiv Detail & Related papers (2025-02-17T18:04:39Z)
- Data-driven tool wear prediction in milling, based on a process-integrated single-sensor approach [1.6574413179773764]
This study explores data-driven methods, in particular deep learning, for tool wear prediction.
The study evaluates several machine learning models, including convolutional neural networks (CNN), long short-term memory networks (LSTM), support vector machines (SVM) and decision trees.
The ConvNeXt model achieves exceptional performance, reaching 99.1% accuracy in identifying tool wear using data from only four milling tools operated until they are worn.
arXiv Detail & Related papers (2024-12-27T23:10:32Z)
- Generative Modeling and Data Augmentation for Power System Production Simulation [0.0]
This paper proposes a generative model-assisted approach for load forecasting under small sample scenarios.
The expanded dataset significantly reduces forecasting errors compared to the original dataset.
The diffusion model outperforms the generative adversarial model, achieving errors roughly 200 times smaller.
arXiv Detail & Related papers (2024-12-10T12:38:47Z)
- Data-driven development of cycle prediction models for lithium metal batteries using multimodal mining [1.2748196295556375]
We introduce a novel multimodal data-driven approach employing an Automatic Battery data Collector (ABC).
This platform enables state-of-the-art accurate extraction of battery material data and cyclability performance metrics.
From the database derived through the ABC platform, we developed machine learning models that can accurately predict the capacity and stability of lithium metal batteries.
arXiv Detail & Related papers (2024-11-26T17:37:12Z)
- Data Shapley in One Training Run [88.59484417202454]
Data Shapley provides a principled framework for attributing data's contribution within machine learning contexts.
Existing approaches require re-training models on different data subsets, which is computationally intensive.
This paper introduces In-Run Data Shapley, which addresses these limitations by offering scalable data attribution for a target model of interest.
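The retraining-based approach that In-Run Data Shapley improves on can be illustrated with classic permutation-sampling ("Monte Carlo") Data Shapley. The toy nearest-neighbor utility, data, and permutation count below are all hypothetical; the point is that every marginal-contribution estimate requires re-evaluating a model on a different data subset.

```python
import numpy as np

def utility(idx, X, y, X_val, y_val):
    """Toy utility: 1-nearest-neighbor accuracy using only training subset idx."""
    if len(idx) == 0:
        return 0.0
    preds = [y[idx[np.argmin(np.linalg.norm(X[idx] - xv, axis=1))]] for xv in X_val]
    return float(np.mean(np.array(preds) == y_val))

def monte_carlo_shapley(X, y, X_val, y_val, n_perm=200, seed=0):
    """Permutation-sampling Data Shapley: average each point's marginal
    utility gain over random orderings. Note the repeated subset
    re-evaluation -- the cost that In-Run Data Shapley avoids."""
    rng = np.random.default_rng(seed)
    n = len(X)
    values = np.zeros(n)
    for _ in range(n_perm):
        perm = rng.permutation(n)
        prev = 0.0
        for k in range(n):
            cur = utility(perm[:k + 1], X, y, X_val, y_val)
            values[perm[k]] += cur - prev
            prev = cur
    return values / n_perm

# Two correctly labeled points and one mislabeled point: the mislabeled
# point should receive a clearly lower Shapley value than the clean ones.
X = np.array([[0.0], [1.0], [0.08]]); y = np.array([0, 1, 1])  # last label wrong
X_val = np.array([[0.05], [0.95]]);   y_val = np.array([0, 1])
vals = monte_carlo_shapley(X, y, X_val, y_val)
print(vals)
```

Because each permutation's marginal gains telescope, the estimated values always sum exactly to the full-dataset utility, a convenient sanity check.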
arXiv Detail & Related papers (2024-06-16T17:09:24Z)
- Forecasting Lithium-Ion Battery Longevity with Limited Data Availability: Benchmarking Different Machine Learning Algorithms [3.4628430044380973]
This work aims to compare the relative performance of different machine learning algorithms, spanning both traditional machine learning and deep learning.
We investigated 14 different machine learning models that were fed handcrafted features based on statistical data.
Deep learning models were observed to perform particularly poorly on raw, limited data.
arXiv Detail & Related papers (2023-12-10T00:51:50Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [65.57123249246358]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- On Measuring the Intrinsic Few-Shot Hardness of Datasets [49.37562545777455]
We show that few-shot hardness may be intrinsic to datasets, for a given pre-trained model.
We propose a simple and lightweight metric called "Spread" that captures the intuition behind what makes few-shot learning possible.
Our metric better accounts for few-shot hardness compared to existing notions of hardness, and is 8-100x faster to compute.
arXiv Detail & Related papers (2022-11-16T18:53:52Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
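The column-wise iterative scheme behind this framework can be sketched with a single fixed learner. HyperImpute's contribution is automatically selecting a model per column; the plain linear regression standing in for that choice here is an assumption for illustration only.

```python
import numpy as np

def iterative_impute(X, n_iter=10):
    """Column-wise iterative imputation sketch: missing entries in each
    column are repeatedly re-predicted by linear regression on the
    other columns (a stand-in for per-column automatic model selection)."""
    X = X.astype(float).copy()
    miss = np.isnan(X)
    # Initialize missing entries with column means.
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            # Regress column j on all other (currently imputed) columns.
            A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
            coef, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ coef
    return X

# Toy data where column 1 is exactly 2 * column 0.
X = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, np.nan], [4.0, 8.0]])
out = iterative_impute(X)
print(out)  # the NaN converges to ~6.0
```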
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- BERT WEAVER: Using WEight AVERaging to enable lifelong learning for transformer-based models in biomedical semantic search engines [49.75878234192369]
We present WEAVER, a simple, yet efficient post-processing method that infuses old knowledge into the new model.
We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once.
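The post-processing idea, infusing old knowledge by averaging weights across sequentially trained models, can be sketched generically. The equal 50/50 blend and dict-of-arrays parameter format below are assumptions; WEAVER's actual weighting scheme may differ.

```python
import numpy as np

def weight_average(old_params, new_params, alpha=0.5):
    """Blend two checkpoints parameter-wise: alpha*old + (1-alpha)*new.

    Generic post-hoc weight averaging over dicts of numpy arrays;
    both checkpoints must share the same parameter names and shapes.
    """
    return {name: alpha * old_params[name] + (1 - alpha) * new_params[name]
            for name in old_params}

# Toy example: two "checkpoints" of the same tiny model.
old = {"emb": np.zeros((3, 2)), "out": np.array([1.0, 3.0])}
new = {"emb": np.ones((3, 2)),  "out": np.array([3.0, 1.0])}
merged = weight_average(old, new)
print(merged["out"])  # [2. 2.]
```

Applied after each training stage, this keeps a running blend of old and new weights instead of storing and retraining on all past data.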
arXiv Detail & Related papers (2022-02-21T10:34:41Z)
- Battery Model Calibration with Deep Reinforcement Learning [5.004835203025507]
We implement a Reinforcement Learning-based framework for reliably and efficiently inferring calibration parameters of battery models.
The framework enables real-time inference of the computational model parameters in order to compensate for the reality gap relative to the observations.
arXiv Detail & Related papers (2020-12-07T19:26:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.