Robust Data-Driven Error Compensation for a Battery Model
- URL: http://arxiv.org/abs/2012.15686v1
- Date: Thu, 31 Dec 2020 16:11:36 GMT
- Title: Robust Data-Driven Error Compensation for a Battery Model
- Authors: Philipp Gesner, Frank Kirschbaum, Richard Jakobi, Bernard Bäker
- Abstract summary: Today's massively collected battery data is not yet used for more accurate and reliable simulations.
A data-driven error model is introduced, enhancing an existing physically motivated model.
A neural network compensates the existing dynamic error and is further limited based on a description of the underlying data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: (This work has been submitted to IFAC for possible publication.) Models of
traction batteries are an essential tool throughout the development of
automotive drivetrains. Surprisingly, today's massively collected battery data
is not yet used for more accurate and reliable simulations. Primarily, the
non-uniform excitation during regular battery operation prevents a consistent
utilization of such measurements. Hence, there is a need for methods that
enable robust models based on large datasets. For that reason, a data-driven
error model is introduced, enhancing an existing physically motivated model. A
neural network compensates the existing dynamic error and is further limited
based on a description of the underlying data. This paper seeks to verify the
effectiveness and robustness of the general setup and additionally evaluates a
one-class support vector machine as the proposed model of the training data
distribution. Based on five datasets, it is shown that gradually limiting the
data-driven error compensation outside the data boundary leads to a similar
improvement and an increased overall robustness.
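The core idea above can be sketched in a few lines: a one-class SVM learns a description of the training-data support, and the neural-network error term is gradually faded to zero where that description flags an input as outside the data. This is a minimal illustration, not the authors' implementation; the gating function `compensation_weight`, the `decay` parameter, and the toy stand-ins for the physical model and error network are all assumptions made for the example.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Training inputs (e.g. normalized current/SOC features) cover only a
# limited region of the input space, as in real driving data.
X_train = rng.normal(0.0, 1.0, size=(200, 2))

# A one-class SVM describes the support of the training data distribution.
ocsvm = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(X_train)

def compensation_weight(x, decay=2.0):
    """Fade the data-driven correction toward zero outside the boundary
    (decision_function < 0); keep it fully active inside."""
    score = ocsvm.decision_function(x.reshape(1, -1))[0]
    return 1.0 if score >= 0 else float(np.exp(decay * score))

def compensated_output(x, physical_model, error_net):
    # Physical model output plus a gated neural-network error term.
    return physical_model(x) + compensation_weight(x) * error_net(x)

# Hypothetical stand-ins for the physical battery model and a trained
# error network (names and formulas invented for this sketch).
physical = lambda x: 3.7 + 0.1 * x[0]
error_nn = lambda x: 0.05 * float(np.sin(x).sum())

inside = np.array([0.1, -0.2])   # near the training data
outside = np.array([8.0, 8.0])   # far outside the described region
print(compensation_weight(inside), compensation_weight(outside))
```

Near the training data the correction is applied at full strength, while far outside it the model falls back toward the purely physical prediction, which is the source of the robustness the abstract describes.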
Related papers
- Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data [49.73114504515852]
We show that replacing the original real data by each generation's synthetic data does indeed tend towards model collapse.
We demonstrate that accumulating the successive generations of synthetic data alongside the original real data avoids model collapse.
arXiv Detail & Related papers (2024-04-01T18:31:24Z)
- SubjectDrive: Scaling Generative Data in Autonomous Driving via Subject Control [59.20038082523832]
We present SubjectDrive, the first model proven to scale generative data production in a way that could continuously improve autonomous driving applications.
We develop a novel model equipped with a subject control mechanism, which allows the generative model to leverage diverse external data sources for producing varied and useful data.
arXiv Detail & Related papers (2024-03-28T14:07:13Z)
- Machine Learning Force Fields with Data Cost Aware Training [94.78998399180519]
Machine learning force fields (MLFF) have been proposed to accelerate molecular dynamics (MD) simulation.
Even for the most data-efficient MLFFs, reaching chemical accuracy can require hundreds of frames of force and energy labels.
We propose a multi-stage computational framework -- ASTEROID, which lowers the data cost of MLFFs by leveraging a combination of cheap inaccurate data and expensive accurate data.
arXiv Detail & Related papers (2023-06-05T04:34:54Z)
- On the contribution of pre-trained models to accuracy and utility in modeling distributed energy resources [0.0]
We evaluate the improvement in predictive accuracy due to pre-trained models, both with and without fine-tuning.
We consider the question of fairness: do pre-trained models create equal improvements for heterogeneous agents, and how does this translate to downstream utility?
arXiv Detail & Related papers (2023-02-22T22:29:40Z)
- A Bayesian Generative Adversarial Network (GAN) to Generate Synthetic Time-Series Data, Application in Combined Sewer Flow Prediction [3.3139597764446607]
In machine learning, generative models are a class of methods capable of learning data distribution to generate artificial data.
In this study, we developed a GAN model to generate synthetic time series to balance our limited recorded time series data.
The aim is to predict the flow using precipitation data and examine the impact of data augmentation using synthetic data in model performance.
arXiv Detail & Related papers (2023-01-31T16:12:26Z)
- A Robust Data-driven Process Modeling Applied to Time-series Stochastic Power Flow [2.7356119162292654]
The proposed model is trained on recorded time-series data of voltage phasors and power injections to perform a time-series power flow calculation.
Our simulation results show that the proposed robust model can handle up to 25% of outliers in the training data set.
arXiv Detail & Related papers (2023-01-06T18:55:44Z)
- Your Autoregressive Generative Model Can be Better If You Treat It as an Energy-Based One [83.5162421521224]
We propose a unique method termed E-ARM for training autoregressive generative models.
E-ARM takes advantage of a well-designed energy-based learning objective.
We show that E-ARM can be trained efficiently and is capable of alleviating the exposure bias problem.
arXiv Detail & Related papers (2022-06-26T10:58:41Z)
- In-flight Novelty Detection with Convolutional Neural Networks [0.0]
This paper proposes that system output measurements are prioritised in real-time for the attention of preventative maintenance decision makers.
We present a data-driven system for online detection and prioritisation of anomalous data.
The system is capable of running in real-time on low-power embedded hardware and is currently in deployment on the Rolls-Royce Pearl 15 engine flight trials.
arXiv Detail & Related papers (2021-12-07T15:19:41Z)
- Physics-informed CoKriging model of a redox flow battery [68.8204255655161]
Redox flow batteries (RFBs) offer the capability to store large amounts of energy cheaply and efficiently.
There is a need for fast and accurate models of the charge-discharge curve of an RFB to potentially improve battery capacity and performance.
We develop a multifidelity model for predicting the charge-discharge curve of a RFB.
arXiv Detail & Related papers (2021-06-17T00:49:55Z)
- Churn Reduction via Distillation [54.5952282395487]
We show an equivalence between training with distillation using the base model as the teacher and training with an explicit constraint on the predictive churn.
We then show that distillation performs strongly for low churn training against a number of recent baselines.
arXiv Detail & Related papers (2021-06-04T18:03:31Z)
- Space-Filling Subset Selection for an Electric Battery Model [0.0]
Real driving data on the battery's behavior represent a strongly non-uniform excitation of the system.
The algorithm selects those dynamic data points that fill the input space of the nonlinear model more homogeneously.
It is shown that this reduction of the training data leads to higher model quality than a random subset and to faster training than modeling with all data points.
arXiv Detail & Related papers (2020-12-07T09:12:56Z)
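The space-filling subset selection summarized above can be illustrated with a common greedy farthest-point heuristic: repeatedly add the point farthest from the subset chosen so far, so the subset spreads homogeneously over the input space. This is a generic sketch of the idea, not necessarily the exact algorithm from the cited paper.

```python
import numpy as np

def space_filling_subset(X, k):
    """Greedy farthest-point selection: pick k rows of X that cover the
    input space as homogeneously as possible."""
    selected = [0]  # start from an arbitrary point
    dist = np.linalg.norm(X - X[0], axis=1)  # distance to current subset
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))           # farthest point from the subset
        selected.append(nxt)
        # Update each point's distance to its nearest selected point.
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(selected)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))  # in practice: strongly non-uniform driving data
idx = space_filling_subset(X, 50)
print(idx.shape)
```

Training only on such a subset discards redundant near-duplicate operating points, which is why it can both speed up training and improve model quality relative to a random subset of the same size.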
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.