Space-Filling Subset Selection for an Electric Battery Model
- URL: http://arxiv.org/abs/2012.03541v1
- Date: Mon, 7 Dec 2020 09:12:56 GMT
- Title: Space-Filling Subset Selection for an Electric Battery Model
- Authors: Philipp Gesner, Christian Gletter, Florian Landenberger, Frank
Kirschbaum, Lutz Morawietz, Bernard Bäker
- Abstract summary: Real driving data on the battery's behavior represent a strongly non-uniform excitation of the system.
The algorithm selects the dynamic data points that fill the input space of the nonlinear model more homogeneously.
It is shown that this reduction of the training data leads to higher model quality compared to a random subset, and to faster training compared to modeling with all data points.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic models of the battery performance are an essential tool throughout
the development process of automotive drive trains. The present study
introduces a method that makes a large data set suitable for modeling the
electrical impedance. When obtaining data-driven models, a usual assumption is
that more observations produce better models. However, real driving data on the
battery's behavior represent a strongly non-uniform excitation of the system,
which negatively affects the modeling. For that reason, a subset selection of
the available data was developed. It aims at building accurate nonlinear
autoregressive exogenous (NARX) models more efficiently. The algorithm selects
those dynamic data points that fill the input space of the nonlinear model more
homogeneously. It is shown that this reduction of the training data leads to
higher model quality in comparison to a random subset and to faster training
compared to modeling using all data points.
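The selection idea in the abstract can be illustrated with a generic space-filling heuristic. The sketch below uses greedy maximin (farthest-point) sampling over the model's input space; this is only an illustrative stand-in, as the paper's actual selection criterion is not specified here, and the function name and Euclidean distance metric are assumptions.

```python
import numpy as np

def greedy_maximin_subset(X, k):
    """Greedily pick k rows of X that cover the input space homogeneously.

    A generic space-filling heuristic (farthest-point sampling), used here
    only as an illustration of the idea, not the paper's exact algorithm.
    """
    X = np.asarray(X, dtype=float)
    # Start from the point closest to the data centroid.
    start = int(np.argmin(np.linalg.norm(X - X.mean(axis=0), axis=1)))
    selected = [start]
    # Distance of every point to its nearest already-selected point.
    dist = np.linalg.norm(X - X[start], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))  # farthest from the current subset
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(selected)
```

Because each pick maximizes the distance to the current subset, rarely excited operating points (e.g. high-current transients in driving data) are retained early, while densely clustered near-steady-state points are thinned out.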
Related papers
- When to Trust Your Data: Enhancing Dyna-Style Model-Based Reinforcement Learning With Data Filter [7.886307329450978]
Dyna-style algorithms combine two approaches by using simulated data from an estimated environmental model to accelerate model-free training.
Previous works address this issue by using model ensembles or pretraining the estimated model with data collected from the real environment.
We introduce an out-of-distribution data filter that removes simulated data from the estimated model that significantly diverges from data collected in the real environment.
arXiv Detail & Related papers (2024-10-16T01:49:03Z)
- Heat Death of Generative Models in Closed-Loop Learning [63.83608300361159]
We study the learning dynamics of generative models that are fed back their own produced content in addition to their original training dataset.
We show that, unless a sufficient amount of external data is introduced at each iteration, any non-trivial temperature leads the model to degenerate.
arXiv Detail & Related papers (2024-04-02T21:51:39Z)
- SubjectDrive: Scaling Generative Data in Autonomous Driving via Subject Control [59.20038082523832]
We present SubjectDrive, the first model proven to scale generative data production in a way that could continuously improve autonomous driving applications.
We develop a novel model equipped with a subject control mechanism, which allows the generative model to leverage diverse external data sources for producing varied and useful data.
arXiv Detail & Related papers (2024-03-28T14:07:13Z)
- Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This work proposes an open-source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- Predicting traffic signals on transportation networks using spatio-temporal correlations on graphs [56.48498624951417]
This paper proposes a traffic propagation model that merges multiple heat diffusion kernels into a data-driven prediction model to forecast traffic signals.
We optimize the model parameters using Bayesian inference to minimize the prediction errors and, consequently, determine the mixing ratio of the two approaches.
The proposed model demonstrates prediction accuracy comparable to that of the state-of-the-art deep neural networks with lower computational effort.
arXiv Detail & Related papers (2021-04-27T18:17:42Z)
- Robust Data-Driven Error Compensation for a Battery Model [0.0]
Today's massively collected battery data is not yet used for more accurate and reliable simulations.
A data-driven error model is introduced enhancing an existing physically motivated model.
A neural network compensates the existing dynamic error and is further limited based on a description of the underlying data.
arXiv Detail & Related papers (2020-12-31T16:11:36Z)
- Iterative Semi-parametric Dynamics Model Learning For Autonomous Racing [2.40966076588569]
We develop and apply an iterative learning semi-parametric model, with a neural network, to the task of autonomous racing.
We show that our model can learn more accurately than a purely parametric model and generalize better than a purely non-parametric model.
arXiv Detail & Related papers (2020-11-17T16:24:10Z)
- Reinforcement Learning based dynamic weighing of Ensemble Models for Time Series Forecasting [0.8399688944263843]
It is known that prediction accuracy improves when the models selected for data modelling are distinct (linear/non-linear, static/dynamic) and independent (minimally correlated).
Various approaches suggested in the literature to weigh the ensemble models use a static set of weights.
To address this issue, a Reinforcement Learning (RL) approach is proposed to dynamically assign and update the weights of each model at different time instants.
arXiv Detail & Related papers (2020-08-20T10:40:42Z)
- Experiment data-driven modeling of tokamak discharge in EAST [3.7332349900024013]
A model for tokamak discharge has been developed on a superconducting long-pulse tokamak (EAST).
We exploit the temporal sequence of control signals for a large set of EAST discharges to develop a deep learning model for modeling discharge diagnostic signals.
This first attempt showed promising results for modeling tokamak discharge with a data-driven methodology.
arXiv Detail & Related papers (2020-07-21T01:39:27Z)
- Hybrid modeling: Applications in real-time diagnosis [64.5040763067757]
We outline a novel hybrid modeling approach that combines machine learning inspired models and physics-based models.
We are using such models for real-time diagnosis applications.
arXiv Detail & Related papers (2020-03-04T00:44:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.