Bayesian Uncertainty Quantification with Anchored Ensembles for Robust EV Power Consumption Prediction
- URL: http://arxiv.org/abs/2511.06538v1
- Date: Sun, 09 Nov 2025 20:49:42 GMT
- Title: Bayesian Uncertainty Quantification with Anchored Ensembles for Robust EV Power Consumption Prediction
- Authors: Ghazal Farhani, Taufiq Rahman, Kieran Humphries
- Abstract summary: EV power estimation underpins range prediction and energy management, yet practitioners need both point accuracy and trustworthy uncertainty. We propose an anchored-ensemble Long Short-Term Memory (LSTM) with a Student-t likelihood that jointly captures epistemic and aleatoric uncertainty. Our model attains strong accuracy: RMSE 3.36 +/- 1.10, MAE 2.21 +/- 0.89, R-squared = 0.93 +/- 0.02, explained variance 0.93 +/- 0.02, and delivers well-calibrated uncertainty bands with near-nominal coverage.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Accurate EV power estimation underpins range prediction and energy management, yet practitioners need both point accuracy and trustworthy uncertainty. We propose an anchored-ensemble Long Short-Term Memory (LSTM) with a Student-t likelihood that jointly captures epistemic (model) and aleatoric (data) uncertainty. Anchoring imposes a Gaussian weight prior (MAP training), yielding posterior-like diversity without test-time sampling, while the t-head provides heavy-tailed robustness and closed-form prediction intervals. Using vehicle-kinematic time series (e.g., speed, motor RPM), our model attains strong accuracy: RMSE 3.36 +/- 1.10, MAE 2.21 +/- 0.89, R-squared = 0.93 +/- 0.02, explained variance 0.93 +/- 0.02, and delivers well-calibrated uncertainty bands with near-nominal coverage. Against competitive baselines (Student-t MC dropout; quantile regression with/without anchoring), our method matches or improves log-scores while producing sharper intervals at the same coverage. Crucially for real-time deployment, inference is a single deterministic pass per ensemble member (or a weight-averaged collapse), eliminating Monte Carlo latency. The result is a compact, theoretically grounded estimator that couples accuracy, calibration, and systems efficiency, enabling reliable range estimation and decision-making for production EV energy management.
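The training objective described above can be sketched compactly. This is a minimal NumPy/SciPy illustration, not the authors' implementation: the function names, the penalty weight `tau`, and the scalar parameterization are assumptions. Each ensemble member minimizes a Student-t negative log-likelihood plus a Gaussian-prior penalty that anchors its weights to a fixed random draw (MAP training), and the t-head gives closed-form prediction intervals.

```python
import numpy as np
from scipy.stats import t as student_t

def student_t_nll(y, mu, sigma, nu):
    """Negative log-likelihood of y under a Student-t with location mu,
    scale sigma, and nu degrees of freedom (averaged over samples)."""
    return -np.mean(student_t.logpdf(y, df=nu, loc=mu, scale=sigma))

def anchored_map_loss(y, mu, sigma, nu, weights, anchors, tau=1e-3):
    """MAP objective for one ensemble member: data NLL plus a Gaussian
    weight prior that pulls the weights toward their fixed random anchor."""
    prior = tau * np.sum((weights - anchors) ** 2)
    return student_t_nll(y, mu, sigma, nu) + prior

def t_prediction_interval(mu, sigma, nu, level=0.95):
    """Closed-form central prediction interval from the t-head."""
    half = student_t.ppf(0.5 + level / 2, df=nu) * sigma
    return mu - half, mu + half
```

Because the anchor penalty is evaluated at training time only, inference is a single deterministic forward pass per member, which is the source of the latency advantage over Monte Carlo approaches.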
Related papers
- ZIP-RC: Optimizing Test-Time Compute via Zero-Overhead Joint Reward-Cost Prediction
We present ZIP-RC, an adaptive inference method that equips models with zero-overhead inference-time predictions of reward and cost. ZIP-RC improves accuracy by up to 12% over majority voting at equal or lower average cost.
arXiv Detail & Related papers (2025-12-01T09:44:31Z) - Calibrated Decomposition of Aleatoric and Epistemic Uncertainty in Deep Features for Inference-Time Adaptation
Most estimators collapse all uncertainty modes into a single confidence score, preventing reliable reasoning about when to allocate more compute or adjust inference. We introduce Uncertainty-Guided Inference-Time Selection, a lightweight inference-time framework that disentangles aleatoric (data-driven) and epistemic (model-driven) uncertainty directly in deep feature space.
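The aleatoric/epistemic split referred to here can be illustrated with the standard law-of-total-variance decomposition over an ensemble of probabilistic predictors. This is a generic sketch, not this paper's feature-space method:

```python
import numpy as np

def decompose_uncertainty(member_means, member_vars):
    """Split ensemble predictive variance into epistemic (model-driven)
    spread across members and aleatoric (data-driven) noise averaged
    over members. Inputs have shape (n_members, n_points)."""
    epistemic = np.var(member_means, axis=0)   # disagreement between members
    aleatoric = np.mean(member_vars, axis=0)   # average predicted noise
    return epistemic, aleatoric
```

The total predictive variance is the sum of the two terms, so either component can be monitored separately to decide when more compute or abstention is warranted.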
arXiv Detail & Related papers (2025-11-15T23:47:30Z) - Multi-Task Deep Learning for Surface Metrology
A reproducible deep learning framework is presented for surface metrology to predict surface texture parameters. Uncertainty is modelled via quantile and heteroscedastic heads with post-hoc conformal calibration to yield calibrated intervals.
arXiv Detail & Related papers (2025-10-23T08:38:18Z) - A Conditional Diffusion Model for Probabilistic Prediction of Battery Capacity Degradation
This paper presents a novel method, termed the Conditional Diffusion U-Net with Attention (CDUA), which integrates feature engineering and deep learning to address this challenge. The proposed approach employs a diffusion-based generative model for time-series forecasting and incorporates attention mechanisms to enhance predictive performance. Experimental validation on real-world vehicle data demonstrates that the proposed CDUA model achieves a relative Mean Absolute Error (MAE) of 0.94% and a relative Root Mean Square Error (RMSE) of 1.14%, with a narrow 95% confidence interval of 3.74% in relative width.
arXiv Detail & Related papers (2025-10-20T10:56:28Z) - Robust Fine-tuning of Zero-shot Models via Variance Reduction
When fine-tuning zero-shot models, our desideratum is for the fine-tuned model to excel in both in-distribution (ID) and out-of-distribution (OOD) settings.
We propose a sample-wise ensembling technique that can simultaneously attain the best ID and OOD accuracy without the trade-offs.
arXiv Detail & Related papers (2024-11-11T13:13:39Z) - On the good reliability of an interval-based metric to validate prediction uncertainty for machine learning regression tasks
This study presents an opportunistic approach to a (more) reliable validation method for prediction uncertainty average calibration.
Considering that variance-based calibration metrics are quite sensitive to the presence of heavy tails in the uncertainty and error distributions, a shift is proposed to an interval-based metric, the Prediction Interval Coverage Probability (PICP).
The resulting PICPs are more quickly and reliably tested than variance-based calibration metrics.
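The PICP metric itself is a one-line computation: the fraction of observations falling inside their prediction intervals, compared against the nominal level (e.g. 0.95). A minimal sketch, paired with the usual mean-width companion measure for sharpness (helper names are illustrative):

```python
import numpy as np

def picp(y, lower, upper):
    """Prediction Interval Coverage Probability: fraction of observations
    that fall inside their prediction interval."""
    return np.mean((y >= lower) & (y <= upper))

def mpiw(lower, upper):
    """Mean Prediction Interval Width: sharpness companion to PICP;
    among calibrated models, narrower intervals are preferred."""
    return np.mean(upper - lower)
```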
arXiv Detail & Related papers (2024-08-23T14:16:10Z) - Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise
Quantile regression is a leading approach for obtaining such intervals via the empirical estimation of quantiles in the distribution of outputs. We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile-regression-based interval construction that removes this arbitrary constraint. We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities.
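For context, standard quantile regression fixes the interval endpoints at prescribed quantile levels (e.g. tau = 0.025 and 0.975) and fits each via the pinball loss; RQR relaxes that fixed-level constraint. A sketch of the baseline pinball loss (not RQR itself):

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Pinball (quantile) loss: minimized in expectation when q_pred is
    the tau-quantile of y's distribution. Over-prediction and
    under-prediction are penalized asymmetrically by tau."""
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))
```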
arXiv Detail & Related papers (2024-06-05T13:36:38Z) - Predicting Overtakes in Trucks Using CAN Data
We investigate the detection of truck overtakes from CAN data.
Our analysis covers up to 10 seconds before the overtaking event.
We observe that the prediction scores of the overtake class tend to increase as we approach the overtake trigger.
arXiv Detail & Related papers (2024-04-08T17:58:22Z) - Efficient and Sharp Off-Policy Evaluation in Robust Markov Decision Processes
We study the evaluation of a policy under best- and worst-case perturbations to a Markov decision process (MDP).
We use transition observations from the original MDP, whether they are generated under the same or a different policy.
Our estimator is also amenable to statistical inference using Wald confidence intervals.
arXiv Detail & Related papers (2024-03-29T18:11:49Z) - TIC-TAC: A Framework for Improved Covariance Estimation in Deep Heteroscedastic Regression
Deep heteroscedastic regression involves jointly optimizing the mean and covariance of the predicted distribution using the negative log-likelihood.
Recent works show that this may result in sub-optimal convergence due to the challenges associated with covariance estimation.
We study two questions: (1) Does the predicted covariance truly capture the randomness of the predicted mean?
Our results show that not only does TIC accurately learn the covariance, it additionally facilitates an improved convergence of the negative log-likelihood.
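The objective being optimized in deep heteroscedastic regression is the Gaussian negative log-likelihood with a per-input predicted mean and (log-)variance. The scalar-variance form, with additive constants dropped, is a generic sketch of that loss, not of TIC-TAC itself:

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Heteroscedastic Gaussian NLL (constants dropped): the network
    predicts both mu and log_var per input; the log_var term penalizes
    overconfidence while the scaled residual penalizes underconfidence."""
    return np.mean(0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var)))
```

Predicting `log_var` rather than the variance directly is a common trick to keep the variance positive without constrained optimization; the convergence difficulties discussed in this entry arise because the residual term and the variance term pull the gradients in opposite directions early in training.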
arXiv Detail & Related papers (2023-10-29T09:54:03Z) - Score Matching-based Pseudolikelihood Estimation of Neural Marked Spatio-Temporal Point Process with Uncertainty Quantification
We introduce SMASH: a Score MAtching estimator for learning marked spatio-temporal point processes (STPPs) with uncertainty quantification.
Specifically, our framework adopts a normalization-free objective by estimating the pseudolikelihood of marked STPPs through score matching.
The superior performance of our proposed framework is demonstrated through extensive experiments in both event prediction and uncertainty quantification.
arXiv Detail & Related papers (2023-10-25T02:37:51Z) - Estimation of Accurate and Calibrated Uncertainties in Deterministic models
We devise a method to transform a deterministic prediction into a probabilistic one.
We show that for doing so, one has to compromise between the accuracy and the reliability (calibration) of such a model.
We show several examples both with synthetic data, where the underlying hidden noise can accurately be recovered, and with large real-world datasets.
arXiv Detail & Related papers (2020-03-11T04:02:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.