ProbFM: Probabilistic Time Series Foundation Model with Uncertainty Decomposition
- URL: http://arxiv.org/abs/2601.10591v1
- Date: Thu, 15 Jan 2026 17:02:06 GMT
- Title: ProbFM: Probabilistic Time Series Foundation Model with Uncertainty Decomposition
- Authors: Arundeep Chinta, Lucas Vinh Tran, Jay Katukuri
- Abstract summary: Time Series Foundation Models (TSFMs) have emerged as a promising approach for zero-shot financial forecasting. Current approaches either rely on restrictive distributional assumptions, conflate different sources of uncertainty, or lack principled calibration mechanisms. We present a novel transformer-based probabilistic framework, ProbFM, that leverages Deep Evidential Regression (DER) to provide principled uncertainty quantification.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time Series Foundation Models (TSFMs) have emerged as a promising approach for zero-shot financial forecasting, demonstrating strong transferability and data-efficiency gains. However, their adoption in financial applications is hindered by fundamental limitations in uncertainty quantification: current approaches either rely on restrictive distributional assumptions, conflate different sources of uncertainty, or lack principled calibration mechanisms. While recent TSFMs employ sophisticated techniques such as mixture models, Student's t-distributions, or conformal prediction, they fail to address the core challenge of providing theoretically grounded uncertainty decomposition. We present ProbFM (Probabilistic Foundation Model), a novel transformer-based probabilistic framework that leverages Deep Evidential Regression (DER) to provide principled uncertainty quantification with explicit epistemic-aleatoric decomposition. Unlike existing approaches that pre-specify distributional forms or require sampling-based inference, ProbFM learns optimal uncertainty representations through higher-order evidence learning while maintaining single-pass computational efficiency. To rigorously evaluate the core DER uncertainty quantification approach independent of architectural complexity, we conduct an extensive controlled comparison study using a consistent LSTM architecture across five probabilistic methods: DER, Gaussian NLL, Student's t NLL, Quantile Loss, and Conformal Prediction. Evaluation on cryptocurrency return forecasting demonstrates that DER maintains competitive forecasting accuracy while providing explicit epistemic-aleatoric uncertainty decomposition. This work establishes both an extensible framework for principled uncertainty quantification in foundation models and empirical evidence for DER's effectiveness in financial applications.
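The single-pass epistemic-aleatoric decomposition the abstract attributes to DER comes from the Normal-Inverse-Gamma (NIG) evidential parameterization (Amini et al., 2020): the network emits four parameters per forecast, and both uncertainty components follow in closed form without sampling. A minimal sketch of that decomposition, with illustrative variable names not taken from the paper:

```python
import numpy as np

def evidential_uncertainty(gamma, nu, alpha, beta):
    """Decompose predictive uncertainty under a Normal-Inverse-Gamma
    (NIG) evidential head, as in Deep Evidential Regression.

    gamma : predicted mean of the target
    nu    : virtual observation count for the mean (nu > 0)
    alpha : inverse-gamma shape for the variance (alpha > 1)
    beta  : inverse-gamma scale for the variance (beta > 0)
    """
    aleatoric = beta / (alpha - 1.0)          # E[sigma^2]: irreducible data noise
    epistemic = beta / (nu * (alpha - 1.0))   # Var[mu]: model (knowledge) uncertainty
    return gamma, aleatoric, epistemic

# Example: high evidence (large nu) shrinks epistemic but not aleatoric uncertainty.
mean, alea, epis = evidential_uncertainty(0.02, nu=5.0, alpha=3.0, beta=0.4)
```

Note how the evidence parameter `nu` divides only the epistemic term: as more support accumulates for a region of the input space, model uncertainty vanishes while the data-noise estimate is unchanged.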
Related papers
- Instrumental and Proximal Causal Inference with Gaussian Processes [24.834836610250765]
We propose a framework for uncertainty-aware causal learning. Our formulation recovers popular kernel estimators as the posterior mean, ensuring predictive precision. Empirical results demonstrate strong predictive performance alongside informative EU quantification.
arXiv Detail & Related papers (2026-03-02T18:23:26Z) - Uncertainty in Federated Granger Causality: From Origins to Systemic Consequences [3.122408196953971]
Granger Causality (GC) provides a rigorous framework for learning causal structures from time-series data. Federated GC algorithms only yield deterministic point estimates of causality and neglect uncertainty. This paper establishes the first methodology for rigorously quantifying uncertainty.
arXiv Detail & Related papers (2026-02-13T15:12:18Z) - Uncertainty Quantification for Deep Regression using Contextualised Normalizing Flows [1.8899300124593648]
We introduce MCNF, a novel uncertainty quantification method that produces both prediction intervals and the full conditioned predictive distribution. MCNF operates on top of the underlying trained predictive model; thus, no predictive model retraining is needed. We provide experimental evidence that the MCNF-based uncertainty estimate is well calibrated, is competitive with state-of-the-art uncertainty quantification methods, and provides richer information for downstream decision-making tasks.
arXiv Detail & Related papers (2025-11-30T11:08:40Z) - Bridging the Gap Between Bayesian Deep Learning and Ensemble Weather Forecasts [100.26854618129039]
Weather forecasting is fundamentally challenged by the chaotic nature of the atmosphere. Recent advances in Bayesian Deep Learning (BDL) offer a promising but often disconnected alternative. We bridge these paradigms through a unified hybrid BDL framework for ensemble weather forecasting.
arXiv Detail & Related papers (2025-11-18T07:49:52Z) - Credal Ensemble Distillation for Uncertainty Quantification [12.36665123584814]
We propose credal ensemble distillation (CED), a framework that compresses a deep ensemble into a single model, CREDIT, for classification tasks. CED achieves superior or comparable uncertainty estimation compared to several existing baselines, while substantially reducing inference overhead compared to deep ensembles.
arXiv Detail & Related papers (2025-11-14T14:53:42Z) - The Illusion of Certainty: Uncertainty quantification for LLMs fails under ambiguity [48.899855816199484]
We introduce MAQA* and AmbigQA*, the first ambiguous question-answering (QA) datasets equipped with ground-truth answer distributions. We show that predictive-distribution and ensemble-based estimators are fundamentally limited under ambiguity.
arXiv Detail & Related papers (2025-11-06T14:46:35Z) - RDIT: Residual-based Diffusion Implicit Models for Probabilistic Time Series Forecasting [4.140149411004857]
RDIT is a plug-and-play framework that combines point estimation and residual-based conditional diffusion with a bidirectional Mamba network. We show that RDIT achieves lower CRPS, rapid inference, and improved coverage compared to strong baselines.
arXiv Detail & Related papers (2025-09-02T14:06:29Z) - Generalized Gaussian Temporal Difference Error for Uncertainty-aware Reinforcement Learning [0.19418036471925312]
We introduce a novel framework for generalized Gaussian error modeling in deep reinforcement learning. We improve the estimation and mitigation of data-dependent aleatoric uncertainty. Experiments with policy gradient algorithms demonstrate significant performance gains.
arXiv Detail & Related papers (2024-08-05T08:12:25Z) - Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
arXiv Detail & Related papers (2023-11-15T05:58:35Z) - When Does Confidence-Based Cascade Deferral Suffice? [69.28314307469381]
Cascades are a classical strategy to enable inference cost to vary adaptively across samples.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
Despite being oblivious to the structure of the cascade, confidence-based deferral often works remarkably well in practice.
arXiv Detail & Related papers (2023-07-06T04:13:57Z) - Federated Conformal Predictors for Distributed Uncertainty Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework.
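Setting the federated partial-exchangeability machinery aside, the split conformal recipe underlying such frameworks is simple: compute nonconformity scores on a held-out calibration set, take a finite-sample-corrected quantile, and widen point forecasts by that margin. A minimal single-machine sketch (not the federated variant; the function name is illustrative):

```python
import numpy as np

def split_conformal_interval(cal_pred, cal_true, test_pred, alpha=0.1):
    """Split conformal prediction: turn point forecasts into
    (1 - alpha) prediction intervals using calibration residuals."""
    scores = np.abs(np.asarray(cal_true) - np.asarray(cal_pred))  # nonconformity
    n = len(scores)
    # Finite-sample correction: quantile level ceil((n+1)(1-alpha))/n, capped at 1.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    return test_pred - q, test_pred + q
```

Under exchangeability of calibration and test points, the interval covers the true value with probability at least 1 - alpha; the federated setting relaxes exactly that exchangeability assumption.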
arXiv Detail & Related papers (2023-05-27T19:57:27Z) - Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.