Efficiently Generating Correlated Sample Paths from Multi-step Time Series Foundation Models
- URL: http://arxiv.org/abs/2510.02224v1
- Date: Thu, 02 Oct 2025 17:08:58 GMT
- Title: Efficiently Generating Correlated Sample Paths from Multi-step Time Series Foundation Models
- Authors: Ethan Baron, Boris Oreshkin, Ruijun Ma, Hanyu Zhang, Kari Torkkola, Michael W. Mahoney, Andrew Gordon Wilson, Tatiana Konstantinova
- Abstract summary: We present a copula-based approach to efficiently generate accurate, correlated sample paths from time series foundation models. Our approach generates correlated sample paths orders of magnitude faster than autoregressive sampling.
- Score: 66.60042743462175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many time series applications require access to multi-step forecast trajectories in the form of sample paths. Recently, time series foundation models have leveraged multi-step lookahead predictions to improve the quality and efficiency of multi-step forecasts. However, these models only predict independent marginal distributions for each time step, rather than a full joint predictive distribution. To generate forecast sample paths with realistic correlation structures, one typically resorts to autoregressive sampling, which can be extremely expensive. In this paper, we present a copula-based approach to efficiently generate accurate, correlated sample paths from existing multi-step time series foundation models in one forward pass. Our copula-based approach generates correlated sample paths orders of magnitude faster than autoregressive sampling, and it yields improved sample path quality by mitigating the snowballing error phenomenon.
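The abstract does not include code, but the core copula idea can be sketched compactly. The snippet below is a minimal illustration assuming a Gaussian copula over per-step marginal quantile functions (inverse CDFs) taken from a model's one-pass multi-step forecast; the function names, the AR(1) correlation matrix, and the Gaussian marginals in the usage example are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def sample_paths_gaussian_copula(marginal_ppfs, corr, n_paths, seed=None):
    """Draw correlated forecast sample paths from per-step marginals.

    marginal_ppfs: list of H callables; marginal_ppfs[t](u) maps a
        uniform u in (0, 1) to the forecast value at horizon step t
        (the marginal inverse CDF predicted by the foundation model).
    corr: (H, H) copula correlation matrix.
    """
    rng = np.random.default_rng(seed)
    H = len(marginal_ppfs)
    # 1) Latent Gaussian draws carrying the desired temporal correlation.
    z = rng.multivariate_normal(np.zeros(H), corr, size=n_paths)
    # 2) Map each coordinate to a uniform via the standard normal CDF.
    u = norm.cdf(z)
    # 3) Push uniforms through the per-step marginal quantile functions,
    #    so each step keeps its model-predicted marginal exactly.
    return np.column_stack([marginal_ppfs[t](u[:, t]) for t in range(H)])

# Toy usage: Gaussian marginals and an AR(1)-style correlation (illustrative).
H = 12
ppfs = [lambda u, t=t: norm.ppf(u, loc=0.1 * t, scale=1.0) for t in range(H)]
idx = np.arange(H)
corr = 0.9 ** np.abs(idx[:, None] - idx[None, :])
paths = sample_paths_gaussian_copula(ppfs, corr, n_paths=1000)  # (1000, 12)
```

Because every step's marginal comes directly from the multi-step forecast, the only extra ingredient is the copula correlation; no autoregressive decoding is required, which is the source of the speed-up the abstract describes.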
Related papers
- Accelerated Sequential Flow Matching: A Bayesian Filtering Perspective [16.29333060724397]
We introduce Sequential Flow Matching, a principled framework grounded in Bayesian filtering. By treating streaming inference as learning a probability flow that transports the predictive distribution from one time step to the next, our approach naturally aligns with the structure of Bayesian belief updates. Our method achieves performance competitive with full-step diffusion while requiring only one or very few sampling steps, and therefore samples faster.
arXiv Detail & Related papers (2026-02-05T05:37:14Z)
- Adaptive Conformal Prediction Intervals Over Trajectory Ensembles [50.31074512684758]
Future trajectories play an important role across domains such as autonomous driving, hurricane forecasting, and epidemic modeling. We propose a unified framework based on conformal prediction that transforms sampled trajectories into calibrated prediction intervals with theoretical coverage guarantees; see the sketch after this entry.
arXiv Detail & Related papers (2025-08-18T21:14:07Z)
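The summary above gives no implementation details; as a rough illustration of the split-conformal recipe such frameworks typically build on (not this paper's specific method), here is a sketch of per-horizon-step intervals calibrated from trajectory ensembles. The array layout and the absolute-residual score are assumptions.

```python
import numpy as np

def conformal_trajectory_intervals(cal_pred, cal_true, test_pred, alpha=0.1):
    """Per-horizon-step split-conformal intervals from trajectory ensembles.

    cal_pred:  (n_cal, n_samples, H) sampled trajectories on a calibration set
    cal_true:  (n_cal, H) realized trajectories
    test_pred: (n_test, n_samples, H) sampled trajectories for new inputs
    """
    # Point forecast: ensemble median at each horizon step.
    cal_center = np.median(cal_pred, axis=1)            # (n_cal, H)
    # Nonconformity score: absolute residual per step (assumed choice).
    scores = np.abs(cal_true - cal_center)              # (n_cal, H)
    n = scores.shape[0]
    # Finite-sample-corrected quantile level for 1 - alpha coverage.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, axis=0)              # (H,)
    center = np.median(test_pred, axis=1)               # (n_test, H)
    return center - q, center + q                       # per-step bands
```

Note that this calibrates each horizon step marginally; trajectory-level coverage guarantees of the kind the paper targets require a joint or suitably adjusted score.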
- Inference-Time Scaling of Diffusion Language Models with Particle Gibbs Sampling [70.8832906871441]
We study how to steer generation toward desired rewards without retraining the models. Prior methods typically resample or filter within a single denoising trajectory, optimizing rewards step-by-step without trajectory-level refinement. We introduce particle Gibbs sampling for diffusion language models (PG-DLM), a novel inference-time algorithm enabling trajectory-level refinement while preserving generation perplexity.
arXiv Detail & Related papers (2025-07-11T08:00:47Z)
- Breaking Silos: Adaptive Model Fusion Unlocks Better Time Series Forecasting [64.45587649141842]
Time-series forecasting plays a critical role in many real-world applications. We find that (i) no single model consistently outperforms others across different test samples, and instead (ii) each model excels in specific cases. We introduce TimeFuse, a framework for collective time-series forecasting with sample-level adaptive fusion of heterogeneous models.
arXiv Detail & Related papers (2025-05-24T00:45:07Z)
- Convergence Of Consistency Model With Multistep Sampling Under General Data Assumptions [11.317363635566517]
We study the convergence of consistency models when the self-consistency property holds approximately under the training distribution. Our analysis requires only a mild data assumption and applies to a family of forward processes.
arXiv Detail & Related papers (2025-05-06T05:31:10Z)
- Quantizing Diffusion Models from a Sampling-Aware Perspective [43.95032520555463]
We propose a sampling-aware quantization strategy built around a Mixed-Order Trajectory Alignment technique. Experiments on sparse-step fast sampling across multiple datasets demonstrate that our approach preserves the rapid convergence characteristics of high-speed samplers.
arXiv Detail & Related papers (2025-05-04T20:50:44Z)
- Single-Step Consistent Diffusion Samplers [8.758218443992467]
Existing sampling algorithms typically require many iterative steps to produce high-quality samples. We introduce consistent diffusion samplers, a new class of samplers designed to generate high-fidelity samples in a single step. We show that our approach yields high-fidelity samples using less than 1% of the network evaluations required by traditional diffusion samplers.
arXiv Detail & Related papers (2025-02-11T14:25:52Z)
- One Step Diffusion via Shortcut Models [109.72495454280627]
We introduce shortcut models, a family of generative models that use a single network and training phase to produce high-quality samples. Shortcut models condition the network on the current noise level and also on the desired step size, allowing the model to skip ahead in the generation process; a toy sketch of this conditioning follows this entry. Compared to distillation, shortcut models reduce complexity to a single network and training phase and additionally allow varying step budgets at inference time.
arXiv Detail & Related papers (2024-10-16T13:34:40Z)
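As a toy sketch of the step-size conditioning described above (not the paper's trained model; the `model` closure below is a hypothetical stand-in), a shortcut-style sampler conditions each jump on both the current time and the jump size, so the same network supports any inference-time step budget:

```python
import numpy as np

def shortcut_sample(model, x0, n_steps):
    """Generate by following a shortcut model from t=0 (noise) to t=1 (data).

    model(x, t, dt) predicts the average velocity over the jump [t, t + dt],
    so one network can take big jumps or many small steps.
    """
    x, dt = x0, 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt
        x = x + dt * model(x, t, dt)  # one jump, conditioned on t and dt
    return x

# Stand-in "network": for a straight path from noise to a fixed target, the
# ideal average velocity has this closed form (illustrative, not a real model).
target = np.array([1.0, -2.0, 0.5])
model = lambda x, t, dt: (target - x) / max(1.0 - t, dt)

x0 = np.random.default_rng(0).standard_normal(3)
print(shortcut_sample(model, x0, n_steps=1))    # single-step generation
print(shortcut_sample(model, x0, n_steps=128))  # many-step generation
```

Both calls land on the same target here because the stand-in velocity field is self-consistent across step sizes, which is the property shortcut models are trained to satisfy approximately.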
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios. We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Temporal Latent Auto-Encoder: A Method for Probabilistic Multivariate Time Series Forecasting [4.131842516813833]
We introduce a novel temporal latent auto-encoder method which enables nonlinear factorization of time series.
By imposing a probabilistic latent space model, the decoder is able to capture complex distributions of the input series.
Our model achieves state-of-the-art performance on many popular multivariate datasets, with gains sometimes as high as 50% on several standard metrics.
arXiv Detail & Related papers (2021-01-25T22:29:40Z)
- Diverse Sampling for Normalizing Flow Based Trajectory Forecasting [34.01303881881315]
We propose Diversity Sampling for Flow (DSF) to improve the quality and diversity of trajectory samples from a pre-trained flow model.
DSF is easy to implement, and we show that it offers a simple plug-in improvement for several existing flow-based forecasting models.
arXiv Detail & Related papers (2020-11-30T18:23:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.