On the Constrained Time-Series Generation Problem
- URL: http://arxiv.org/abs/2307.01717v2
- Date: Thu, 14 Sep 2023 20:58:03 GMT
- Title: On the Constrained Time-Series Generation Problem
- Authors: Andrea Coletta, Sriram Gopalakrishnan, Daniel Borrajo, Svitlana Vyetrenko
- Abstract summary: We propose a novel set of methods to tackle the constrained time series generation problem.
We frame the problem using a constrained optimization framework and then propose a set of generative methods, including "GuidedDiffTime".
Our approaches outperform existing work both qualitatively and quantitatively.
- Score: 1.7731793321727365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthetic time series are often used in practical applications to augment the
historical time series dataset for better performance of machine learning
algorithms, amplify the occurrence of rare events, and also create
counterfactual scenarios described by the time series.
Distributional similarity (which we refer to as realism) as well as the
satisfaction of certain numerical constraints are common requirements in
counterfactual time series scenario generation requests. For instance, the US
Federal Reserve publishes synthetic market stress scenarios given by the
constrained time series for financial institutions to assess their performance
in hypothetical recessions. Existing approaches for generating constrained time
series usually penalize training loss to enforce constraints, and reject
non-conforming samples. However, these approaches would require re-training if
we change constraints, and rejection sampling can be computationally expensive,
or impractical for complex constraints. In this paper, we propose a novel set
of methods to tackle the constrained time series generation problem and provide
efficient sampling while ensuring the realism of generated time series. In
particular, we frame the problem using a constrained optimization framework and
then we propose a set of generative methods including "GuidedDiffTime", a
guided diffusion model to generate realistic time series. Empirically, we
evaluate our work on several datasets for financial and energy data, where
incorporating constraints is critical. We show that our approaches outperform
existing work both qualitatively and quantitatively. Most importantly, we show
that our "GuidedDiffTime" model is the only solution where re-training is not
necessary for new constraints, resulting in a significant carbon footprint
reduction, up to 92% w.r.t. existing deep learning methods.
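The guided-diffusion idea behind "GuidedDiffTime" can be illustrated with a toy sketch: during reverse diffusion, each denoised sample is nudged down the gradient of a differentiable constraint penalty, so changing the constraint requires no re-training. Everything below (the toy denoiser, the end-value constraint, the step counts, and the guidance scale) is a hypothetical stand-in, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

T_STEPS = 50       # number of reverse-diffusion steps
LENGTH = 24        # length of the generated series
GUIDE_SCALE = 0.3  # strength of the constraint guidance
TARGET_END = 1.0   # hypothetical constraint: the series must end near this value

def denoise_step(x, t):
    """Toy stand-in for a trained denoiser: damp the sample and add decaying noise."""
    noise_level = t / T_STEPS
    return 0.95 * x + rng.normal(0.0, 0.1 * noise_level, size=x.shape)

def constraint_grad(x):
    """Gradient of the penalty c(x) = (x[-1] - TARGET_END)^2 w.r.t. x."""
    g = np.zeros_like(x)
    g[-1] = 2.0 * (x[-1] - TARGET_END)
    return g

# Reverse diffusion: start from pure noise, denoise, and nudge each
# intermediate sample down the constraint-penalty gradient. Swapping in a
# new constraint only changes constraint_grad -- no model re-training.
x = rng.normal(size=LENGTH)
for t in range(T_STEPS, 0, -1):
    x = denoise_step(x, t)
    x = x - GUIDE_SCALE * constraint_grad(x)

print(f"final value: {x[-1]:.3f}  (target {TARGET_END})")
```

With a real trained denoiser the guidance term would be added at each sampling step in the same way; the key property shown here is that the constraint enters only through its gradient at sampling time.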
Related papers
- Beyond Data Scarcity: A Frequency-Driven Framework for Zero-Shot Forecasting [15.431513584239047]
Time series forecasting is critical in numerous real-world applications.
Traditional forecasting techniques struggle when data is scarce or not available at all.
Recent advancements often leverage large-scale foundation models for such tasks.
arXiv Detail & Related papers (2024-11-24T07:44:39Z)
- Recurrent Neural Goodness-of-Fit Test for Time Series [8.22915954499148]
Time series data are crucial across diverse domains such as finance and healthcare.
Traditional evaluation metrics fall short due to the temporal dependencies and potential high dimensionality of the features.
We propose the REcurrent NeurAL (RENAL) Goodness-of-Fit test, a novel and statistically rigorous framework for evaluating generative time series models.
arXiv Detail & Related papers (2024-10-17T19:32:25Z)
- Constrained Posterior Sampling: Time Series Generation with Hard Constraints [21.48057675740321]
Synthetic time series samples are crucial for stress-testing models and for protecting user privacy.
Existing approaches for generating constrained time series are either not scalable or degrade sample quality.
We introduce Constrained Posterior Sampling (CPS), a diffusion-based sampling algorithm that aims to project the posterior mean estimate into the constraint set after each denoising update.
CPS outperforms state-of-the-art methods in sample quality and similarity to real time series by around 10% and 42%, respectively, on real-world stocks, traffic, and air quality datasets.
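The projection step CPS describes can be sketched with a toy example: after each denoising update, the intermediate estimate is projected (here, simply clipped) onto a hypothetical box constraint set, so the final sample satisfies the hard constraints by construction. The denoiser and all constants below are illustrative stand-ins, not the CPS algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)

STEPS = 40
LENGTH = 12
LOWER, UPPER = -0.5, 0.5  # hypothetical hard box constraints on every value

def project(x):
    """Euclidean projection onto the box constraint set (clipping)."""
    return np.clip(x, LOWER, UPPER)

def denoise_step(x, t):
    """Toy denoiser stand-in: damp the sample and add decaying noise."""
    return 0.9 * x + rng.normal(0.0, 0.2 * t / STEPS, size=x.shape)

x = rng.normal(size=LENGTH)  # start from pure noise
for t in range(STEPS, 0, -1):
    x = denoise_step(x, t)   # posterior-mean-style update
    x = project(x)           # CPS-style projection after each update

assert (x >= LOWER).all() and (x <= UPPER).all()
print("all values lie inside the constraint box")
```

For more complex constraint sets the clip would be replaced by the corresponding Euclidean projection (e.g. a small convex program per step), but the loop structure stays the same.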
arXiv Detail & Related papers (2024-10-16T15:16:04Z)
- An Efficient Rehearsal Scheme for Catastrophic Forgetting Mitigation during Multi-stage Fine-tuning [55.467047686093025]
A common approach to alleviate such forgetting is to rehearse samples from prior tasks during fine-tuning.
We propose a sampling scheme, mix-cd, that prioritizes rehearsal of "collateral damage" samples.
Our approach is computationally efficient, easy to implement, and outperforms several leading continual learning methods in compute-constrained settings.
arXiv Detail & Related papers (2024-02-12T22:32:12Z) - TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models [52.454274602380124]
Diffusion models heavily depend on the time-step $t$ to achieve satisfactory multi-round denoising.
We propose a Temporal Feature Maintenance Quantization (TFMQ) framework building upon a Temporal Information Block.
Powered by the pioneering block design, we devise temporal information aware reconstruction (TIAR) and finite set calibration (FSC) to align the full-precision temporal features.
arXiv Detail & Related papers (2023-11-27T12:59:52Z) - Generative Time Series Forecasting with Diffusion, Denoise, and
Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series are often recorded over short periods, leaving a large gap between the capacity of deep models and the limited, noisy data available.
We propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z) - Time Series Forecasting Models Copy the Past: How to Mitigate [24.397660153755997]
In the presence of noise and uncertainty, neural network models tend to replicate the last observed value of the time series.
We propose a regularization term penalizing the replication of previously seen values.
Our results indicate that the regularization term mitigates to some extent the aforementioned problem and gives rise to more robust models.
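One simple form such a regularizer could take (an illustrative guess, not the paper's exact term) is a bounded penalty added to the usual MSE that grows as predictions approach the last observed value:

```python
import numpy as np

def anti_copy_loss(y_pred, y_true, last_obs, lam=0.5, sigma=0.1):
    """MSE plus a bounded penalty that is largest when the prediction
    simply copies the last observed value of the series.

    lam and sigma are hypothetical hyperparameters: lam weights the
    penalty, sigma controls how quickly it decays with distance.
    """
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    mse = np.mean((y_pred - y_true) ** 2)
    # Penalty in (0, 1]: ~1 when y_pred == last_obs, decaying with distance.
    copy_penalty = np.mean(np.exp(-((y_pred - last_obs) ** 2) / sigma))
    return mse + lam * copy_penalty

# Copying the past is penalized even when its MSE alone looks acceptable.
loss_copy = anti_copy_loss(y_pred=[0.0], y_true=[1.0], last_obs=0.0)
loss_good = anti_copy_loss(y_pred=[1.0], y_true=[1.0], last_obs=0.0)
```

In a training loop this loss would replace plain MSE, pushing the network away from the trivial persistence solution while still rewarding accuracy.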
arXiv Detail & Related papers (2022-07-27T10:39:00Z) - Continuous-Time Modeling of Counterfactual Outcomes Using Neural
Controlled Differential Equations [84.42837346400151]
Estimating counterfactual outcomes over time has the potential to unlock personalized healthcare.
Existing causal inference approaches consider regular, discrete-time intervals between observations and treatment decisions.
We propose a controllable simulation environment based on a model of tumor growth for a range of scenarios.
arXiv Detail & Related papers (2022-06-16T17:15:15Z) - Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z) - Simultaneously Reconciled Quantile Forecasting of Hierarchically Related
Time Series [11.004159006784977]
We propose a flexible nonlinear model that optimizes quantile regression loss coupled with suitable regularization terms to maintain consistency of forecasts across hierarchies.
The theoretical framework introduced herein can be applied to any forecasting model with an underlying differentiable loss function.
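The quantile regression (pinball) loss this framework builds on is standard and can be written compactly; the hierarchy-consistency regularizers themselves are model-specific and omitted here.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss at quantile level q in (0, 1):
    mean over max(q * d, (q - 1) * d) with d = y_true - y_pred."""
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return np.mean(np.maximum(q * diff, (q - 1.0) * diff))

# At q = 0.5 the pinball loss reduces to half the mean absolute error.
loss = pinball_loss([1.0, 2.0, 3.0], [1.0, 1.0, 1.0], q=0.5)
```

Because the loss is differentiable almost everywhere, any regularization term with a differentiable surrogate can be added to it, which is what makes the framework applicable to arbitrary differentiable forecasting models.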
arXiv Detail & Related papers (2021-02-25T00:59:01Z) - MUSBO: Model-based Uncertainty Regularized and Sample Efficient Batch
Optimization for Deployment Constrained Reinforcement Learning [108.79676336281211]
Continuous deployment of new policies for data collection and online learning is either cost-ineffective or impractical.
We propose a new algorithmic learning framework called Model-based Uncertainty regularized and Sample Efficient Batch Optimization.
Our framework discovers novel and high quality samples for each deployment to enable efficient data collection.
arXiv Detail & Related papers (2021-02-23T01:30:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.