Stratify: Unifying Multi-Step Forecasting Strategies
- URL: http://arxiv.org/abs/2412.20510v1
- Date: Sun, 29 Dec 2024 16:06:46 GMT
- Title: Stratify: Unifying Multi-Step Forecasting Strategies
- Authors: Riku Green, Grant Stevens, Zahraa Abdallah, Telmo M. Silva Filho
- Abstract summary: Stratify is a framework that addresses multi-step forecasting by unifying existing strategies.
In over 84% of 1080 experiments, novel strategies in Stratify improved performance compared to all existing ones.
Our results are the most comprehensive benchmarking of known and novel forecasting strategies.
- Abstract: A key aspect of temporal domains is the ability to make predictions multiple time steps into the future, a process known as multi-step forecasting (MSF). At the core of this process is selecting a forecasting strategy; however, with no existing frameworks to map out the space of strategies, practitioners are left with ad-hoc methods for strategy selection. In this work, we propose Stratify, a parameterised framework that addresses multi-step forecasting, unifying existing strategies and introducing novel, improved strategies. We evaluate Stratify on 18 benchmark datasets, five function classes, and short to long forecast horizons (10, 20, 40, 80). In over 84% of 1080 experiments, novel strategies in Stratify improved performance compared to all existing ones. Importantly, we find that no single strategy consistently outperforms others in all task settings, highlighting the need for practitioners to explore the Stratify space and carefully search for and select forecasting strategies based on task-specific requirements. Our results are the most comprehensive benchmarking of known and novel forecasting strategies. We make code available to reproduce our results.
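The strategy space that Stratify parameterises generalises classical MSF strategies such as recursive and direct forecasting. As a point of reference, the sketch below illustrates those two baseline strategies only; it is not the paper's code, and the linear model, lag window, and toy series are illustrative assumptions.

```python
# Minimal sketch (not from the paper): the two classical multi-step
# forecasting strategies that frameworks like Stratify generalise.
# Model choice, lag window, and horizon are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression


def make_windows(series, lags, horizon):
    """Build (lags -> horizon) supervised pairs from a 1-D series."""
    X, Y = [], []
    for t in range(lags, len(series) - horizon + 1):
        X.append(series[t - lags:t])
        Y.append(series[t:t + horizon])
    return np.array(X), np.array(Y)


def recursive_forecast(series, lags, horizon):
    """Recursive strategy: one 1-step model, fed back on its own outputs."""
    X, Y = make_windows(series, lags, 1)
    model = LinearRegression().fit(X, Y.ravel())
    window = list(series[-lags:])
    preds = []
    for _ in range(horizon):
        yhat = model.predict([window[-lags:]])[0]
        preds.append(yhat)
        window.append(yhat)
    return np.array(preds)


def direct_forecast(series, lags, horizon):
    """Direct strategy: a separate model fitted for each forecast step h."""
    preds = []
    for h in range(1, horizon + 1):
        X, Y = make_windows(series, lags, h)
        model = LinearRegression().fit(X, Y[:, h - 1])
        preds.append(model.predict([series[-lags:]])[0])
    return np.array(preds)


if __name__ == "__main__":
    y = np.sin(np.linspace(0, 20, 200))  # toy series
    print(recursive_forecast(y, lags=10, horizon=10))
    print(direct_forecast(y, lags=10, horizon=10))
```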
Related papers
- EPO: Explicit Policy Optimization for Strategic Reasoning in LLMs via Reinforcement Learning [69.55982246413046]
We propose explicit policy optimization (EPO) for strategic reasoning.
EPO provides strategies in open-ended action space and can be plugged into arbitrary LLM agents to motivate goal-directed behavior.
Experiments across social and physical domains demonstrate EPO's ability of long-term goal alignment.
arXiv Detail & Related papers (2025-02-18T03:15:55Z) - Deep Reinforcement Learning for Online Optimal Execution Strategies [49.1574468325115]
This paper tackles the challenge of learning non-Markovian optimal execution strategies in dynamic financial markets.
We introduce a novel actor-critic algorithm based on Deep Deterministic Policy Gradient (DDPG)
We show that our algorithm successfully approximates the optimal execution strategy.
arXiv Detail & Related papers (2024-10-17T12:38:08Z) - Ensembling Portfolio Strategies for Long-Term Investments: A Distribution-Free Preference Framework for Decision-Making and Algorithms [0.0]
This paper investigates the problem of ensembling multiple strategies for sequential portfolios to outperform individual strategies in terms of long-term wealth.
We introduce a novel framework for decision-making in combining strategies, irrespective of market conditions.
We show results in favor of the proposed strategies, albeit with small tradeoffs in their Sharpe ratios.
arXiv Detail & Related papers (2024-06-05T23:08:57Z) - Time-Series Classification for Dynamic Strategies in Multi-Step Forecasting [0.37141182051230903]
Multi-step forecasting (MSF) in time-series is fundamental to almost all temporal domains.
Previous work shows that it is not clear a priori, before evaluating on unseen data, which forecasting strategy is optimal.
We propose Dynamic Strategies (DyStrat) for MSF.
arXiv Detail & Related papers (2024-02-13T11:10:14Z) - StrategyLLM: Large Language Models as Strategy Generators, Executors, Optimizers, and Evaluators for Problem Solving [76.5322280307861]
StrategyLLM allows LLMs to perform inductive reasoning, deriving general strategies from specific task instances, and deductive reasoning, applying these general strategies to particular task examples, for constructing generalizable and consistent few-shot prompts.
Experimental results demonstrate that StrategyLLM outperforms the competitive baseline CoT-SC that requires human-annotated solutions on 13 datasets across 4 challenging tasks without human involvement, including math reasoning (34.2% $\rightarrow$ 38.8%), commonsense reasoning (70.3% $\rightarrow$ 72.5%), algorithmic reasoning (73.7% $\rightarrow$ 85.0%).
arXiv Detail & Related papers (2023-11-15T09:18:09Z) - Risk-reducing design and operations toolkit: 90 strategies for managing risk and uncertainty in decision problems [65.268245109828]
This paper develops a catalog of such strategies and a framework for them.
It argues that they provide an efficient response to decision problems that are seemingly intractable due to high uncertainty.
It then proposes a framework to incorporate them into decision theory using multi-objective optimization.
arXiv Detail & Related papers (2023-09-06T16:14:32Z) - Large-scale Fully-Unsupervised Re-Identification [78.47108158030213]
We propose two strategies to learn from large-scale unlabeled data.
The first strategy performs a local neighborhood sampling to reduce the dataset size in each iteration without violating neighborhood relationships.
A second strategy leverages a novel Re-Ranking technique with a lower upper-bound time complexity, reducing the memory complexity from O(n^2) to O(kn) with k ≪ n.
arXiv Detail & Related papers (2023-07-26T16:19:19Z) - ImitAL: Learned Active Learning Strategy on Synthetic Data [30.595138995552748]
We propose ImitAL, a domain-independent novel query strategy, which encodes AL as a learning-to-rank problem.
We train ImitAL on large-scale simulated AL runs on purely synthetic datasets.
To show that ImitAL was successfully trained, we perform an extensive evaluation comparing our strategy on 13 different datasets.
arXiv Detail & Related papers (2022-08-24T16:17:53Z) - Optimal Strategies of Quantum Metrology with a Strict Hierarchy [3.706744588098214]
We identify the ultimate precision limit of different families of strategies, including the parallel, the sequential, and the indefinite-causal-order strategies.
We provide an efficient algorithm that determines an optimal strategy within the family of strategies under consideration.
arXiv Detail & Related papers (2022-03-18T06:14:56Z) - Portfolio Search and Optimization for General Strategy Game-Playing [58.896302717975445]
We propose a new algorithm for optimization and action-selection based on the Rolling Horizon Evolutionary Algorithm.
For the optimization of the agents' parameters and portfolio sets we study the use of the N-tuple Bandit Evolutionary Algorithm.
An analysis of the agents' performance shows that the proposed algorithm generalizes well to all game-modes and is able to outperform other portfolio methods.
arXiv Detail & Related papers (2021-04-21T09:28:28Z) - Can Global Optimization Strategy Outperform Myopic Strategy for Bayesian Parameter Estimation? [0.0]
This paper provides a discouraging answer based on experimental simulations comparing the performance improvement and burden between global and myopic strategies.
The added horizon in global strategies contributes negligibly to the improvement of optimal global utility beyond the most immediate next steps.
arXiv Detail & Related papers (2020-07-01T10:31:16Z)