ReNF: Rethinking the Design Space of Neural Long-Term Time Series Forecasters
- URL: http://arxiv.org/abs/2509.25914v4
- Date: Wed, 05 Nov 2025 07:17:17 GMT
- Title: ReNF: Rethinking the Design Space of Neural Long-Term Time Series Forecasters
- Authors: Yihang Lu, Xianwei Meng, Enhong Chen
- Abstract summary: We introduce a Multiple Neural Forecasting Theorem that provides a theoretical basis for our approach. We propose Boosted Direct Output (BDO), a novel forecasting strategy that combines the advantages of both Auto-Regressive (AR) and Direct Output (DO) forecasting.
- Score: 48.79331759671512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Forecasters (NFs) are a cornerstone of Long-term Time Series Forecasting (LTSF). However, progress has been hampered by an overemphasis on architectural complexity at the expense of fundamental forecasting principles. In this work, we return to first principles to redesign the LTSF paradigm. We begin by introducing a Multiple Neural Forecasting Theorem that provides a theoretical basis for our approach. We propose Boosted Direct Output (BDO), a novel forecasting strategy that combines the advantages of both Auto-Regressive (AR) and Direct Output (DO) forecasting. In addition, we stabilize the learning process by smoothly tracking the model's parameters. Extensive experiments show that these principled improvements enable a simple MLP to achieve state-of-the-art performance, outperforming recent, more complex models in nearly all cases without any domain-specific design considerations. Finally, we empirically verify our theorem, establishing a dynamic performance bound and identifying promising directions for future research. The code for review is available at: .
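The abstract specifies neither BDO's exact combination rule nor the parameter-tracking scheme. A minimal PyTorch sketch of the ingredients it names, a simple MLP forecaster used in both Direct Output (one-shot) and Auto-Regressive (block-rollout) modes, plus an exponential moving average as one plausible reading of "smoothly tracking the model's parameters", might look like this; all sizes, module names, and the EMA decay below are assumptions, not the paper's actual design:

```python
import torch
import torch.nn as nn

L, H = 96, 192  # look-back and forecast horizon (illustrative sizes)

class MLPForecaster(nn.Module):
    """A simple MLP mapping a look-back window to a forecast block."""
    def __init__(self, lookback, horizon, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(lookback, hidden), nn.ReLU(), nn.Linear(hidden, horizon)
        )

    def forward(self, x):           # x: (batch, lookback)
        return self.net(x)          # -> (batch, horizon)

def direct_output(model, x):
    """DO: emit the full horizon in one shot."""
    return model(x)

def autoregressive(model, x, horizon, step):
    """AR: roll forward in blocks of `step`, re-feeding predictions."""
    outs, ctx = [], x
    while sum(o.shape[-1] for o in outs) < horizon:
        block = model(ctx)[..., :step]
        outs.append(block)
        ctx = torch.cat([ctx[..., step:], block], dim=-1)  # slide the window
    return torch.cat(outs, dim=-1)[..., :horizon]

@torch.no_grad()
def ema_update(ema_model, model, decay=0.999):
    """Smoothly track parameters with an exponential moving average."""
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1 - decay)

model = MLPForecaster(L, H)
ema = MLPForecaster(L, H)
ema.load_state_dict(model.state_dict())

x = torch.randn(8, L)
y_do = direct_output(model, x)                # one-shot forecast
y_ar = autoregressive(model, x, H, step=48)   # block-wise AR rollout
ema_update(ema, model)                        # call once per training step
```

Under this reading, the EMA copy would be the one used at evaluation time, as is common with weight averaging; BDO itself presumably blends the DO and AR predictions in some boosted fashion the abstract does not detail.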
Related papers
- On Multi-Step Theorem Prediction via Non-Parametric Structural Priors [50.16583672681106]
In this work, we explore training-free theorem prediction through the lens of in-context learning (ICL). We propose Theorem Precedence Graphs, which encode temporal dependencies from historical solution traces as directed graphs and impose explicit topological constraints that effectively prune the search space during inference. Experiments on the FormalGeo7k benchmark show that our method achieves 89.29% accuracy, substantially outperforming ICL baselines and matching state-of-the-art supervised models.
arXiv Detail & Related papers (2026-03-05T06:08:50Z)
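The pruning idea described above can be illustrated with a minimal, hypothetical sketch; the graph, theorem names, and subset test below are assumptions, not the paper's actual data structures:

```python
# Minimal sketch of topological pruning over a theorem-precedence DAG.
# All names and the toy graph are illustrative assumptions.

def prune_candidates(candidates, applied, precedence):
    """Keep only theorems whose recorded predecessors were already applied.

    precedence maps each theorem to the set of theorems that historically
    precede it in solution traces.
    """
    return [t for t in candidates if precedence.get(t, set()) <= applied]

precedence = {
    "angle_sum": set(),
    "similar_triangles": {"angle_sum"},
    "pythagoras": {"similar_triangles"},
}

applied = {"angle_sum"}
print(prune_candidates(["similar_triangles", "pythagoras"], applied, precedence))
# -> ['similar_triangles']  (pythagoras is pruned: its predecessor is missing)
```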
- Closing the Loop: A Control-Theoretic Framework for Provably Stable Time Series Forecasting with LLMs [22.486083545585984]
Large Language Models (LLMs) have recently shown exceptional potential in time series forecasting. Existing approaches typically employ a naive autoregressive generation strategy. We propose F-LLM, a novel closed-loop framework.
arXiv Detail & Related papers (2026-02-13T09:35:12Z)
- Why Self-Rewarding Works: Theoretical Guarantees for Iterative Alignment of Language Models [50.248686344277246]
Self-Rewarding Language Models (SRLMs) achieve notable success in iteratively improving alignment without external feedback. This paper provides the first rigorous theoretical guarantees for SRLMs.
arXiv Detail & Related papers (2026-01-30T03:45:43Z)
- Nemotron-Cascade: Scaling Cascaded Reinforcement Learning for General-Purpose Reasoning Models [71.9060068259379]
We propose cascaded domain-wise reinforcement learning to build general-purpose reasoning models. Our 14B model, after RL, outperforms its SFT teacher, DeepSeek-R1-0528, on LiveCodeBench v5/v6 Pro and achieves silver-medal performance in the 2025 International Olympiad in Informatics (IOI).
arXiv Detail & Related papers (2025-12-15T18:02:35Z)
- Why Do Transformers Fail to Forecast Time Series In-Context? [21.43699354236011]
Time series forecasting (TSF) remains a challenging and largely unsolved problem in machine learning. Empirical evidence consistently shows that even powerful Transformers often fail to outperform simpler models. We provide a theoretical analysis of Transformers' limitations for TSF through the lens of In-Context Learning (ICL) theory.
arXiv Detail & Related papers (2025-10-10T18:34:19Z)
- KANMixer: Can KAN Serve as a New Modeling Core for Long-term Time Series Forecasting? [17.96421618979159]
We introduce KANMixer, a concise architecture integrating a multi-scale mixing backbone that fully leverages KAN's adaptive capabilities. We show that KANMixer achieves state-of-the-art performance in 16 out of 28 experiments across seven benchmark datasets.
arXiv Detail & Related papers (2025-08-03T04:03:13Z)
- Walk Before You Run! Concise LLM Reasoning via Reinforcement Learning [10.255235456427037]
We propose a simple yet effective two-stage reinforcement learning framework for achieving concise reasoning in Large Language Models (LLMs). The first stage, using more training steps, aims to incentivize the model's reasoning capabilities via Group Relative Policy Optimization. The second stage, using fewer training steps, explicitly enforces conciseness and improves efficiency via Length-aware Group Relative Policy Optimization.
arXiv Detail & Related papers (2025-05-27T13:29:51Z)
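One way to picture the two-stage schedule is as a reward-shaping switch; the stage boundary, reward terms, and penalty weight below are assumptions for illustration only, and the grouped rollouts and policy updates of actual GRPO training are omitted:

```python
# Hypothetical two-stage reward schedule: stage 1 rewards correctness only;
# stage 2 adds a length penalty to enforce concise reasoning.

def reward(correct: bool, num_tokens: int, step: int,
           stage1_steps: int = 8000, max_tokens: int = 4096,
           lam: float = 0.2) -> float:
    base = 1.0 if correct else 0.0
    if step < stage1_steps:          # stage 1: incentivize reasoning
        return base
    # stage 2: length-aware shaping (illustrative form only)
    return base - lam * min(num_tokens / max_tokens, 1.0)

print(reward(True, 1024, step=1000))   # stage 1 -> 1.0
print(reward(True, 1024, step=9000))   # stage 2 -> 1.0 - 0.2 * 0.25 = 0.95
```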
- LARES: Latent Reasoning for Sequential Recommendation [96.26996622771593]
We present LARES, a novel and scalable LAtent REasoning framework for Sequential recommendation.<n>Our proposed approach employs a recurrent architecture that allows flexible expansion of reasoning depth without increasing parameter complexity.<n>Our framework exhibits seamless compatibility with existing advanced models, further improving their recommendation performance.
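A weight-tied recurrent block is one natural reading of "expanding reasoning depth without increasing parameter complexity"; the sketch below is a minimal illustration under that assumption, not LARES's actual architecture:

```python
import torch
import torch.nn as nn

class RecurrentLatentReasoner(nn.Module):
    """Weight-tied block applied `depth` times: depth grows, parameters don't.
    Illustrative reading of the recurrence, not the paper's actual model."""
    def __init__(self, dim=64):
        super().__init__()
        self.block = nn.Sequential(nn.Linear(dim, dim), nn.GELU())

    def forward(self, h, depth=4):
        for _ in range(depth):       # reasoning depth is a runtime knob
            h = h + self.block(h)    # residual refinement, shared weights
        return h

model = RecurrentLatentReasoner()
h = torch.randn(2, 64)
print(model(h, depth=2).shape, model(h, depth=8).shape)  # same params, deeper
```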
- Force Matching with Relativistic Constraints: A Physics-Inspired Approach to Stable and Efficient Generative Modeling [26.145559807686706]
We introduce Force Matching (ForM), a framework for generative modeling that incorporates special relativistic mechanics. ForM imposes a velocity constraint, ensuring that sample velocities remain bounded within a constant limit. We show that ForM provides a promising pathway toward achieving stable, efficient, and flexible generative processes.
arXiv Detail & Related papers (2025-02-12T06:30:01Z)
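The summary does not give the constraint's exact form; a minimal NumPy sketch of one natural reading, rescaling per-sample velocities so their norm never exceeds a constant c, is shown below (the function name and rescaling rule are assumptions):

```python
import numpy as np

def clamp_velocity(v, c=1.0, eps=1e-12):
    """Rescale each sample's velocity so its Euclidean norm stays within c.

    v: (batch, dim) array of per-sample velocities. This clamping rule is an
    illustrative assumption, not ForM's actual mechanism.
    """
    norms = np.linalg.norm(v, axis=-1, keepdims=True)
    scale = np.minimum(1.0, c / (norms + eps))
    return v * scale

v = np.array([[0.3, 0.4], [3.0, 4.0]])
print(clamp_velocity(v, c=1.0))
# first row unchanged (norm 0.5 < 1); second rescaled to unit norm [0.6, 0.8]
```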
- Provable Guarantees for Generative Behavior Cloning: Bridging Low-Level Stability and High-Level Behavior [51.60683890503293]
We propose a theoretical framework for studying behavior cloning of complex expert demonstrations using generative modeling.
We show that pure supervised cloning can generate trajectories matching the per-time step distribution of arbitrary expert trajectories.
arXiv Detail & Related papers (2023-07-27T04:27:26Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee in model-based RL (MBRL). Our follow-up derived bounds reveal the relationship between model shifts and performance improvement. A further example demonstrates that learning models from a dynamically varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Supporting Optimal Phase Space Reconstructions Using Neural Network Architecture for Time Series Modeling [68.8204255655161]
We propose an artificial neural network with a mechanism to implicitly learn the properties of the phase space. Our approach is competitive with or better than most state-of-the-art strategies.
arXiv Detail & Related papers (2020-06-19T21:04:47Z)