Movement-Prediction-Adjusted Naive Forecast
- URL: http://arxiv.org/abs/2406.14469v9
- Date: Fri, 08 Aug 2025 09:20:46 GMT
- Title: Movement-Prediction-Adjusted Naive Forecast
- Authors: Cheng Zhang
- Abstract summary: The movement-prediction-adjusted naive forecast (MPANF) is designed to improve point forecasts beyond the naive baseline. MPANF can serve as an effective second-stage method when reliable movement predictions are available.
- Score: 6.935130578959931
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In financial time series forecasting, surpassing the naive forecast is challenging due to the randomness in the data. To address this challenge, this study proposes a novel forecast combination method, the movement-prediction-adjusted naive forecast (MPANF), which is designed to improve point forecasts beyond the naive baseline. Specifically, MPANF integrates two forecasting components: a naive forecast and a movement prediction. The final forecast is generated by adjusting the naive forecast with a movement prediction term, the weight of which is the product of two in-sample quantities: one is a coefficient determined from the movement prediction accuracy and the other is the mean absolute increment. The performance of MPANF was evaluated on eight financial time series via standard metrics, including the RMSE, MAE, MAPE, and sMAPE. Under modest movement prediction accuracy slightly above 0.55, MPANF generally outperforms common benchmarks such as the naive forecast, naive forecast with drift, integrated moving average of order (1,1) (IMA(1,1)), and linear regression. These findings suggest that MPANF can serve as an effective second-stage method when reliable movement predictions are available.
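The abstract describes the MPANF adjustment concretely: the final forecast is the naive forecast plus a movement-prediction term weighted by the product of an accuracy-based coefficient and the in-sample mean absolute increment. The following is a minimal illustrative sketch of that structure, not the paper's exact method: the coefficient formula `2*accuracy - 1` and the function name `mpanf_forecast` are assumptions chosen so that the adjustment vanishes at coin-flip accuracy; the paper derives its own in-sample coefficient.

```python
import numpy as np

def mpanf_forecast(y, movement, accuracy):
    """Illustrative MPANF-style point forecast (a sketch, not the paper's exact formula).

    y         : in-sample series (1-D array); y[-1] is the naive forecast
    movement  : predicted direction of the next move (+1 up, -1 down)
    accuracy  : in-sample movement-prediction accuracy (e.g. slightly above 0.55)
    """
    # In-sample mean absolute increment of the series.
    mean_abs_inc = np.mean(np.abs(np.diff(y)))
    # Assumed accuracy-based coefficient: zero at coin-flip accuracy (0.5),
    # growing linearly with accuracy. The paper determines this coefficient
    # from the movement prediction accuracy in its own way.
    coef = 2.0 * accuracy - 1.0
    # Adjust the naive forecast y[-1] by the weighted movement prediction.
    return y[-1] + coef * mean_abs_inc * movement
```

For example, with `y = [1, 2, 3, 4]` (mean absolute increment 1), accuracy 0.75, and a predicted upward movement, the sketch adds 0.5 × 1 to the naive forecast of 4, giving 4.5; at accuracy 0.5 the adjustment is zero and the forecast collapses to the naive one.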
Related papers
- Forecasting the U.S. Treasury Yield Curve: A Distributionally Robust Machine Learning Approach [0.12891210250935145]
We study U.S. Treasury yield curve forecasting under distributional uncertainty. Rather than minimizing average forecast error, the forecaster selects a decision rule that minimizes worst-case expected loss. We propose a distributionally robust ensemble forecasting framework that integrates factor models with high-dimensional nonparametric machine learning models.
arXiv Detail & Related papers (2026-01-08T05:26:43Z) - Distribution-informed Online Conformal Prediction [53.674678995825666]
We propose Conformal Optimistic Prediction (COP), an online conformal prediction algorithm that incorporates the underlying data pattern into the update rule. COP produces tighter prediction sets when a predictable pattern exists, while retaining valid coverage guarantees even when estimates are inaccurate. We prove that COP can achieve valid coverage and construct shorter prediction intervals than other baselines.
arXiv Detail & Related papers (2025-12-08T17:51:49Z) - Conditional Forecasts and Proper Scoring Rules for Reliable and Accurate Performative Predictions [1.1087735229999816]
We show that conditioning forecasts on covariables that separate them from the outcome renders the target distribution forecast-invariant. We identify two solutions: (i) in decision-theoretic settings, elicitation of correct and incentive-compatible forecasts is possible if forecasts are separating; (ii) scoring with unbiased estimates of the divergence between the forecast and the induced distribution of the target variable yields correct forecasts. Our results expose fundamental limits of classical forecast evaluation and offer new tools for reliable and accurate forecasting in performative settings.
arXiv Detail & Related papers (2025-10-24T10:59:21Z) - Time Series Forecastability Measures [4.136441456697068]
This paper proposes using two metrics to quantify the forecastability of time series prior to model development. The spectral predictability score evaluates the strength and regularity of frequency components in the time series. The Lyapunov exponents quantify the chaos and stability of the system generating the data.
arXiv Detail & Related papers (2025-07-17T22:23:51Z) - Optimal Conformal Prediction under Epistemic Uncertainty [61.46247583794497]
Conformal prediction (CP) is a popular framework for representing uncertainty. We introduce Bernoulli prediction sets (BPS), which produce the smallest prediction sets that ensure conditional coverage. When given first-order predictions, BPS reduces to the well-known adaptive prediction sets (APS).
arXiv Detail & Related papers (2025-05-25T08:32:44Z) - A Hype-Adjusted Probability Measure for NLP Stock Return Forecasting [6.658767709779308]
This article introduces a Hype-Adjusted Probability Measure in the context of a new Natural Language Processing (NLP) approach for stock return and volatility forecasting. A novel sentiment score equation is proposed to represent the impact of intraday news on forecasting next-period stock return and volatility for selected U.S. semiconductor tickers.
arXiv Detail & Related papers (2024-12-10T15:23:31Z) - Optimal starting point for time series forecasting [1.9937737230710553]
We introduce a novel approach called Optimal Starting Point Time Series Forecast (OSP-TSP) for optimal forecasting.
The proposed approach can determine the optimal starting point (OSP) of the time series and then enhance the prediction performances of the base forecasting models.
Empirical results indicate that predictions based on the OSP-TSP approach consistently outperform those using the complete time series dataset.
arXiv Detail & Related papers (2024-09-25T11:51:00Z) - Forecast-PEFT: Parameter-Efficient Fine-Tuning for Pre-trained Motion Forecasting Models [68.23649978697027]
Forecast-PEFT is a fine-tuning strategy that freezes the majority of the model's parameters, focusing adjustments on newly introduced prompts and adapters.
Our experiments show that Forecast-PEFT outperforms traditional full fine-tuning methods in motion prediction tasks.
Forecast-FT further improves prediction performance, evidencing up to a 9.6% enhancement over conventional baseline methods.
arXiv Detail & Related papers (2024-07-28T19:18:59Z) - Physics-guided Active Sample Reweighting for Urban Flow Prediction [75.24539704456791]
Urban flow prediction is a spatio-temporal modeling task that estimates the throughput of transportation services such as buses, taxis, and ride-hailing services.
Some recent prediction solutions bring remedies with the notion of physics-guided machine learning (PGML).
We develop a physics-guided network (PN) and propose a data-aware framework, Physics-guided Active Sample Reweighting (P-GASR).
arXiv Detail & Related papers (2024-07-18T15:44:23Z) - AMP: Autoregressive Motion Prediction Revisited with Next Token Prediction for Autonomous Driving [59.94343412438211]
We introduce GPT-style next-token prediction into motion prediction.
Different from language data, which is composed of homogeneous units (words), the elements in the driving scene can have complex spatial-temporal and semantic relations.
We propose to adopt three factorized attention modules with different neighbors for information aggregation and different position encoding styles to capture their relations.
arXiv Detail & Related papers (2024-03-20T06:22:37Z) - Towards Generalizable and Interpretable Motion Prediction: A Deep Variational Bayes Approach [54.429396802848224]
This paper proposes an interpretable generative model for motion prediction with robust generalizability to out-of-distribution cases.
For interpretability, the model achieves the target-driven motion prediction by estimating the spatial distribution of long-term destinations.
Experiments on motion prediction datasets validate that the fitted model can be interpretable and generalizable.
arXiv Detail & Related papers (2024-03-10T04:16:04Z) - Efficient Normalized Conformal Prediction and Uncertainty Quantification for Anti-Cancer Drug Sensitivity Prediction with Deep Regression Forests [0.0]
Conformal Prediction has emerged as a promising method to pair machine learning models with prediction intervals.
We propose a method to estimate the uncertainty of each sample by calculating the variance obtained from a Deep Regression Forest.
arXiv Detail & Related papers (2024-02-21T19:09:53Z) - ExtremeCast: Boosting Extreme Value Prediction for Global Weather Forecast [57.6987191099507]
We introduce Exloss, a novel loss function that performs asymmetric optimization and highlights extreme values to obtain accurate extreme weather forecast.
We also introduce ExBooster, which captures the uncertainty in prediction outcomes by employing multiple random samples.
Our solution can achieve state-of-the-art performance in extreme weather prediction, while maintaining the overall forecast accuracy comparable to the top medium-range forecast models.
arXiv Detail & Related papers (2024-02-02T10:34:13Z) - SMURF-THP: Score Matching-based UnceRtainty quantiFication for Transformer Hawkes Process [76.98721879039559]
We propose SMURF-THP, a score-based method for learning Transformer Hawkes process and quantifying prediction uncertainty.
Specifically, SMURF-THP learns the score function of events' arrival time based on a score-matching objective.
We conduct extensive experiments in both event type prediction and uncertainty quantification of arrival time.
arXiv Detail & Related papers (2023-10-25T03:33:45Z) - Performative Time-Series Forecasting [64.03865043422597]
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective. We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts. We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
arXiv Detail & Related papers (2023-10-09T18:34:29Z) - Forecast Hedging and Calibration [8.858351266850544]
We develop the concept of forecast hedging, which consists of choosing the forecasts so as to guarantee that the expected track record can only improve.
This yields all the calibration results by the same simple argument while differentiating between them by the forecast-hedging tools used.
Additional contributions are an improved definition of continuous calibration, ensuing game dynamics that yield Nash equilibria in the long run, and a new forecasting procedure for binary events that is simpler than all known such procedures.
arXiv Detail & Related papers (2022-10-13T16:48:25Z) - Conformal Prediction Bands for Two-Dimensional Functional Time Series [0.0]
Time evolving surfaces can be modeled as two-dimensional Functional time series, exploiting the tools of Functional data analysis.
The main focus revolves around Conformal Prediction, a versatile non-parametric paradigm used to quantify uncertainty in prediction problems.
A probabilistic forecasting scheme for two-dimensional functional time series is presented, while providing an extension of Functional Autoregressive Processes of order one to this setting.
arXiv Detail & Related papers (2022-07-27T17:23:14Z) - SwinVRNN: A Data-Driven Ensemble Forecasting Model via Learned Distribution Perturbation [16.540748935603723]
We propose a Swin Transformer-based Variational Recurrent Neural Network (SwinVRNN), which is a weather forecasting model combining a SwinRNN predictor with a perturbation module.
SwinVRNN surpasses operational ECMWF Integrated Forecasting System (IFS) on surface variables of 2-m temperature and 6-hourly total precipitation at all lead times up to five days.
arXiv Detail & Related papers (2022-05-26T05:11:58Z) - Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify uncertainty during forecasting using Bayesian approximation, which deterministic approaches fail to capture.
The effect of dropout weights and long-term prediction on future state uncertainty has been studied.
arXiv Detail & Related papers (2022-05-04T04:23:38Z) - Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully/semi/weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z) - CovarianceNet: Conditional Generative Model for Correct Covariance Prediction in Human Motion Prediction [71.31516599226606]
We present a new method to correctly predict the uncertainty associated with the predicted distribution of future trajectories.
Our approach, CovarianceNet, is based on a Conditional Generative Model with Gaussian latent variables.
arXiv Detail & Related papers (2021-09-07T09:38:24Z) - Uncertainty-Aware Time-to-Event Prediction using Deep Kernel Accelerated Failure Time Models [11.171712535005357]
We propose Deep Kernel Accelerated Failure Time models for the time-to-event prediction task.
Our model shows better point estimate performance than recurrent neural network based baselines in experiments on two real-world datasets.
arXiv Detail & Related papers (2021-07-26T14:55:02Z) - When in Doubt: Neural Non-Parametric Uncertainty Quantification for Epidemic Forecasting [70.54920804222031]
Most existing forecasting models disregard uncertainty quantification, resulting in mis-calibrated predictions.
Recent works in deep neural models for uncertainty-aware time-series forecasting also have several limitations.
We model the forecasting task as a probabilistic generative process and propose a functional neural process model called EPIFNP.
arXiv Detail & Related papers (2021-06-07T18:31:47Z) - Probabilistic Gradient Boosting Machines for Large-Scale Probabilistic Regression [51.770998056563094]
Probabilistic Gradient Boosting Machines (PGBM) is a method to create probabilistic predictions with a single ensemble of decision trees.
We empirically demonstrate the advantages of PGBM compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-03T08:32:13Z) - Beyond Point Estimate: Inferring Ensemble Prediction Variation from Neuron Activation Strength in Recommender Systems [21.392694985689083]
The ensemble method is a state-of-the-art benchmark for prediction uncertainty estimation.
We observe that prediction variations come from various randomness sources.
We propose to infer prediction variation from neuron activation strength and demonstrate the strong prediction power from activation strength features.
arXiv Detail & Related papers (2020-08-17T00:08:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its contents (including all information) and is not responsible for any consequences.