Macroeconomic Data Transformations Matter
- URL: http://arxiv.org/abs/2008.01714v2
- Date: Tue, 9 Mar 2021 16:37:56 GMT
- Title: Macroeconomic Data Transformations Matter
- Authors: Philippe Goulet Coulombe, Maxime Leroux, Dalibor Stevanovic,
Stéphane Surprenant
- Abstract summary: In a low-dimensional linear regression setup, considering linear transformations/combinations of predictors does not alter predictions.
This is precisely the fabric of the machine learning (ML) macroeconomic forecasting environment.
It is found that traditional factors should almost always be included as predictors.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a low-dimensional linear regression setup, considering linear
transformations/combinations of predictors does not alter predictions. However,
when the forecasting technology either uses shrinkage or is nonlinear, it does.
This is precisely the fabric of the machine learning (ML) macroeconomic
forecasting environment. Pre-processing of the data translates to an alteration
of the regularization -- explicit or implicit -- embedded in ML algorithms. We
review old transformations and propose new ones, then empirically evaluate
their merits in a substantial pseudo-out-of-sample exercise. It is found that
traditional factors should almost always be included as predictors and moving
average rotations of the data can provide important gains for various
forecasting targets. Also, we note that while predicting directly the average
growth rate is equivalent to averaging separate horizon forecasts when using
OLS-based techniques, the latter can substantially improve on the former when
regularization and/or nonparametric nonlinearities are involved.
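To make the opening claim concrete, here is a minimal sketch (not the authors' code; the synthetic data, scikit-learn estimators, and arbitrary invertible recombination matrix are assumptions for illustration only). OLS forecasts are unchanged when the predictors are linearly recombined, whereas a shrinkage estimator such as ridge produces different forecasts, because the penalty is expressed in the transformed coordinates. The paper's factor and moving-average rotations are structured instances of such recombinations.
```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
n, p = 120, 10                           # small macro-style sample: 120 obs, 10 predictors
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.5 * rng.standard_normal(n)

R = rng.standard_normal((p, p))          # arbitrary (almost surely invertible) recombination
XR = X @ R                               # linearly transformed predictors
x_new = rng.standard_normal((1, p))      # one new observation to forecast

# OLS: forecasts are invariant to an invertible linear transformation of X
ols = LinearRegression().fit(X, y)
ols_rot = LinearRegression().fit(XR, y)
print(np.allclose(ols.predict(x_new), ols_rot.predict(x_new @ R)))      # True

# Ridge: the penalty depends on the coordinates, so forecasts change
ridge = Ridge(alpha=10.0).fit(X, y)
ridge_rot = Ridge(alpha=10.0).fit(XR, y)
print(np.allclose(ridge.predict(x_new), ridge_rot.predict(x_new @ R)))  # False
```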
Related papers
- Progression: an extrapolation principle for regression [0.0]
We propose a novel statistical extrapolation principle.
It assumes a simple relationship between predictors and the response at the boundary of the training predictor samples.
Our semi-parametric method, progression, leverages this extrapolation principle and offers guarantees on the approximation error beyond the training data range.
arXiv Detail & Related papers (2024-10-30T17:29:51Z) - Learning Augmentation Policies from A Model Zoo for Time Series Forecasting [58.66211334969299]
We introduce AutoTSAug, a learnable data augmentation method based on reinforcement learning.
By augmenting the marginal samples with a learnable policy, AutoTSAug substantially improves forecasting performance.
arXiv Detail & Related papers (2024-09-10T07:34:19Z) - Forecasting inflation using disaggregates and machine learning [0.0]
We consider different disaggregation levels for inflation and employ a range of traditional time series techniques as well as linear and nonlinear machine learning (ML) models to deal with a larger number of predictors.
For many forecast horizons, the aggregation of disaggregated forecasts performs just as well as survey-based expectations and models that generate forecasts using the aggregate directly.
Our results reinforce the benefits of using models in a data-rich environment for inflation forecasting, including aggregating disaggregated forecasts from ML techniques.
arXiv Detail & Related papers (2023-08-22T04:01:40Z) - On LASSO for High Dimensional Predictive Regression [0.0]
This paper examines LASSO, a widely-used $L_1$-penalized regression method, in high dimensional linear predictive regressions.
The consistency of LASSO hinges on two key ingredients: the deviation bound of the cross product between the regressors and the error term, and the restricted eigenvalue condition on the Gram matrix of the regressors.
Using machine learning and macroeconomic domain expertise, LASSO demonstrates strong performance in forecasting the unemployment rate.
arXiv Detail & Related papers (2022-12-14T06:14:58Z) - CovarianceNet: Conditional Generative Model for Correct Covariance
Prediction in Human Motion Prediction [71.31516599226606]
We present a new method to correctly predict the uncertainty associated with the predicted distribution of future trajectories.
Our approach, CovarianceNet, is based on a Conditional Generative Model with Gaussian latent variables.
arXiv Detail & Related papers (2021-09-07T09:38:24Z) - SLOE: A Faster Method for Statistical Inference in High-Dimensional
Logistic Regression [68.66245730450915]
We develop an improved method for debiasing predictions and estimating frequentist uncertainty for practical datasets.
Our main contribution is SLOE, an estimator of the signal strength with convergence guarantees that reduces the computation time of estimation and inference by orders of magnitude.
arXiv Detail & Related papers (2021-03-23T17:48:56Z) - Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Algorithmic inductive biases are central to preventing overfitting in practice.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD and that of ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z) - LQF: Linear Quadratic Fine-Tuning [114.3840147070712]
We present the first method for linearizing a pre-trained model that achieves comparable performance to non-linear fine-tuning.
LQF consists of simple modifications to the architecture, loss function and optimization typically used for classification.
arXiv Detail & Related papers (2020-12-21T06:40:20Z) - Learning Invariances in Neural Networks [51.20867785006147]
We show how to parameterize a distribution over augmentations and optimize the training loss simultaneously with respect to the network parameters and augmentation parameters.
We can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction from a large space of augmentations.
arXiv Detail & Related papers (2020-10-22T17:18:48Z) - How is Machine Learning Useful for Macroeconomic Forecasting? [0.0]
We study the usefulness of the underlying features driving ML gains over standard macroeconometric methods.
We distinguish four so-called features (nonlinearities, regularization, cross-validation and alternative loss function) and study their behavior in both the data-rich and data-poor environments.
This suggests that Machine Learning is useful for macroeconomic forecasting by mostly capturing important nonlinearities that arise in the context of uncertainty and financial frictions.
arXiv Detail & Related papers (2020-08-28T04:23:52Z) - A Locally Adaptive Interpretable Regression [7.4267694612331905]
Linear regression is one of the most interpretable prediction models.
In this work, we introduce a locally adaptive interpretable regression (LoAIR).
Our model achieves comparable or better predictive performance than the other state-of-the-art baselines.
arXiv Detail & Related papers (2020-05-07T09:26:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.