Time-Varying Parameters as Ridge Regressions
- URL: http://arxiv.org/abs/2009.00401v4
- Date: Thu, 14 Nov 2024 23:15:27 GMT
- Title: Time-Varying Parameters as Ridge Regressions
- Authors: Philippe Goulet Coulombe
- Abstract summary: Time-varying parameter (TVP) models are frequently used in economics to capture structural change.
I highlight a rather underutilized fact -- that these are actually ridge regressions.
I use it to study the evolution of monetary policy in Canada using large time-varying local projections.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time-varying parameter (TVP) models are frequently used in economics to capture structural change. I highlight a rather underutilized fact -- that these are actually ridge regressions. Instantly, this makes computations, tuning, and implementation much easier than in the state-space paradigm. Among other things, solving the equivalent dual ridge problem is computationally very fast even in high dimensions, and the crucial "amount of time variation" is tuned by cross-validation. Evolving volatility is dealt with using a two-step ridge regression. I consider extensions that incorporate sparsity (the algorithm selects which parameters vary and which do not) and reduced-rank restrictions (variation is tied to a factor model). To demonstrate the usefulness of the approach, I use it to study the evolution of monetary policy in Canada using large time-varying local projections. The application requires the estimation of about 4600 TVPs, a task well within the reach of the new method.
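To make the equivalence concrete, below is a minimal numpy sketch (not the paper's code) of a random-walk TVP regression estimated through its dual ridge problem. The function name `tvp_ridge` and the toy data are illustrative; the penalty `lam` stands in for the "amount of time variation" that the paper tunes by cross-validation, the initial level is simply penalized along with the increments, and evolving volatility (the paper's two-step ridge) is omitted.

```python
import numpy as np

def tvp_ridge(y, X, lam):
    """Random-walk TVP regression solved as an (equivalent) ridge problem.

    Model: y_t = x_t' beta_t + e_t,  beta_t = beta_{t-1} + u_t.
    Writing beta_t as the cumulative sum of increments u_1, ..., u_t turns
    the model into a ridge regression on expanded regressors Z, where the
    penalty on the increments controls the amount of time variation.
    """
    T, K = X.shape
    # Row t of Z holds x_t repeated in blocks 1..t (zeros afterwards), so
    # that Z @ u reproduces x_t' (u_1 + ... + u_t) = x_t' beta_t.
    Z = np.zeros((T, T * K))
    for t in range(T):
        Z[t, : (t + 1) * K] = np.tile(X[t], t + 1)
    # Dual ridge solution: a T x T linear solve instead of (T*K) x (T*K).
    alpha = np.linalg.solve(Z @ Z.T + lam * np.eye(T), y)
    u_hat = Z.T @ alpha
    # Recover the coefficient paths beta_1, ..., beta_T.
    # (For simplicity the initial level is penalized like the increments.)
    return np.cumsum(u_hat.reshape(T, K), axis=0)

# Toy usage: one regressor whose coefficient drifts upward over the sample.
rng = np.random.default_rng(0)
T = 200
X = rng.normal(size=(T, 1))
true_path = np.linspace(0.5, 2.0, T)[:, None]
y = (X * true_path).sum(axis=1) + 0.1 * rng.normal(size=T)
betas = tvp_ridge(y, X, lam=1.0)  # in practice lam is tuned by cross-validation
```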
Related papers
- Adaptive time series forecasting with Markovian variance switching [1.2891210250935148]
We propose a new way of estimating variances based on online learning theory.
We adapt expert aggregation methods to learn the variances over time.
We show that this method is robust to misspecification and outperforms traditional expert aggregation.
arXiv Detail & Related papers (2024-02-22T16:40:55Z)
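As a loose illustration of the expert-aggregation idea in the entry above, here is a hedged sketch, not the paper's algorithm: each "expert" is a fixed candidate variance, and experts are reweighted online by their Gaussian log-loss so the aggregate tracks regime switches. All names and parameters are illustrative.

```python
import numpy as np

def aggregate_variances(residuals, candidate_vars, eta=0.5):
    """Track a time-varying variance by exponentially weighted aggregation.

    Each 'expert' is a fixed candidate variance; experts are reweighted by
    the Gaussian log-loss of the observed residuals, so the aggregate
    adapts when the true variance switches regimes.
    """
    m = len(candidate_vars)
    log_w = np.zeros(m)                       # log-weights, start uniform
    estimates = []
    for r in residuals:
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        estimates.append(w @ candidate_vars)  # aggregated variance estimate
        # Gaussian negative log-likelihood of r under each candidate variance
        loss = 0.5 * (np.log(2 * np.pi * candidate_vars) + r**2 / candidate_vars)
        log_w -= eta * loss                   # exponential reweighting
    return np.array(estimates)

rng = np.random.default_rng(1)
# Toy data: the variance switches from 1.0 to 4.0 halfway through the sample.
res = np.concatenate([rng.normal(0, 1.0, 300), rng.normal(0, 2.0, 300)])
est = aggregate_variances(res, candidate_vars=np.array([0.5, 1.0, 2.0, 4.0]))
```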
- Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model [89.8764435351222]
We propose a new family of unbiased estimators called WTA-CRS for matrix multiplication with reduced variance.
Our work provides both theoretical and experimental evidence that, in the context of tuning transformers, our proposed estimators exhibit lower variance compared to existing ones.
arXiv Detail & Related papers (2023-05-24T15:52:08Z)
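For background on the entry above, here is a minimal sketch of classical column-row sampling, the textbook unbiased estimator that WTA-CRS refines (this is not WTA-CRS itself; the function name and toy data are illustrative).

```python
import numpy as np

def crs_matmul(A, B, s, rng):
    """Unbiased column-row sampling estimate of A @ B using s samples.

    Pair k contributes the outer product of column A[:, k] and row B[k, :].
    Sampling pairs with probability proportional to ||A[:, k]|| * ||B[k, :]||
    (the classical variance-minimizing choice) and reweighting each draw by
    1 / (s * p_k) keeps the estimator unbiased.
    """
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(len(p), size=s, replace=True, p=p)
    est = np.zeros((A.shape[0], B.shape[1]))
    for k in idx:
        est += np.outer(A[:, k], B[k, :]) / (s * p[k])
    return est

rng = np.random.default_rng(2)
A, B = rng.normal(size=(50, 200)), rng.normal(size=(200, 30))
approx = crs_matmul(A, B, s=64, rng=rng)   # E[approx] = A @ B
```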
- Dual-sPLS: a family of Dual Sparse Partial Least Squares regressions for feature selection and prediction with tunable sparsity; evaluation on simulated and near-infrared (NIR) data [1.6099403809839032]
The variant presented in this paper, Dual-sPLS, generalizes the classical PLS1 algorithm.
It provides balance between accurate prediction and efficient interpretation.
Code is provided as an open-source package in R.
arXiv Detail & Related papers (2023-01-17T21:50:35Z)
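For context on the entry above, here is a minimal sketch of the classical PLS1 algorithm that Dual-sPLS generalizes. This is textbook PLS1 written in Python for consistency with the other sketches (the paper's own implementation is an R package); it assumes centered data, and all names are illustrative.

```python
import numpy as np

def pls1(X, y, n_components):
    """Classical PLS1: extract components maximizing covariance with y.

    At each step the weight vector is the normalized covariance X'y, the
    score is the corresponding projection of X, and X and y are deflated
    before the next component is extracted. Assumes X and y are centered.
    """
    X, y = X.copy().astype(float), y.copy().astype(float)
    W, P, q = [], [], []
    for _ in range(n_components):
        w = X.T @ y
        w /= np.linalg.norm(w)          # weight: direction of max covariance
        t = X @ w                       # score
        tt = t @ t
        p = X.T @ t / tt                # X loading
        c = y @ t / tt                  # y loading
        X -= np.outer(t, p)             # deflate before the next component
        y -= t * c
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    # Regression coefficients mapped back to the original X space.
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 10))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=100)
beta = pls1(X, y, n_components=2)       # predict with X @ beta
```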
- Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model such stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
arXiv Detail & Related papers (2022-04-18T17:53:44Z)
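The fixed-point idea in the entry above can be shown generically. Below is a hedged sketch of a DEQ-style solve with a toy contractive update map, not an optical flow model; real DEQ models use faster root-finders (e.g., Anderson acceleration) and implicit differentiation for training.

```python
import numpy as np

def deq_solve(f, x, z0, tol=1e-8, max_iter=500):
    """Solve z* = f(z*, x) by fixed-point iteration.

    A DEQ layer outputs the equilibrium z* of its update map rather than
    the result of a fixed number of unrolled recurrent steps.
    """
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy contractive update: z <- tanh(W z + U x). With small enough weights
# the map is a contraction, so iteration converges to a unique fixed point.
rng = np.random.default_rng(4)
W = 0.1 * rng.normal(size=(8, 8))
U = rng.normal(size=(8, 3))
f = lambda z, x: np.tanh(W @ z + U @ x)
z_star = deq_solve(f, x=rng.normal(size=3), z0=np.zeros(8))
```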
- ARCH: Efficient Adversarial Regularized Training with Caching [91.74682538906691]
Adversarial regularization can improve model generalization in many natural language processing tasks.
We propose a new adversarial regularization method ARCH, where perturbations are generated and cached once every several epochs.
We evaluate our proposed method on a set of neural machine translation and natural language understanding tasks.
arXiv Detail & Related papers (2021-09-15T02:05:37Z)
- Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing Regressions In NLP Model Updates [68.09049111171862]
This work focuses on quantifying, reducing and analyzing regression errors in NLP model updates.
We formulate regression-free model updates as a constrained optimization problem.
We empirically analyze how model ensemble reduces regression.
arXiv Detail & Related papers (2021-05-07T03:33:00Z)
- The temporal overfitting problem with applications in wind power curve modeling [8.057262184815636]
We propose a new method to tackle the temporal overfitting problem.
Our specific application in this paper is power curve modeling in wind energy.
arXiv Detail & Related papers (2020-12-02T17:39:57Z)
- An Improved Online Penalty Parameter Selection Procedure for $\ell_1$-Penalized Autoregressive with Exogenous Variables [1.472161528588343]
The lasso serves to regularize and provide feature selection.
The most popular penalty parameter selection approaches that respect time dependence are very computationally intensive.
We propose enhancing a canonical time series model with a novel online penalty parameter selection procedure.
arXiv Detail & Related papers (2020-10-15T08:32:27Z)
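To see why time-respecting penalty selection is expensive, here is a hedged sketch of the standard rolling-origin (expanding-window) cross-validation baseline that online procedures aim to avoid; it refits the lasso at every split. This uses scikit-learn's Lasso and is not the paper's online procedure; the function name and toy data are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def rolling_origin_lambda(X, y, lambdas, min_train=100, step=20):
    """Pick the lasso penalty by expanding-window (rolling-origin) CV.

    Each candidate penalty is scored by one-step-ahead squared error on
    held-out future points only, so time dependence is respected. This is
    the computationally intensive baseline: one refit per penalty per split.
    """
    T = len(y)
    scores = np.zeros(len(lambdas))
    for i, lam in enumerate(lambdas):
        errs = []
        for t in range(min_train, T - 1, step):
            model = Lasso(alpha=lam).fit(X[:t], y[:t])
            errs.append((y[t] - model.predict(X[t:t + 1])[0]) ** 2)
        scores[i] = np.mean(errs)
    return lambdas[int(np.argmin(scores))]

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 20))
y = X[:, 0] - X[:, 3] + 0.5 * rng.normal(size=300)
best_lam = rolling_origin_lambda(X, y, lambdas=np.array([0.01, 0.05, 0.1, 0.5]))
```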
- The Right Tool for the Job: Matching Model and Instance Complexities [62.95183777679024]
As NLP models become larger, executing a trained model requires significant computational resources, incurring monetary and environmental costs.
We propose a modification to contextual representation fine-tuning which, during inference, allows for an early (and fast) "exit" from neural network calculations for simple instances.
We test our proposed modification on five different datasets in two tasks: three text classification datasets and two natural language inference benchmarks.
arXiv Detail & Related papers (2020-04-16T04:28:08Z)
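The early-exit mechanism in the entry above can be illustrated generically: attach a small classifier to each layer and stop at the first layer whose softmax confidence clears a threshold. A hedged numpy sketch with illustrative shapes, not the paper's implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def early_exit_predict(hidden_states, exit_heads, threshold=0.9):
    """Return a prediction from the first layer confident enough to 'exit'.

    hidden_states: list of per-layer representations for one instance.
    exit_heads:    one (W, b) linear classifier per layer.
    Easy instances exit early (cheap); hard ones fall through to the last layer.
    """
    for depth, (h, (W, b)) in enumerate(zip(hidden_states, exit_heads)):
        probs = softmax(W @ h + b)
        if probs.max() >= threshold:            # confident enough: exit here
            return int(probs.argmax()), depth + 1
    return int(probs.argmax()), len(hidden_states)

rng = np.random.default_rng(6)
d, n_classes, n_layers = 16, 3, 4
states = [rng.normal(size=d) for _ in range(n_layers)]
heads = [(rng.normal(size=(n_classes, d)), np.zeros(n_classes))
         for _ in range(n_layers)]
label, layers_used = early_exit_predict(states, heads)
```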
- Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
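For readers unfamiliar with the model family, here is a minimal sketch of the classical exponential-kernel Hawkes intensity that THP generalizes by replacing the fixed parametric kernel with self-attention over the event history; all parameter values below are illustrative.

```python
import numpy as np

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    """Classical self-exciting Hawkes intensity with an exponential kernel.

    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
    Each past event temporarily raises the arrival rate of future events,
    with the excitation decaying at rate beta.
    """
    past = history[history < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events = np.array([0.5, 1.1, 1.3, 2.7])      # toy event times
rate_now = hawkes_intensity(3.0, events)     # intensity at t = 3.0
```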
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.