Deep Partial Least Squares for Empirical Asset Pricing
- URL: http://arxiv.org/abs/2206.10014v1
- Date: Mon, 20 Jun 2022 21:30:39 GMT
- Title: Deep Partial Least Squares for Empirical Asset Pricing
- Authors: Matthew F. Dixon, Nicholas G. Polson and Kemen Goicoechea
- Abstract summary: We use deep partial least squares (DPLS) to estimate an asset pricing model for individual stock returns.
The novel contribution is to resolve the nonlinear factor structure, thus advancing the current paradigm of deep learning in empirical asset pricing.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We use deep partial least squares (DPLS) to estimate an asset pricing model
for individual stock returns that exploits conditioning information in a
flexible and dynamic way while attributing excess returns to a small set of
statistical risk factors. The novel contribution is to resolve the non-linear
factor structure, thus advancing the current paradigm of deep learning in
empirical asset pricing which uses linear stochastic discount factors under an
assumption of Gaussian asset returns and factors. This non-linear factor
structure is extracted by using projected least squares to jointly project firm
characteristics and asset returns on to a subspace of latent factors and using
deep learning to learn the non-linear map from the factor loadings to the asset
returns. The result of capturing this non-linear risk factor structure is to
characterize anomalies in asset returns by both linear risk factor exposure and
interaction effects. Thus the well-known ability of deep learning to capture
outliers sheds light on the role of convexity and higher-order terms in the
latent factor structure in the factor risk premia. On the empirical side, we
implement our DPLS factor models and exhibit superior performance to LASSO and
plain vanilla deep learning models. Furthermore, our network training times are
significantly reduced due to the more parsimonious architecture of DPLS.
Specifically, using 3290 assets in the Russell 1000 index over a period of
December 1989 to January 2018, we assess our DPLS factor model and generate
information ratios that are approximately 1.2x greater than those of plain deep learning models. DPLS
explains variation and pricing errors and identifies the most prominent latent
factors and firm characteristics.
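The two-stage procedure described in the abstract, projecting firm characteristics and returns onto latent factors with partial least squares and then learning a nonlinear map from the projections to returns, can be sketched on synthetic data. The following is a minimal NumPy illustration (a NIPALS PLS1 projection plus a one-hidden-layer network trained by gradient descent), not the paper's actual DPLS implementation; all function names and hyperparameters are hypothetical.

```python
import numpy as np

def pls_scores(X, y, n_components):
    """Project characteristics X onto latent factor scores via PLS1 (NIPALS).

    X: (n_assets, n_characteristics), y: (n_assets,) excess returns.
    Returns T: (n_assets, n_components) latent factor scores.
    """
    Xk = X - X.mean(axis=0)
    yk = y - y.mean()
    scores = []
    for _ in range(n_components):
        w = Xk.T @ yk                        # direction of maximal covariance with y
        w /= np.linalg.norm(w)
        t = Xk @ w                           # latent score for this component
        p = Xk.T @ t / (t @ t)               # X loadings
        Xk = Xk - np.outer(t, p)             # deflate X
        yk = yk - (yk @ t) / (t @ t) * t     # deflate y
        scores.append(t)
    return np.column_stack(scores)

def fit_nonlinear_map(T, y, hidden=8, lr=0.05, epochs=2000, seed=0):
    """Fit a one-hidden-layer tanh network mapping latent scores T to returns y."""
    rng = np.random.default_rng(seed)
    n, k = T.shape
    W1 = rng.normal(0.0, 0.5, (k, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(T @ W1 + b1)
        err = (h @ W2 + b2) - y              # residual of current prediction
        dh = np.outer(err, W2) * (1.0 - h ** 2)
        W2 -= lr * h.T @ err / n
        b2 -= lr * err.mean()
        W1 -= lr * T.T @ dh / n
        b1 -= lr * dh.mean(axis=0)
    return lambda Tn: np.tanh(Tn @ W1 + b1) @ W2 + b2

# Toy data: returns driven by a nonlinear function of two characteristics plus a linear term.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = np.tanh(2.0 * (X[:, 0] + X[:, 1])) + 0.5 * X[:, 2] + 0.1 * rng.normal(size=500)

T = pls_scores(X, y, n_components=3)
model = fit_nonlinear_map(T, y)
resid = model(T) - y
print("in-sample R^2:", 1 - resid.var() / y.var())
```

The projection step compresses 20 characteristics into 3 latent scores before the network sees them, which is the source of the parsimony the abstract attributes to DPLS.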
Related papers
- Sparsing Law: Towards Large Language Models with Greater Activation Sparsity [62.09617609556697]
Activation sparsity denotes the presence of many weakly-contributing elements in activation outputs that can be eliminated.
We propose PPL-$p\%$ sparsity, a precise and performance-aware activation sparsity metric.
We show that ReLU is more efficient as the activation function than SiLU and can leverage more training data to improve activation sparsity.
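As a rough illustration of why ReLU yields more activation sparsity than SiLU, the naive zero-fraction measure can be computed directly. The paper's PPL-$p\%$ metric is performance-aware and more involved; this sketch only shows the basic phenomenon, with hypothetical shapes and names.

```python
import numpy as np

def zero_fraction(acts, tol=1e-6):
    """Fraction of activation outputs that are (near-)zero, i.e. candidates for elimination."""
    return float(np.mean(np.abs(acts) <= tol))

rng = np.random.default_rng(0)
pre = rng.normal(size=(64, 1024))        # pre-activation values of one hypothetical FFN layer

relu_out = np.maximum(pre, 0.0)          # ReLU zeroes out every negative input
silu_out = pre / (1.0 + np.exp(-pre))    # SiLU = x * sigmoid(x): small but nonzero almost everywhere

print("ReLU sparsity:", zero_fraction(relu_out))   # ~0.5 for zero-mean inputs
print("SiLU sparsity:", zero_fraction(silu_out))
```

ReLU produces exact zeros on roughly half of zero-mean inputs, while SiLU's outputs are almost never exactly zero, so only ReLU activations can be skipped without approximation.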
arXiv Detail & Related papers (2024-11-04T17:59:04Z)
- FactorLLM: Factorizing Knowledge via Mixture of Experts for Large Language Models [50.331708897857574]
We introduce FactorLLM, a novel approach that decomposes well-trained dense FFNs into sparse sub-networks without requiring any further modifications.
FactorLLM achieves comparable performance to the source model, retaining up to 85% of its performance while obtaining over a 30% increase in inference speed.
arXiv Detail & Related papers (2024-08-15T16:45:16Z)
- KAN based Autoencoders for Factor Models [13.512750745176664]
Inspired by recent advances in Kolmogorov-Arnold Networks (KANs), we introduce a novel approach to latent factor conditional asset pricing models.
Our method introduces a KAN-based autoencoder which surpasses existing models in both accuracy and interpretability.
Our model offers enhanced flexibility in approximating exposures as nonlinear functions of asset characteristics, while simultaneously providing users with an intuitive framework for interpreting latent factors.
arXiv Detail & Related papers (2024-08-04T02:02:09Z)
- Application of Deep Learning for Factor Timing in Asset Management [21.212548040046133]
More flexible models better explain the variance in factor premia over the unseen period.
For flexible models like neural networks, the optimal weights based on their predictions tend to be unstable.
We verify that reducing the rebalancing frequency according to the historical optimal rebalancing scheme can help reduce transaction costs.
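The claim that lowering the rebalancing frequency reduces transaction costs can be illustrated with a hypothetical weight path: acting on every third monthly signal can only shrink total turnover (by the triangle inequality), and cost scales with turnover under a linear cost model. All numbers below are invented for illustration.

```python
import numpy as np

def total_turnover(weight_path):
    """Sum of absolute weight changes across rebalances: (T, n_assets) -> scalar."""
    return float(np.abs(np.diff(weight_path, axis=0)).sum())

rng = np.random.default_rng(0)
n_periods, n_assets = 36, 10
monthly = rng.dirichlet(np.ones(n_assets), size=n_periods)  # monthly target weights
quarterly = monthly[::3]                                    # act on every third signal only

cost_bps = 10  # assumed linear cost: 10 bps per unit of turnover
for name, w in [("monthly", monthly), ("quarterly", quarterly)]:
    t = total_turnover(w)
    print(f"{name}: turnover={t:.2f}, cost={t * cost_bps / 1e4:.4%}")
```

Because the quarterly path visits a subsequence of the same weight targets, its total turnover is never larger than the monthly path's, so the linear-cost bill can only go down.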
arXiv Detail & Related papers (2024-04-27T21:57:17Z)
- Provable Risk-Sensitive Distributional Reinforcement Learning with General Function Approximation [54.61816424792866]
We introduce a general framework on Risk-Sensitive Distributional Reinforcement Learning (RS-DisRL), with static Lipschitz Risk Measures (LRM) and general function approximation.
We design two innovative meta-algorithms: RS-DisRL-M, a model-based strategy for model-based function approximation, and RS-DisRL-V, a model-free approach for general value function approximation.
arXiv Detail & Related papers (2024-02-28T08:43:18Z)
- Layer-wise Feedback Propagation [53.00944147633484]
We present Layer-wise Feedback Propagation (LFP), a novel training approach for neural-network-like predictors.
LFP assigns rewards to individual connections based on their respective contributions to solving a given task.
We demonstrate its effectiveness in achieving comparable performance to gradient descent on various models and datasets.
arXiv Detail & Related papers (2023-08-23T10:48:28Z)
- Factor Investing with a Deep Multi-Factor Model [123.52358449455231]
We develop a novel deep multi-factor model that adopts industry neutralization and market neutralization modules with clear financial insights.
Tests on real-world stock market data demonstrate the effectiveness of our deep multi-factor model.
arXiv Detail & Related papers (2022-10-22T14:47:11Z)
- Deep Learning Based Residuals in Non-linear Factor Models: Precision Matrix Estimation of Returns with Low Signal-to-Noise Ratio [0.0]
This paper introduces a consistent estimator and rate of convergence for the precision matrix of asset returns in large portfolios.
Our estimator remains valid even in low signal-to-noise ratio environments typical for financial markets.
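A generic way to see why precision-matrix estimation is hard in low signal-to-noise, high-dimensional settings is to compare the sample covariance with a linearly shrunk one before inversion. This sketch uses a fixed shrinkage intensity toward a scaled identity (Ledoit-Wolf-style estimators choose the intensity from the data) and is not the paper's residual-based estimator.

```python
import numpy as np

def shrunk_precision(returns, shrink=0.2):
    """Invert a linearly shrunk covariance: (1 - s) * S + s * mu * I.

    `shrink` is a hypothetical fixed intensity; data-driven estimators pick it adaptively.
    """
    S = np.cov(returns, rowvar=False)
    mu = np.trace(S) / S.shape[0]           # shrink toward a scaled identity target
    Sigma = (1 - shrink) * S + shrink * mu * np.eye(S.shape[0])
    return np.linalg.inv(Sigma)

rng = np.random.default_rng(0)
n_obs, n_assets = 120, 100                  # nearly as many assets as observations
R = rng.normal(scale=0.02, size=(n_obs, n_assets))

S = np.cov(R, rowvar=False)
P = shrunk_precision(R)
print("condition number, sample cov:", np.linalg.cond(S))
print("condition number, shrunk cov:", np.linalg.cond(np.linalg.inv(P)))
```

With the panel nearly square, the sample covariance is close to singular and its inverse amplifies estimation noise; shrinkage bounds the smallest eigenvalue away from zero, which is why some form of regularization is essential in this regime.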
arXiv Detail & Related papers (2022-09-09T20:29:54Z)
- The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer Linear Networks [51.1848572349154]
Neural network models that perfectly fit noisy data can generalize well to unseen test data.
We consider interpolating two-layer linear neural networks trained with gradient flow on the squared loss and derive bounds on the excess risk.
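The interpolation-plus-generalization phenomenon for linear predictors (the gradient-flow limit of a two-layer linear network behaves like a minimum-norm interpolator) can be reproduced on a spiked synthetic design, where one high-variance direction carries the signal and many low-variance directions absorb the label noise. The design below is a hypothetical illustration, not the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 1000  # heavily overparameterized: many more features than samples

def features(m):
    """Spiked design: one high-variance signal direction plus many tiny directions."""
    signal = rng.normal(size=(m, 1))
    tail = 0.1 * rng.normal(size=(m, d - 1))
    return np.hstack([signal, tail])

X = features(n)
y = X[:, 0] + 0.1 * rng.normal(size=n)   # labels: signal coordinate plus noise

w = np.linalg.pinv(X) @ y                # minimum-norm interpolator of the training data

train_mse = np.mean((X @ w - y) ** 2)
X_test = features(2000)
test_mse = np.mean((X_test @ w - X_test[:, 0]) ** 2)
print("train MSE:", train_mse)           # essentially zero: the model interpolates the noise
print("test MSE:", test_mse)             # small relative to the unit signal variance
```

The low-variance tail directions soak up the label noise at little cost on fresh data, so the interpolator still tracks the signal direction; on an isotropic design of the same size this benign behavior disappears.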
arXiv Detail & Related papers (2021-08-25T22:01:01Z)
- Deep Risk Model: A Deep Learning Solution for Mining Latent Risk Factors to Improve Covariance Matrix Estimation [8.617532047238461]
We propose a deep learning solution to effectively "design" risk factors with neural networks.
Our method can obtain $1.9\%$ higher explained variance measured by $R^2$ and also reduce the risk of a global minimum variance portfolio.
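Explained variance of latent risk factors is commonly reported as the $R^2$ of returns against the extracted factors. The sketch below uses plain PCA as a stand-in for the paper's neural factor design, on simulated returns with a known 3-factor structure; all names and dimensions are hypothetical.

```python
import numpy as np

def explained_variance_r2(returns, n_factors):
    """R^2 of a return panel explained by its top principal-component risk factors."""
    R = returns - returns.mean(axis=0)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    F = U[:, :n_factors] * s[:n_factors]     # factor return series (scores)
    B = Vt[:n_factors]                       # factor loadings on assets
    resid = R - F @ B                        # idiosyncratic residuals
    return 1.0 - resid.var() / R.var()

# Simulated panel: 3 true factors plus idiosyncratic noise.
rng = np.random.default_rng(0)
n_obs, n_assets, k_true = 500, 50, 3
factors = rng.normal(size=(n_obs, k_true))
loadings = rng.normal(size=(k_true, n_assets))
returns = factors @ loadings + 0.5 * rng.normal(size=(n_obs, n_assets))

for k in (1, 3, 10):
    print(f"k={k}: R^2 = {explained_variance_r2(returns, k):.3f}")
```

Explained variance rises steeply up to the true factor count and then flattens, which is the kind of diagnostic used to compare learned risk factors against classical ones.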
arXiv Detail & Related papers (2021-07-12T05:30:50Z)
- The Low-volatility Anomaly and the Adaptive Multi-Factor Model [0.0]
The paper provides a new explanation of the low-volatility anomaly.
We use the Adaptive Multi-Factor (AMF) model estimated by the Groupwise Interpretable Basis Selection (GIBS) algorithm to find basis assets significantly related to low and high volatility portfolios.
arXiv Detail & Related papers (2020-03-16T20:08:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.