KAN based Autoencoders for Factor Models
- URL: http://arxiv.org/abs/2408.02694v1
- Date: Sun, 4 Aug 2024 02:02:09 GMT
- Title: KAN based Autoencoders for Factor Models
- Authors: Tianqi Wang, Shubham Singh
- Abstract summary: Inspired by recent advances in Kolmogorov-Arnold Networks (KANs), we introduce a novel approach to latent factor conditional asset pricing models.
Our method introduces a KAN-based autoencoder which surpasses MLP models in both accuracy and interpretability.
Our model offers enhanced flexibility in approximating exposures as nonlinear functions of asset characteristics, while simultaneously providing users with an intuitive framework for interpreting latent factors.
- Score: 13.512750745176664
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inspired by recent advances in Kolmogorov-Arnold Networks (KANs), we introduce a novel approach to latent factor conditional asset pricing models. While previous machine learning applications in asset pricing have predominantly used Multilayer Perceptrons with ReLU activation functions to model latent factor exposures, our method introduces a KAN-based autoencoder which surpasses MLP models in both accuracy and interpretability. Our model offers enhanced flexibility in approximating exposures as nonlinear functions of asset characteristics, while simultaneously providing users with an intuitive framework for interpreting latent factors. Empirical backtesting demonstrates our model's superior ability to explain cross-sectional risk exposures. Moreover, long-short portfolios constructed using our model's predictions achieve higher Sharpe ratios, highlighting its practical value in investment management.
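The sketch below is a minimal, illustrative reading of the architecture described in the abstract, not the authors' implementation. It follows the standard conditional-autoencoder template for asset pricing (a "beta network" maps lagged asset characteristics to factor exposures, while a "factor network" extracts latent factors from the cross-section of returns), with KAN-style layers in the beta network. The per-edge univariate functions are approximated here with a radial-basis expansion rather than B-splines, and all layer sizes, variable names, and the use of characteristic-managed portfolios as factor-network inputs are assumptions.

```python
# Illustrative sketch only: a simplified KAN-style autoencoder for a latent
# factor asset pricing model. Dimensions, names, and the RBF approximation of
# the per-edge univariate functions are assumptions, not the paper's design.
import torch
import torch.nn as nn


class KANLayer(nn.Module):
    """KAN-style layer: each output is a sum of learnable univariate functions
    of each input, parameterised with Gaussian radial basis functions on a
    fixed grid (a common simplification of the B-spline parameterisation)."""

    def __init__(self, in_dim, out_dim, num_basis=8, grid_range=(-2.0, 2.0)):
        super().__init__()
        grid = torch.linspace(*grid_range, num_basis)
        self.register_buffer("grid", grid)                     # (num_basis,)
        self.bandwidth = (grid_range[1] - grid_range[0]) / (num_basis - 1)
        # One coefficient per (input, output, basis function) triple.
        self.coef = nn.Parameter(torch.randn(in_dim, out_dim, num_basis) * 0.1)
        # Residual linear term to stabilise training.
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):                                      # x: (batch, in_dim)
        # Evaluate the RBF basis at every input coordinate: (batch, in_dim, num_basis)
        phi = torch.exp(-((x.unsqueeze(-1) - self.grid) / self.bandwidth) ** 2)
        # Sum the per-edge univariate functions over inputs: (batch, out_dim)
        spline = torch.einsum("bip,iop->bo", phi, self.coef)
        return spline + self.linear(x)


class KANFactorAutoencoder(nn.Module):
    """Conditional autoencoder: a KAN beta network maps asset characteristics
    to factor exposures; a linear factor network extracts latent factors from
    characteristic-managed portfolio returns."""

    def __init__(self, num_chars, num_factors, hidden=32):
        super().__init__()
        self.beta_net = nn.Sequential(
            KANLayer(num_chars, hidden),
            KANLayer(hidden, num_factors),
        )
        self.factor_net = nn.Linear(num_chars, num_factors, bias=False)

    def forward(self, characteristics, returns):
        # characteristics: (N, P) lagged asset characteristics
        # returns:         (N,)   contemporaneous asset returns
        beta = self.beta_net(characteristics)                  # (N, K) exposures
        portfolios = characteristics.t() @ returns             # (P,) managed portfolios
        factors = self.factor_net(portfolios)                  # (K,) latent factors
        return beta @ factors                                  # (N,) fitted returns


# Minimal usage on synthetic data.
N, P, K = 500, 20, 5
model = KANFactorAutoencoder(num_chars=P, num_factors=K)
chars, rets = torch.randn(N, P), torch.randn(N)
fitted = model(chars, rets)
loss = torch.mean((fitted - rets) ** 2)
loss.backward()
```

For the Sharpe-ratio claim, the evaluation is presumably a standard prediction-sorted long-short backtest. A hedged sketch follows; the equal-weighted quantile construction and the annualisation factor are assumptions.

```python
# Illustrative sketch of a prediction-sorted long-short backtest (assumed
# equal-weighted legs and monthly rebalancing), not the paper's exact protocol.
import numpy as np


def long_short_sharpe(pred, realized, quantile=0.1, periods_per_year=12):
    """pred, realized: (T, N) arrays of predicted and realized asset returns."""
    spreads = []
    for t in range(pred.shape[0]):
        order = np.argsort(pred[t])
        k = max(1, int(quantile * pred.shape[1]))
        short_leg = realized[t, order[:k]].mean()    # lowest predicted returns
        long_leg = realized[t, order[-k:]].mean()    # highest predicted returns
        spreads.append(long_leg - short_leg)
    spreads = np.asarray(spreads)
    return np.sqrt(periods_per_year) * spreads.mean() / spreads.std()
```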
Related papers
- NeuralFactors: A Novel Factor Learning Approach to Generative Modeling of Equities [0.0]
We introduce NeuralFactors, a novel machine-learning based approach to factor analysis where a neural network outputs factor exposures and factor returns.
We show that this model outperforms prior approaches in terms of log-likelihood performance and computational efficiency.
arXiv Detail & Related papers (2024-08-02T18:01:09Z) - Application of Deep Learning for Factor Timing in Asset Management [21.212548040046133]
More flexible models perform better at explaining the variance in factor premia over the unseen period.
For flexible models like neural networks, the optimal weights based on their prediction tend to be unstable.
We verify that lowering the rebalancing frequency, guided by the historical optimal rebalancing scheme, can help reduce transaction costs.
arXiv Detail & Related papers (2024-04-27T21:57:17Z) - When Parameter-efficient Tuning Meets General-purpose Vision-language Models [65.19127815275307]
PETAL revolutionizes the training process by requiring only 0.5% of the total parameters, achieved through a unique mode approximation technique.
Our experiments reveal that PETAL not only outperforms current state-of-the-art methods in most scenarios but also surpasses full fine-tuning models in effectiveness.
arXiv Detail & Related papers (2023-12-16T17:13:08Z) - Deep Partial Least Squares for Empirical Asset Pricing [0.4511923587827302]
We use deep partial least squares (DPLS) to estimate an asset pricing model for individual stock returns.
The novel contribution is to resolve the nonlinear factor structure, thus advancing the current paradigm of deep learning in empirical asset pricing.
arXiv Detail & Related papers (2022-06-20T21:30:39Z) - How robust are pre-trained models to distribution shift? [82.08946007821184]
We show how spurious correlations affect the performance of popular self-supervised learning (SSL) and auto-encoder based (AE) models.
We develop a novel evaluation scheme with the linear head trained on out-of-distribution (OOD) data, to isolate the performance of the pre-trained models from a potential bias of the linear head used for evaluation.
arXiv Detail & Related papers (2022-06-17T16:18:28Z) - Deep Sequence Modeling: Development and Applications in Asset Pricing [35.027865343844766]
We predict asset returns and measure risk premia using a prominent technique from artificial intelligence -- deep sequence modeling.
Because asset returns often exhibit sequential dependence that may not be effectively captured by conventional time series models, sequence modeling offers a promising path with its data-driven approach and superior performance.
arXiv Detail & Related papers (2021-08-20T04:40:55Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Adaptive learning for financial markets mixing model-based and model-free RL for volatility targeting [0.0]
Model-Free Reinforcement Learning has achieved meaningful results in stable environments but, to this day, it remains problematic in regime changing environments like financial markets.
We propose to combine the best of the two techniques by selecting various model-based approaches thanks to Model-Free Deep Reinforcement Learning.
arXiv Detail & Related papers (2021-04-19T19:20:22Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - On the model-based stochastic value gradient for continuous reinforcement learning [50.085645237597056]
We show that simple model-based agents can outperform state-of-the-art model-free agents in terms of both sample-efficiency and final reward.
Our findings suggest that model-based policy evaluation deserves closer attention.
arXiv Detail & Related papers (2020-08-28T17:58:29Z) - Towards Interpretable Deep Learning Models for Knowledge Tracing [62.75876617721375]
We propose to adopt a post-hoc method to tackle the interpretability issue for deep learning based knowledge tracing (DLKT) models.
Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model.
Experiment results show the feasibility of using the LRP method for interpreting the DLKT model's predictions.
arXiv Detail & Related papers (2020-05-13T04:03:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.