Deep Sequence Modeling: Development and Applications in Asset Pricing
- URL: http://arxiv.org/abs/2108.08999v1
- Date: Fri, 20 Aug 2021 04:40:55 GMT
- Title: Deep Sequence Modeling: Development and Applications in Asset Pricing
- Authors: Lin William Cong, Ke Tang, Jingyuan Wang, Yang Zhang
- Abstract summary: We predict asset returns and measure risk premia using a prominent technique from artificial intelligence -- deep sequence modeling.
Because asset returns often exhibit sequential dependence that may not be effectively captured by conventional time series models, sequence modeling offers a promising path with its data-driven approach and superior performance.
- Score: 35.027865343844766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We predict asset returns and measure risk premia using a prominent technique
from artificial intelligence -- deep sequence modeling. Because asset returns
often exhibit sequential dependence that may not be effectively captured by
conventional time series models, sequence modeling offers a promising path with
its data-driven approach and superior performance. In this paper, we first
overview the development of deep sequence models, introduce their applications
in asset pricing, and discuss their advantages and limitations. We then perform
a comparative analysis of these methods using data on U.S. equities. We
demonstrate how sequence modeling benefits investors in general through
incorporating complex historical path dependence, and that Long Short-Term
Memory (LSTM) based models tend to have the best out-of-sample performance.
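To make the reported LSTM result concrete, the following is a minimal, hypothetical sketch of an LSTM-based return predictor in PyTorch. The lookback window, layer sizes, and feature count are illustrative assumptions rather than the specification used in the paper; the sketch only shows how a sequence of firm characteristics can be mapped to a one-step-ahead return forecast.

```python
# Minimal illustrative sketch of an LSTM return predictor (PyTorch).
# All hyperparameters below are assumptions for exposition, not the paper's setup.
import torch
import torch.nn as nn

class LSTMReturnPredictor(nn.Module):
    """Maps a lookback window of firm characteristics to a next-period return forecast."""
    def __init__(self, n_features: int, hidden_size: int = 32, num_layers: int = 1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback, n_features); the last hidden state summarizes the path.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1]).squeeze(-1)  # (batch,) predicted next-period returns

if __name__ == "__main__":
    # Toy usage with random tensors standing in for monthly characteristics and returns.
    batch, lookback, n_features = 64, 12, 20
    model = LSTMReturnPredictor(n_features)
    x = torch.randn(batch, lookback, n_features)
    y = torch.randn(batch)
    loss = nn.MSELoss()(model(x), y)
    loss.backward()
    print(f"MSE on toy data: {loss.item():.4f}")
```

In practice one would fit such a model on rolling windows of standardized characteristics and evaluate it out of sample, which is the setting in which the paper reports LSTM-based models performing best.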
Related papers
- Disentangling Length Bias In Preference Learning Via Response-Conditioned Modeling [87.17041933863041]
We introduce a Response-conditioned Bradley-Terry (Rc-BT) model that enhances the reward model's ability to mitigate length bias and to follow length instructions.
We also propose the Rc-DPO algorithm to leverage the Rc-BT model for direct policy optimization (DPO) of large language models.
arXiv Detail & Related papers (2025-02-02T14:50:25Z) - Synthetic Data for Portfolios: A Throw of the Dice Will Never Abolish Chance [0.0]
This paper aims to contribute to a deeper understanding of the limitations of generative models, particularly in portfolio and risk management.
We highlight the inseparable nature of model development and the desired use case by touching on a paradox: generic generative models inherently care less about what is important for constructing portfolios.
arXiv Detail & Related papers (2025-01-07T18:50:24Z) - Optimizing Sequential Recommendation Models with Scaling Laws and Approximate Entropy [104.48511402784763]
The proposed Performance Law for sequential recommendation (SR) models theoretically investigates and models the relationship between model performance and data quality.
We propose Approximate Entropy (ApEn) to assess data quality, a more nuanced approach than traditional data-quantity metrics; a standard ApEn computation is sketched after this list.
arXiv Detail & Related papers (2024-11-30T10:56:30Z) - KAN based Autoencoders for Factor Models [13.512750745176664]
Inspired by recent advances in Kolmogorov-Arnold Networks (KANs), we introduce a novel approach to latent factor conditional asset pricing models.
Our method introduces a KAN-based autoencoder that surpasses existing models in both accuracy and interpretability.
Our model offers enhanced flexibility in approximating exposures as nonlinear functions of asset characteristics, while simultaneously providing users with an intuitive framework for interpreting latent factors.
arXiv Detail & Related papers (2024-08-04T02:02:09Z) - Data-Juicer Sandbox: A Feedback-Driven Suite for Multimodal Data-Model Co-development [67.55944651679864]
We present a new sandbox suite tailored for integrated data-model co-development.
This sandbox provides a feedback-driven experimental platform, enabling cost-effective and guided refinement of both data and models.
arXiv Detail & Related papers (2024-07-16T14:40:07Z) - GateLoop: Fully Data-Controlled Linear Recurrence for Sequence Modeling [0.0]
We develop GateLoop, a sequence model that generalizes linear recurrent models such as S4, S5, LRU and RetNet.
GateLoop empirically outperforms existing models for auto-regressive language modeling.
We prove that our approach can be interpreted as providing data-controlled relative-positional information to Attention.
arXiv Detail & Related papers (2023-11-03T14:08:39Z) - Unified Long-Term Time-Series Forecasting Benchmark [0.6526824510982802]
We present a comprehensive dataset designed explicitly for long-term time-series forecasting.
We incorporate a collection of datasets obtained from diverse, dynamic systems and real-life records.
To determine the most effective model in diverse scenarios, we conduct an extensive benchmarking analysis using classical and state-of-the-art models.
Our findings reveal intriguing performance comparisons among these models, highlighting the dataset-dependent nature of model effectiveness.
arXiv Detail & Related papers (2023-09-27T18:59:00Z) - Cross-Modal Fine-Tuning: Align then Refine [83.37294254884446]
ORCA is a cross-modal fine-tuning framework that extends the applicability of a single large-scale pretrained model to diverse modalities.
We show that ORCA obtains state-of-the-art results on 3 benchmarks containing over 60 datasets from 12 modalities.
arXiv Detail & Related papers (2023-02-11T16:32:28Z) - Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization [60.73540999409032]
We show that expressive autoregressive dynamics models generate the dimensions of the next state and reward sequentially, each conditioned on previously generated dimensions.
We also show that autoregressive dynamics models are useful for offline policy optimization by serving as a way to enrich the replay buffer.
arXiv Detail & Related papers (2021-04-28T16:48:44Z) - S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
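As flagged in the Approximate Entropy entry above, here is a hedged sketch of the textbook ApEn(m, r) statistic in NumPy. It illustrates the irregularity measure that paper proposes as a data-quality proxy, not that paper's exact pipeline; the embedding dimension m = 2 and tolerance r = 0.2 × standard deviation are conventional defaults assumed for the example.

```python
# Textbook Approximate Entropy (ApEn) for a 1-D series, used as a data-quality proxy.
# Defaults (m=2, r=0.2*std) are conventional assumptions, not taken from the paper.
import numpy as np

def approximate_entropy(series, m: int = 2, r_factor: float = 0.2) -> float:
    """ApEn(m, r): lower values indicate a more regular (more predictable) series."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    r = r_factor * x.std()

    def phi(m: int) -> float:
        # Embed the series into overlapping windows of length m.
        windows = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of windows.
        dist = np.max(np.abs(windows[:, None, :] - windows[None, :, :]), axis=-1)
        # Fraction of windows within tolerance r of each window (self-matches included).
        counts = (dist <= r).mean(axis=1)
        return np.log(counts).mean()

    return phi(m) - phi(m + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    regular = np.sin(np.linspace(0, 20 * np.pi, 500))  # periodic: low ApEn
    noisy = rng.standard_normal(500)                   # white noise: high ApEn
    print(approximate_entropy(regular), approximate_entropy(noisy))
```

Higher ApEn indicates a more irregular sequence, which is the nuance that paper's data-quality argument relies on relative to raw data-quantity metrics.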
This list is automatically generated from the titles and abstracts of the papers on this site.