Graph-Regularized Tensor Regression: A Domain-Aware Framework for
Interpretable Multi-Way Financial Modelling
- URL: http://arxiv.org/abs/2211.05581v1
- Date: Wed, 26 Oct 2022 13:39:08 GMT
- Title: Graph-Regularized Tensor Regression: A Domain-Aware Framework for
Interpretable Multi-Way Financial Modelling
- Authors: Yao Lei Xu, Kriton Konstantinidis, Danilo P. Mandic
- Abstract summary: We develop a novel Graph-Regularized Tensor Regression (GRTR) framework, whereby knowledge about cross-asset relations is incorporated into the model in the form of a graph Laplacian matrix.
By virtue of tensor algebra, the proposed framework is shown to be fully interpretable, both coefficient-wise and dimension-wise.
The GRTR model is validated in a multi-way financial forecasting setting and is shown to achieve improved performance at reduced computational costs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Analytics of financial data is inherently a Big Data paradigm, as such data
are collected over many assets, asset classes, countries, and time periods.
This represents a challenge for modern machine learning models, as the number
of model parameters needed to process such data grows exponentially with the
data dimensions; an effect known as the Curse-of-Dimensionality. Recently,
Tensor Decomposition (TD) techniques have shown promising results in reducing
the computational costs associated with large-dimensional financial models
while achieving comparable performance. However, tensor models are often unable
to incorporate the underlying economic domain knowledge. To this end, we
develop a novel Graph-Regularized Tensor Regression (GRTR) framework, whereby
knowledge about cross-asset relations is incorporated into the model in the
form of a graph Laplacian matrix. This is then used as a regularization tool to
promote an economically meaningful structure within the model parameters. By
virtue of tensor algebra, the proposed framework is shown to be fully
interpretable, both coefficient-wise and dimension-wise. The GRTR model is
validated in a multi-way financial forecasting setting and compared against
competing models, and is shown to achieve improved performance at reduced
computational costs. Detailed visualizations are provided to help the reader
gain an intuitive understanding of the employed tensor operations.
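The central mechanism can be illustrated in a simplified linear setting (a sketch only, not the paper's full multi-way GRTR model; the asset graph, data, and parameter values below are hypothetical): a graph Laplacian built from cross-asset relations is added as a quadratic penalty, which pulls the coefficients of related assets toward each other.

```python
import numpy as np

# Toy sketch of graph-Laplacian regularization in a flat (non-tensor)
# regression. All data are synthetic; the relation graph is invented.

rng = np.random.default_rng(0)
n_samples, n_assets = 200, 4

# Hypothetical cross-asset relation graph: assets 0-1 and 2-3 are
# related pairs.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian L = D - A

X = rng.standard_normal((n_samples, n_assets))
w_true = np.array([1.0, 1.0, -1.0, -1.0])   # smooth over the graph
y = X @ w_true + 0.1 * rng.standard_normal(n_samples)

lam = 10.0
# Closed-form minimizer of ||y - X w||^2 + lam * w' L w.
# The penalty w' L w = sum over edges (w_i - w_j)^2, so related
# assets' coefficients are encouraged to agree.
w_hat = np.linalg.solve(X.T @ X + lam * L, X.T @ y)
print(np.round(w_hat, 2))
```

Because the true coefficients here are constant within each related pair, they lie in the Laplacian's null space and the penalty introduces no bias; in GRTR the same idea is applied mode-wise to tensor-valued coefficients.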
Related papers
- Knowledge Graph Embeddings: A Comprehensive Survey on Capturing Relation Properties [5.651919225343915]
Knowledge Graph Embedding (KGE) techniques play a pivotal role in transforming symbolic Knowledge Graphs into numerical representations.
This paper addresses the complex mapping properties inherent in relations, such as one-to-one, one-to-many, many-to-one, and many-to-many mappings.
We explore innovative ideas such as integrating multimodal information into KGE, enhancing relation pattern modeling with rules, and developing models to capture relation characteristics in dynamic KGE settings.
arXiv Detail & Related papers (2024-10-16T08:54:52Z)
- Relational Learning in Pre-Trained Models: A Theory from Hypergraph Recovery Perspective [60.64922606733441]
We introduce a mathematical model that formalizes relational learning as hypergraph recovery to study pre-training of Foundation Models (FMs)
In our framework, the world is represented as a hypergraph, with data abstracted as random samples from hyperedges. We theoretically examine the feasibility of a Pre-Trained Model (PTM) to recover this hypergraph and analyze the data efficiency in a minimax near-optimal style.
arXiv Detail & Related papers (2024-06-17T06:20:39Z)
- PanGu-$\pi$: Enhancing Language Model Architectures via Nonlinearity Compensation [97.78045712375047]
We present a new efficient model architecture for large language models (LLMs)
We show that PanGu-$\pi$-7B can achieve performance comparable to that of benchmark models, with about a 10% inference speed-up.
In addition, we have deployed PanGu-$pi$-7B in the high-value domains of finance and law, developing an LLM named YunShan for practical application.
arXiv Detail & Related papers (2023-12-27T11:49:24Z)
- Provable Tensor Completion with Graph Information [49.08648842312456]
We introduce a novel model, theory, and algorithm for solving the dynamic graph regularized tensor completion problem.
We develop a comprehensive model simultaneously capturing the low-rank and similarity structure of the tensor.
In terms of theory, we showcase the alignment between the proposed graph smoothness regularization and a weighted tensor nuclear norm.
arXiv Detail & Related papers (2023-10-04T02:55:10Z)
- Sparse Graphical Linear Dynamical Systems [1.6635799895254402]
Time-series datasets are central in machine learning with applications in numerous fields of science and engineering.
This work proposes a novel approach to bridge the gap by introducing a joint graphical modeling framework.
We present DGLASSO, a new inference method within this framework that implements an efficient block alternating majorization-minimization algorithm.
arXiv Detail & Related papers (2023-07-06T14:10:02Z)
- ProjB: An Improved Bilinear Biased ProjE model for Knowledge Graph Completion [1.5576879053213302]
This work builds on the ProjE KGE model, chosen for its low computational complexity and high potential for improvement.
Experimental results on benchmark Knowledge Graphs (KGs) such as FB15K and WN18 show that the proposed approach outperforms state-of-the-art models in the entity prediction task.
arXiv Detail & Related papers (2022-08-15T18:18:05Z)
- A Probit Tensor Factorization Model For Relational Learning [31.613211987639296]
We propose a binary tensor factorization model with a probit link, which inherits the computational efficiency of the classic tensor factorization model.
Our proposed probit tensor factorization (PTF) model shows advantages in both the prediction accuracy and interpretability.
arXiv Detail & Related papers (2021-11-06T19:23:07Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix's applicability is confirmed via different examples, showing how it can be used in practice to promote RF model interpretability.
arXiv Detail & Related papers (2020-05-08T21:03:48Z)
- Predicting Multidimensional Data via Tensor Learning [0.0]
We develop a model that retains the intrinsic multidimensional structure of the dataset.
To estimate the model parameters, an Alternating Least Squares algorithm is developed.
The proposed model is able to outperform benchmark models present in the forecasting literature.
arXiv Detail & Related papers (2020-02-11T11:57:07Z)
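For intuition, the Alternating Least Squares idea in this last entry can be sketched in a rank-1 bilinear setting (an illustrative assumption, not the cited paper's exact model; all data below are synthetic): with matrix-valued predictors, each coefficient factor is solved in turn by ordinary least squares while the other is held fixed.

```python
import numpy as np

# Sketch: rank-1 bilinear regression y = u' X v fitted by alternating
# least squares, which retains the matrix structure of each predictor.

rng = np.random.default_rng(1)
n, p, q = 300, 5, 4
u_true = rng.standard_normal(p)
v_true = rng.standard_normal(q)

X = rng.standard_normal((n, p, q))
y = np.einsum('i,nij,j->n', u_true, X, v_true) + 0.01 * rng.standard_normal(n)

v = np.ones(q)                          # simple starting point
for _ in range(50):
    # Fix v: y ~ (X v) u is ordinary least squares in u.
    Zu = X @ v                          # shape (n, p)
    u, *_ = np.linalg.lstsq(Zu, y, rcond=None)
    # Fix u: y ~ (u' X) v is ordinary least squares in v.
    Zv = np.einsum('i,nij->nj', u, X)   # shape (n, q)
    v, *_ = np.linalg.lstsq(Zv, y, rcond=None)

# u and v are only identified up to scale, so compare outer products.
W_hat = np.outer(u, v)
W_true = np.outer(u_true, v_true)
print(np.max(np.abs(W_hat - W_true)))
```

Each sub-problem is a small least-squares solve, which is what keeps the parameter count linear (rather than multiplicative) in the mode dimensions.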
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.