Contrastive Learning of Asset Embeddings from Financial Time Series
- URL: http://arxiv.org/abs/2407.18645v1
- Date: Fri, 26 Jul 2024 10:26:44 GMT
- Title: Contrastive Learning of Asset Embeddings from Financial Time Series
- Authors: Rian Dolphin, Barry Smyth, Ruihai Dong
- Abstract summary: We propose a novel contrastive learning framework to generate asset embeddings from financial time series data.
Our approach leverages the similarity of asset returns over many subwindows to generate informative positive and negative samples.
Experiments on real-world datasets demonstrate the effectiveness of the learned asset embeddings on benchmark industry classification and portfolio optimization tasks.
- Score: 8.595725772518332
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Representation learning has emerged as a powerful paradigm for extracting valuable latent features from complex, high-dimensional data. In financial domains, learning informative representations for assets can be used for tasks such as sector classification and risk management. However, the complex and stochastic nature of financial markets poses unique challenges. We propose a novel contrastive learning framework to generate asset embeddings from financial time series data. Our approach leverages the similarity of asset returns over many subwindows to generate informative positive and negative samples, using a statistical sampling strategy based on hypothesis testing to address the noisy nature of financial data. We explore various contrastive loss functions that capture the relationships between assets in different ways to learn a discriminative representation space. Experiments on real-world datasets demonstrate the effectiveness of the learned asset embeddings on benchmark industry classification and portfolio optimization tasks. In each case, our novel approaches significantly outperform existing baselines, highlighting the potential for contrastive learning to capture meaningful and actionable relationships in financial data.
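As a concrete illustration of the sampling idea described in the abstract, the minimal sketch below pairs a correlation-based hypothesis test over random return subwindows with a standard InfoNCE-style contrastive loss. All details here are assumptions for illustration: the Pearson-correlation significance test, the significance level, the window length, and the function names are placeholders, not the authors' exact sampling procedure or loss.

```python
# Minimal sketch: hypothesis-test-based positive/negative sampling over return
# subwindows, plus an InfoNCE-style contrastive loss. The Pearson test, alpha,
# and window length are illustrative assumptions, not the paper's exact setup.
import numpy as np
from scipy import stats
import torch
import torch.nn.functional as F

def sample_pairs(returns, anchor, window=60, alpha=0.05, rng=None):
    """Select positive/negative asset indices for one anchor asset.

    returns: (T, N) array of per-period asset returns; anchor: column index.
    A Pearson-correlation significance test over a random subwindow decides
    whether co-movement is strong enough to count as a positive sample.
    """
    rng = rng or np.random.default_rng()
    T, N = returns.shape
    start = rng.integers(0, T - window)
    sub = returns[start:start + window]            # one subwindow of returns
    positives, negatives = [], []
    for j in range(N):
        if j == anchor:
            continue
        r, p = stats.pearsonr(sub[:, anchor], sub[:, j])
        if p < alpha and r > 0:                    # significant co-movement
            positives.append(j)
        elif p < alpha and r < 0:                  # significant anti-movement
            negatives.append(j)
    return positives, negatives

def info_nce(anchor_emb, pos_emb, neg_embs, tau=0.1):
    """InfoNCE loss for one (anchor, positive) pair against K negatives."""
    pos = F.cosine_similarity(anchor_emb, pos_emb, dim=-1) / tau                 # scalar
    neg = F.cosine_similarity(anchor_emb.unsqueeze(0), neg_embs, dim=-1) / tau   # (K,)
    logits = torch.cat([pos.view(1), neg]).unsqueeze(0)                          # positive at index 0
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))
```

In a training loop, each sampled (anchor, positive, negatives) triplet would be mapped through the embedding model before computing the loss; the paper itself explores several contrastive loss formulations beyond this single-positive variant.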
Related papers
- Uniting contrastive and generative learning for event sequences models [51.547576949425604]
This study investigates the integration of two self-supervised learning techniques - instance-wise contrastive learning and a generative approach based on restoring masked events in latent space.
Experiments conducted on several public datasets, focusing on sequence classification and next-event type prediction, show that the integrated method achieves superior performance compared to individual approaches.
arXiv Detail & Related papers (2024-08-19T13:47:17Z) - Multi-Task Learning for Features Extraction in Financial Annual Reports [6.442186704634672]
We use various multi-task learning methods for financial text classification.
We focus on financial sentiment, objectivity, forward-looking sentence prediction and ESG-content detection.
We extract textual features from annual reports of FTSE350 companies and investigate the link between ESG quantitative scores and these features.
arXiv Detail & Related papers (2024-04-08T08:13:40Z) - CTP: Towards Vision-Language Continual Pretraining via Compatible Momentum Contrast and Topology Preservation [128.00940554196976]
Vision-Language Continual Pretraining (VLCP) has shown impressive results on diverse downstream tasks by offline training on large-scale datasets.
To support the study of VLCP, we first contribute a comprehensive and unified benchmark dataset, P9D.
Treating each industry's data as an independent task supports continual learning and conforms to the real-world long-tail nature of web-scale pretraining data.
arXiv Detail & Related papers (2023-08-14T13:53:18Z) - Leveraging sparse and shared feature activations for disentangled representation learning [112.22699167017471]
We propose to leverage knowledge extracted from a diversified set of supervised tasks to learn a common disentangled representation.
We validate our approach on six real world distribution shift benchmarks, and different data modalities.
arXiv Detail & Related papers (2023-04-17T01:33:24Z) - Multi-Task Self-Supervised Time-Series Representation Learning [3.31490164885582]
Time-series representation learning can extract representations from data with temporal dynamics and sparse labels.
We propose a new time-series representation learning method by combining the advantages of self-supervised tasks.
We evaluate the proposed framework on three downstream tasks: time-series classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2023-03-02T07:44:06Z) - Stock Embeddings: Learning Distributed Representations for Financial Assets [11.67728795230542]
We propose a neural model for training stock embeddings, which harnesses the dynamics of historical returns data.
We describe our approach in detail and discuss a number of ways that it can be used in the financial domain.
arXiv Detail & Related papers (2022-02-14T15:39:06Z) - What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z) - Fair Active Learning: Solving the Labeling Problem in Insurance [2.5470832667329213]
The paper explores various active learning sampling methodologies and evaluates their impact on both synthetic and real insurance datasets.
The proposed approach samples informative and fair instances, achieving a good balance between model predictive performance and fairness.
arXiv Detail & Related papers (2021-12-17T12:07:04Z) - Single-Modal Entropy based Active Learning for Visual Question Answering [75.1682163844354]
We address Active Learning in the multi-modal setting of Visual Question Answering (VQA).
In light of the multi-modal inputs, image and question, we propose a novel method for effective sample acquisition.
Our novel idea is simple to implement, cost-efficient, and readily adaptable to other multi-modal tasks.
arXiv Detail & Related papers (2021-10-21T05:38:45Z) - Rebuilding Trust in Active Learning with Actionable Metrics [77.99796068970569]
Active Learning (AL) is an active domain of research, but is seldom used in industry despite pressing needs.
This is due in part to a misalignment of objectives: research strives to get the best results on selected datasets, whereas industry practitioners need methods they can trust on their own data.
We present various actionable metrics to help rebuild trust of industrial practitioners in Active Learning.
arXiv Detail & Related papers (2020-12-18T09:34:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.