Distilled ChatGPT Topic & Sentiment Modeling with Applications in
Finance
- URL: http://arxiv.org/abs/2403.02185v1
- Date: Mon, 4 Mar 2024 16:27:21 GMT
- Title: Distilled ChatGPT Topic & Sentiment Modeling with Applications in
Finance
- Authors: Olivier Gandouet, Mouloud Belbahri, Armelle Jezequel, Yuriy Bodjov
- Abstract summary: ChatGPT is utilized to create streamlined models that generate easily interpretable features.
We detail a training approach that merges knowledge distillation and transfer learning, resulting in lightweight topic and sentiment classification models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, ChatGPT is utilized to create streamlined models that generate
easily interpretable features. These features are then used to evaluate
financial outcomes from earnings calls. We detail a training approach that
merges knowledge distillation and transfer learning, resulting in lightweight
topic and sentiment classification models without significant loss in accuracy.
These models are assessed on a dataset annotated by experts. The paper
also delves into two practical case studies, highlighting how the generated
features can be effectively utilized in quantitative investing scenarios.
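The training recipe lends itself to a short illustration. Below is a minimal sketch of the distillation step, assuming a DistilBERT student, five topic classes, and teacher soft labels aggregated from ChatGPT annotations; the model choice, label set, and data are illustrative assumptions, not details from the paper.

```python
# Minimal distillation sketch (not the authors' exact pipeline): a small
# student model is fit to soft topic labels produced by a large teacher
# such as ChatGPT. Model, label set, and data below are assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_TOPICS = 5  # assumed number of topic classes
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
student = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=NUM_TOPICS
)
optimizer = torch.optim.AdamW(student.parameters(), lr=2e-5)

# Hypothetical training pair: an earnings-call sentence and the teacher's
# soft distribution over topics (e.g., aggregated ChatGPT annotations).
sentences = ["Revenue guidance for the next quarter was raised by 4%."]
teacher_probs = torch.tensor([[0.05, 0.80, 0.05, 0.05, 0.05]])

batch = tokenizer(sentences, return_tensors="pt", truncation=True, padding=True)
logits = student(**batch).logits

# Distillation objective: KL divergence between student and teacher distributions.
loss = F.kl_div(F.log_softmax(logits, dim=-1), teacher_probs, reduction="batchmean")
optimizer.zero_grad()
loss.backward()
optimizer.step()
```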
Related papers
- Explaining the Unexplained: Revealing Hidden Correlations for Better Interpretability [1.8274323268621635]
Real Explainer (RealExp) is an interpretability method that decouples the Shapley Value into individual feature importance and feature correlation importance.
RealExp enhances interpretability by precisely quantifying both individual feature contributions and their interactions.
arXiv Detail & Related papers (2024-12-02T10:50:50Z)
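RealExp's decomposition itself is not reproduced here, but a brute-force computation of plain Shapley values clarifies the quantity being decoupled; the toy value function below is an assumption.

```python
# Exact Shapley values by enumeration for a toy value function. RealExp
# further splits these values into individual and correlation terms; this
# sketch only computes the base quantity being decoupled.
from itertools import combinations
from math import factorial

def shapley(value, n):
    """Exact Shapley values for players 0..n-1 under set-value function `value`."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy model: features 0 and 1 pay off only jointly; feature 2 acts alone.
v = lambda S: (2.0 if {0, 1} <= S else 0.0) + (1.0 if 2 in S else 0.0)
print(shapley(v, 3))  # [1.0, 1.0, 1.0]: the joint payoff is shared equally
```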
- The Extrapolation Power of Implicit Models [2.3526338188342653]
Implicit models are put to the test across various extrapolation scenarios: out-of-distribution, geographical, and temporal shifts.
Our experiments consistently demonstrate a significant performance advantage for implicit models.
arXiv Detail & Related papers (2024-07-19T16:01:37Z)
- One-Shot Open Affordance Learning with Foundation Models [54.15857111929812]
We introduce One-shot Open Affordance Learning (OOAL), where a model is trained with just one example per base object category.
We propose a vision-language framework with simple and effective designs that boost the alignment between visual features and affordance text embeddings.
Experiments on two affordance segmentation benchmarks show that the proposed method outperforms state-of-the-art models with less than 1% of the full training data.
arXiv Detail & Related papers (2023-11-29T16:23:06Z)
- On the Foundations of Shortcut Learning [20.53986437152018]
We study how predictivity and availability interact to shape models' feature use.
We find that linear models are relatively unbiased, but introducing a single hidden layer with ReLU or Tanh units yields a bias.
arXiv Detail & Related papers (2023-10-24T22:54:05Z)
- Language models are weak learners [71.33837923104808]
We show that prompt-based large language models can operate effectively as weak learners.
We incorporate these models into a boosting approach, which can leverage the knowledge within the model to outperform traditional tree-based boosting.
Results illustrate the potential for prompt-based LLMs to function not just as few-shot learners themselves, but as components of larger machine learning pipelines.
arXiv Detail & Related papers (2023-06-25T02:39:19Z)
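The boosting idea can be sketched generically: below is the standard AdaBoost recipe with a prompted model as the weak learner, not the paper's exact algorithm. `llm_classify` is a stand-in (here a 1-nearest-exemplar vote) for what would be a few-shot LLM call over the sampled support set.

```python
# AdaBoost-style loop with a prompted weak learner. `llm_classify` is a
# placeholder: in the paper's setting it would prompt an LLM with the
# sampled exemplars; here a 1-nearest-exemplar vote keeps the sketch runnable.
import numpy as np

def llm_classify(example, support_set):
    nearest = min(support_set, key=lambda pair: abs(pair[0] - example))
    return nearest[1]

def boost_with_llm(X, y, rounds=5, support_size=8):
    n = len(X)
    w = np.full(n, 1.0 / n)                      # per-example weights
    hypotheses, alphas = [], []
    for _ in range(rounds):
        # Weighted sampling: hard examples are more likely to be shown
        # to the weak learner (i.e., included in the prompt).
        idx = np.random.choice(n, size=support_size, replace=False, p=w)
        support = [(X[i], y[i]) for i in idx]
        h = lambda x, s=support: llm_classify(x, s)
        preds = np.array([h(x) for x in X])
        err = float(np.sum(w * (preds != y)))
        if err >= 0.5:                           # no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w = w * np.exp(-alpha * y * preds)       # upweight mistakes
        w = w / w.sum()
        hypotheses.append(h); alphas.append(alpha)
    return lambda x: np.sign(sum(a * h(x) for a, h in zip(alphas, hypotheses)))

X = np.random.default_rng(0).normal(size=200)
y = np.where(X > 0, 1, -1)
predict = boost_with_llm(X, y)
```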
- Studying How to Efficiently and Effectively Guide Models with Explanations [52.498055901649025]
'Model guidance' is the idea of regularizing the models' explanations to ensure that they are "right for the right reasons".
We conduct an in-depth evaluation across various loss functions, attribution methods, models, and 'guidance depths' on the PASCAL VOC 2007 and MS COCO 2014 datasets.
Specifically, we guide the models via bounding box annotations, which are much cheaper to obtain than the commonly used segmentation masks.
arXiv Detail & Related papers (2023-03-21T15:34:50Z)
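A guidance term of this kind can take many forms; the sketch below is one assumed instance (input-gradient attributions, penalizing attribution mass outside the box), not the paper's specific loss or attribution method.

```python
# One assumed instance of a guidance loss: standard cross-entropy plus a
# penalty on input-gradient attribution mass falling outside the bounding box.
import torch
import torch.nn.functional as F

def guided_loss(model, images, labels, box_masks, lam=1.0):
    """box_masks: (B, H, W) tensors, 1 inside the annotated box, 0 outside."""
    images = images.requires_grad_(True)
    logits = model(images)
    cls_loss = F.cross_entropy(logits, labels)

    # Input-gradient attribution for the target-class scores; create_graph=True
    # lets the penalty itself be backpropagated through the attribution.
    target = logits.gather(1, labels[:, None]).sum()
    attr = torch.autograd.grad(target, images, create_graph=True)[0].abs()
    attr = attr.sum(dim=1)  # collapse channels -> (B, H, W)

    outside = (attr * (1 - box_masks)).sum(dim=(1, 2))
    inside = (attr * box_masks).sum(dim=(1, 2))
    penalty = (outside / (inside + outside + 1e-8)).mean()
    return cls_loss + lam * penalty
```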
- Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
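Stripped of training, the extrapolation step amounts to message passing over the bipartite feature-data graph. The toy below (binary design matrix, random embeddings, mean aggregation) is an assumed caricature of that step, not the authors' trained GNN.

```python
# Toy rendition of feature extrapolation on a feature-data graph: a new,
# unseen feature inherits an embedding by mean-aggregating the embeddings
# of the data points in which it is active. All quantities are assumptions.
import torch

d = 16
feat_emb = torch.randn(10, d)            # embeddings of 10 known features
X = (torch.rand(100, 10) > 0.5).float()  # binary design matrix (data x features)

# Step 1: each data point aggregates its active features' embeddings.
deg = X.sum(dim=1, keepdim=True).clamp(min=1)
data_emb = (X @ feat_emb) / deg

# Step 2: a new feature observed on some data points aggregates back.
new_col = (torch.rand(100) > 0.8).float()
new_feat_emb = (new_col @ data_emb) / new_col.sum().clamp(min=1)
print(new_feat_emb.shape)  # torch.Size([16])
```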
- Realised Volatility Forecasting: Machine Learning via Financial Word Embedding [4.2193475197905705]
This study develops a financial word embedding using 15 years of business news.
As an application, we incorporate this word embedding into a simple machine learning model to enhance the HAR model for forecasting realised volatility.
arXiv Detail & Related papers (2021-08-01T15:43:57Z)
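For context, the HAR baseline being enhanced regresses next-day realised volatility on its daily value and its weekly (5-day) and monthly (22-day) averages. A minimal fit, with an optional slot standing in for the embedding-based news feature, might look like this (synthetic data, assumed details):

```python
# Minimal HAR-RV fit by least squares, with an optional extra regressor as a
# stand-in for the embedding-based news feature. Data here are synthetic.
import numpy as np

def fit_har(rv, news=None):
    """Regress RV_{t+1} on daily RV and its weekly (5d) and monthly (22d) means."""
    t_range = range(21, len(rv) - 1)
    daily = rv[21:-1]
    weekly = np.array([rv[t - 4:t + 1].mean() for t in t_range])
    monthly = np.array([rv[t - 21:t + 1].mean() for t in t_range])
    cols = [np.ones_like(daily), daily, weekly, monthly]
    if news is not None:              # optional news-based regressor
        cols.append(news[21:-1])
    X, y = np.column_stack(cols), rv[22:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rv = np.abs(np.random.default_rng(0).normal(size=500)) * 0.01
print(fit_har(rv))  # intercept and daily/weekly/monthly coefficients
```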
- Layer-wise Analysis of a Self-supervised Speech Representation Model [26.727775920272205]
Self-supervised learning approaches have been successful for pre-training speech representation models.
However, little has been studied about the type or extent of information encoded in the pre-trained representations themselves.
arXiv Detail & Related papers (2021-07-10T02:13:25Z)