Enhancing Financial Sentiment Analysis via Retrieval Augmented Large
Language Models
- URL: http://arxiv.org/abs/2310.04027v2
- Date: Sat, 4 Nov 2023 13:44:46 GMT
- Title: Enhancing Financial Sentiment Analysis via Retrieval Augmented Large
Language Models
- Authors: Boyu Zhang, Hongyang Yang, Tianyu Zhou, Ali Babar, Xiao-Yang Liu
- Abstract summary: Large Language Models (LLMs) pre-trained on extensive corpora have demonstrated superior performance across various NLP tasks.
We introduce a retrieval-augmented LLM framework for financial sentiment analysis.
Our approach achieves a 15% to 48% performance gain in accuracy and F1 score.
- Score: 11.154814189699735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Financial sentiment analysis is critical for valuation and investment
decision-making. Traditional NLP models, however, are limited by their
parameter size and the scope of their training datasets, which hampers their
generalization capabilities and effectiveness in this field. Recently, Large
Language Models (LLMs) pre-trained on extensive corpora have demonstrated
superior performance across various NLP tasks due to their commendable
zero-shot abilities. Yet, directly applying LLMs to financial sentiment
analysis presents challenges: the discrepancy between the pre-training
objective of LLMs and the task of predicting sentiment labels can compromise
their predictive performance. Furthermore, the succinct nature of financial
news, often devoid of sufficient context, can significantly diminish the
reliability of LLMs' sentiment analysis. To address these challenges, we
introduce a retrieval-augmented LLM framework for financial sentiment
analysis. This framework includes an instruction-tuned LLM module, which
ensures the LLM behaves as a predictor of sentiment labels, and a
retrieval-augmentation module, which retrieves additional context from
reliable external sources. Benchmarked against traditional models and LLMs
like ChatGPT and LLaMA, our approach achieves a 15% to 48% performance gain
in accuracy and F1 score.
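The framework pairs two components: a retrieval module that supplements terse financial news with external context, and an instruction-tuned LLM prompted to act strictly as a sentiment-label predictor. The sketch below is a minimal illustration of how such a pipeline could be wired together, not the authors' implementation: the retrieve_context and llm_complete callables are hypothetical stand-ins for a search/retrieval backend and an instruction-tuned model endpoint, and the prompt wording is illustrative.

```python
from typing import Callable, List

# Allowed sentiment labels, checked in a fixed order for deterministic parsing.
LABELS = ("negative", "neutral", "positive")

# Illustrative instruction-style prompt; the paper's exact prompt wording may differ.
PROMPT_TEMPLATE = (
    "Instruction: What is the sentiment of this financial news? "
    "Please choose an answer from {{negative/neutral/positive}}.\n"
    "Context: {context}\n"
    "News: {news}\n"
    "Answer:"
)


def classify_sentiment(
    news: str,
    retrieve_context: Callable[[str, int], List[str]],  # hypothetical retrieval backend
    llm_complete: Callable[[str], str],                  # hypothetical instruction-tuned LLM endpoint
    top_k: int = 3,
) -> str:
    """Retrieval-augmented sentiment prediction for a short piece of financial news."""
    # 1) Retrieve background documents to compensate for the terseness of the news item.
    docs = retrieve_context(news, top_k)
    context = "\n".join(docs) if docs else "N/A"

    # 2) Build an instruction-style prompt so the model behaves as a label predictor.
    prompt = PROMPT_TEMPLATE.format(context=context, news=news)

    # 3) Query the model and normalize its reply to one of the allowed labels.
    reply = llm_complete(prompt).strip().lower()
    for label in LABELS:
        if label in reply:
            return label
    return "neutral"  # conservative fallback when the reply cannot be parsed
```

Keeping the retriever and the model behind plain callables leaves the choice of search backend and LLM open, mirroring the paper's separation between the retrieval-augmentation module and the instruction-tuned predictor.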
Related papers
- A Little Help Goes a Long Way: Efficient LLM Training by Leveraging Small LMs [74.35290684163718]
A primary challenge in large language model (LLM) development is the onerous cost of pre-training.
This paper explores a promising paradigm to improve LLM pre-training efficiency and quality by leveraging a small language model (SLM).
arXiv Detail & Related papers (2024-10-24T14:31:52Z)
- Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge [84.34545223897578]
Despite the excellence of LLM-as-a-Judge in many domains, its potential issues remain under-explored, undermining its reliability and the scope of its utility.
We identify 12 key potential biases and propose CALM, a new automated bias quantification framework that quantifies and analyzes each type of bias in LLM-as-a-Judge.
Our work highlights the need for stakeholders to address these issues and reminds users to exercise caution in LLM-as-a-Judge applications.
arXiv Detail & Related papers (2024-10-03T17:53:30Z)
- Financial Statement Analysis with Large Language Models [0.0]
We provide standardized and anonymous financial statements to GPT-4 and instruct the model to analyze them.
The model outperforms financial analysts in its ability to predict the direction of earnings changes.
Our trading strategies based on GPT-4's predictions yield higher Sharpe ratios and alphas than those based on other models.
arXiv Detail & Related papers (2024-07-25T08:36:58Z)
- Large Language Model Adaptation for Financial Sentiment Analysis [2.0499240875882]
Generalist language models tend to fall short in tasks specifically tailored for finance.
Two foundation models with fewer than 1.5B parameters have been adapted using a wide range of strategies.
We show that small LLMs have comparable performance to larger scale models, while being more efficient in terms of parameters and data.
arXiv Detail & Related papers (2024-01-26T11:04:01Z)
- Benchmarking LLMs via Uncertainty Quantification [91.72588235407379]
The proliferation of open-source Large Language Models (LLMs) has highlighted the urgent need for comprehensive evaluation methods.
We introduce a new benchmarking approach for LLMs that integrates uncertainty quantification.
Our findings reveal that: I) LLMs with higher accuracy may exhibit lower certainty; II) Larger-scale LLMs may display greater uncertainty compared to their smaller counterparts; and III) Instruction-finetuning tends to increase the uncertainty of LLMs.
arXiv Detail & Related papers (2024-01-23T14:29:17Z)
- A Comparative Analysis of Fine-Tuned LLMs and Few-Shot Learning of LLMs for Financial Sentiment Analysis [0.0]
We employ two approaches: in-context learning and fine-tuning LLMs on a finance-domain dataset.
Our results demonstrate that fine-tuned smaller LLMs can achieve comparable performance to state-of-the-art fine-tuned LLMs.
We observe no improvement in finance-domain sentiment analysis performance when the number of shots for in-context learning is increased.
arXiv Detail & Related papers (2023-12-14T08:13:28Z)
- Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity [61.54815512469125]
This survey addresses the crucial issue of factuality in Large Language Models (LLMs).
As LLMs find applications across diverse domains, the reliability and accuracy of their outputs become vital.
arXiv Detail & Related papers (2023-10-11T14:18:03Z)
- Large Language Models in Finance: A Survey [12.243277149505364]
Recent advances in large language models (LLMs) have opened new possibilities for artificial intelligence applications in finance.
arXiv Detail & Related papers (2023-09-28T06:04:04Z)
- Instruct-FinGPT: Financial Sentiment Analysis by Instruction Tuning of General-Purpose Large Language Models [18.212210748797332]
We introduce a simple yet effective instruction tuning approach to address these issues.
In the experiment, our approach outperforms state-of-the-art supervised sentiment analysis models.
arXiv Detail & Related papers (2023-06-22T03:56:38Z)
- Sentiment Analysis in the Era of Large Language Models: A Reality Check [69.97942065617664]
This paper investigates the capabilities of large language models (LLMs) in performing various sentiment analysis tasks.
We evaluate performance across 13 tasks on 26 datasets and compare the results against small language models (SLMs) trained on domain-specific datasets.
arXiv Detail & Related papers (2023-05-24T10:45:25Z)
- Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models [51.3422222472898]
We document the capability of large language models (LLMs) like ChatGPT to predict stock price movements using news headlines.
We develop a theoretical model incorporating information capacity constraints, underreaction, limits-to-arbitrage, and LLMs.
arXiv Detail & Related papers (2023-04-15T19:22:37Z)