Beyond Classification: Financial Reasoning in State-of-the-Art Language Models
- URL: http://arxiv.org/abs/2305.01505v2
- Date: Sun, 25 Jun 2023 18:06:25 GMT
- Title: Beyond Classification: Financial Reasoning in State-of-the-Art Language Models
- Authors: Guijin Son, Hanearl Jung, Moonjeong Hahm, Keonju Na, Sol Jin
- Abstract summary: Large Language Models (LLMs) have demonstrated remarkable ability in complex multi-step reasoning tasks.
This research presents a comprehensive investigation into the potential application of LLMs in the financial domain.
The ability to generate coherent financial reasoning first emerges at 6B parameters, and continues to improve with better instruction-tuning or larger datasets.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large Language Models (LLMs), consisting of 100 billion or more parameters,
have demonstrated remarkable ability in complex multi-step reasoning tasks.
However, the application of such generic advancements has been limited to a few
fields, such as clinical or legal, with the field of financial reasoning
remaining largely unexplored. To the best of our knowledge, the ability of LLMs
to solve financial reasoning problems has not previously been studied, and whether
it can be achieved at any scale remains unknown. To address this knowledge gap,
this research presents a comprehensive investigation into the potential
application of LLMs in the financial domain. The investigation includes a
detailed exploration of a range of subjects, including task formulation,
synthetic data generation, prompting methods, and evaluation capability.
Furthermore, the study benchmarks various GPT variants with parameter scales
ranging from 2.8B to 13B, with and without instruction tuning, on diverse
dataset sizes. By analyzing the results, we reveal that the ability to generate
coherent financial reasoning first emerges at 6B parameters, and continues to
improve with better instruction-tuning or larger datasets. Additionally, the
study provides a publicly accessible dataset named sFIOG (Synthetic-Financial
Investment Opinion Generation), consisting of 11,802 synthetic investment
thesis samples, to support further research in the field of financial
reasoning. Overall, this research seeks to contribute to the understanding of
the efficacy of language models in the field of finance, with a particular
emphasis on their ability to engage in sophisticated reasoning and analysis
within the context of investment decision-making.
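As a rough illustration of the setup described in the abstract, the sketch below prompts a mid-sized open-source causal language model for a short investment thesis. This is a minimal sketch, not the authors' code: the checkpoint name, prompt wording, and decoding settings are placeholder assumptions, and the paper's actual prompting methods, instruction-tuned variants, and sFIOG data format are not reproduced here.

```python
# Minimal sketch (not the authors' code): prompt a ~6B causal LM for a short
# investment thesis, in the spirit of the sFIOG task. The checkpoint id, prompt,
# and decoding settings below are placeholder assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-j-6B"  # placeholder ~6B checkpoint; swap in any 2.8B-13B variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Hypothetical prompt: a few company facts in, a brief investment thesis out.
prompt = (
    "Company: ACME Corp\n"
    "Recent signals: revenue up 12% YoY, gross margin flat, new product launch delayed.\n"
    "Write a brief investment thesis (bullish or bearish) with reasoning:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,   # sample rather than greedy decode to obtain varied theses
    temperature=0.7,
    top_p=0.9,
)
# Print only the newly generated continuation, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Whether such a generation is coherent is the property the paper reports as emerging around the 6B scale; judging that would require the paper's evaluation protocol, which this sketch does not attempt.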
Related papers
- Evaluation of OpenAI o1: Opportunities and Challenges of AGI (arXiv, 2024-09-27)
o1-preview demonstrated remarkable capabilities, often achieving human-level or superior performance.
The model excelled in tasks requiring intricate reasoning and knowledge integration across various fields.
Overall results indicate significant progress towards artificial general intelligence.
- A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges (arXiv, 2024-06-15)
Large language models (LLMs) have unlocked novel opportunities for machine learning applications in the financial domain.
We explore the application of LLMs to various financial tasks, focusing on their potential to transform traditional practices and drive innovation.
We categorize the existing literature into key application areas, including linguistic tasks, sentiment analysis, financial time series, financial reasoning, agent-based modeling, and other applications.
- AlphaFin: Benchmarking Financial Analysis with Retrieval-Augmented Stock-Chain Framework (arXiv, 2024-03-19)
We release the AlphaFin datasets, combining traditional research datasets, real-time financial data, and handwritten chain-of-thought (CoT) data.
We then use the AlphaFin datasets to benchmark a state-of-the-art method, called Stock-Chain, for effectively tackling the financial analysis task.
- FinDABench: Benchmarking Financial Data Analysis Ability of Large Language Models (arXiv, 2024-01-01)
FinDABench is a benchmark designed to evaluate the financial data analysis capabilities of Large Language Models.
FinDABench aims to provide a measure for in-depth analysis of LLM abilities.
- BizBench: A Quantitative Reasoning Benchmark for Business and Finance (arXiv, 2023-11-11)
BizBench is a benchmark for evaluating models' ability to reason about realistic financial problems.
We include three financially-themed code-generation tasks from newly collected and augmented QA data.
These tasks evaluate a model's financial background knowledge, ability to parse financial documents, and capacity to solve problems with code.
- FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets (arXiv, 2023-10-07)
This paper introduces a distinctive approach anchored in the Instruction Tuning paradigm for open-source large language models.
We capitalize on the interoperability of open-source models, ensuring a seamless and transparent integration.
The paper presents a benchmarking scheme designed for end-to-end training and testing, employing a cost-effective progression.
- InvestLM: A Large Language Model for Investment using Financial Domain Instruction Tuning (arXiv, 2023-09-15)
We present a new financial domain large language model, InvestLM, tuned on LLaMA-65B (Touvron et al., 2023).
Inspired by less-is-more-for-alignment, we manually curate a small yet diverse instruction dataset covering a wide range of finance-related topics.
InvestLM shows strong capabilities in understanding financial text and provides helpful responses to investment-related questions.
- SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models (arXiv, 2023-07-20)
We introduce SciBench, an expansive benchmark suite for Large Language Models (LLMs).
SciBench contains a dataset featuring a range of collegiate-level scientific problems from the mathematics, chemistry, and physics domains.
The results reveal that current LLMs fall short of delivering satisfactory performance, with the best overall score of merely 43.22%.
- PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark for Finance (arXiv, 2023-06-08)
PIXIU is a comprehensive framework that includes the first financial large language model (LLM), built by fine-tuning LLaMA with instruction data.
We propose FinMA, obtained by fine-tuning LLaMA on the constructed dataset so that it can follow instructions for various financial tasks.
We conduct a detailed analysis of FinMA and several existing LLMs, uncovering their strengths and weaknesses in handling critical financial tasks.
- FinQA: A Dataset of Numerical Reasoning over Financial Data (arXiv, 2021-09-01)
We focus on answering deep questions over financial data, aiming to automate the analysis of a large corpus of financial documents.
We propose a new large-scale dataset, FinQA, with question-answering pairs over financial reports, written by financial experts.
The results demonstrate that popular, large, pre-trained models fall far short of expert humans in acquiring finance knowledge.
A minimal, hypothetical sketch of this style of numerical QA evaluation appears below.