FinEval: A Chinese Financial Domain Knowledge Evaluation Benchmark for
Large Language Models
- URL: http://arxiv.org/abs/2308.09975v1
- Date: Sat, 19 Aug 2023 10:38:00 GMT
- Title: FinEval: A Chinese Financial Domain Knowledge Evaluation Benchmark for
Large Language Models
- Authors: Liwen Zhang, Weige Cai, Zhaowei Liu, Zhi Yang, Wei Dai, Yujie Liao,
Qianru Qin, Yifei Li, Xingyu Liu, Zhiqiang Liu, Zhoufan Zhu, Anbo Wu, Xin Guo
and Yun Chen
- Abstract summary: FinEval is a benchmark for financial domain knowledge in large language models (LLMs).
FinEval employs a range of prompt types, including zero-shot and few-shot prompts, as well as answer-only and chain-of-thought prompts.
The results show that only GPT-4 achieved an accuracy close to 70% in different prompt settings.
- Score: 25.137098233579255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have demonstrated exceptional performance in
various natural language processing tasks, yet their efficacy in more
challenging and domain-specific tasks remains largely unexplored. This paper
presents FinEval, a benchmark specifically designed to evaluate the financial
domain knowledge of LLMs. FinEval is a collection of high-quality
multiple-choice questions covering four categories: Finance, Economy,
Accounting, and Certificate. It includes
4,661 questions spanning 34 different academic subjects. To ensure a
comprehensive model performance evaluation, FinEval employs a range of prompt
types, including zero-shot and few-shot prompts, as well as answer-only and
chain-of-thought prompts. Evaluating state-of-the-art Chinese and English LLMs
on FinEval, we find that only GPT-4 achieved an accuracy close to 70% across
the different prompt settings, indicating substantial room for improvement in
LLMs' financial domain knowledge. Our work offers a more comprehensive
financial knowledge evaluation benchmark, drawing on mock exam data and
covering a wide range of evaluated LLMs.
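To make the evaluation protocol concrete, the following is a minimal sketch of scoring a model on multiple-choice questions with a zero-shot, answer-only prompt, in the spirit of the setup FinEval describes. The question schema and the query_model callable are illustrative assumptions, not part of the FinEval release.

    # Minimal sketch of a zero-shot, answer-only multiple-choice evaluation.
    # The question schema and query_model are hypothetical stand-ins, not the
    # official FinEval tooling.
    from typing import Callable, Dict, List

    def build_prompt(q: Dict) -> str:
        """Format one multiple-choice question as an answer-only prompt."""
        options = "\n".join(f"{label}. {text}" for label, text in q["options"].items())
        return (
            f"The following is a multiple-choice question about {q['subject']}.\n"
            f"{q['question']}\n{options}\n"
            "Answer with a single letter (A, B, C, or D):"
        )

    def evaluate(questions: List[Dict], query_model: Callable[[str], str]) -> float:
        """Return the model's accuracy over the question set."""
        correct = 0
        for q in questions:
            reply = query_model(build_prompt(q)).strip().upper()
            # Take the first A-D letter in the reply as the predicted choice.
            predicted = next((ch for ch in reply if ch in "ABCD"), None)
            correct += int(predicted == q["answer"])
        return correct / len(questions) if questions else 0.0

    if __name__ == "__main__":
        sample = [{
            "subject": "accounting",
            "question": "Which financial statement reports a company's assets and liabilities?",
            "options": {"A": "Income statement", "B": "Balance sheet",
                        "C": "Cash flow statement", "D": "Audit report"},
            "answer": "B",
        }]
        # A trivial mock model that always answers "B", just to exercise the loop.
        print(evaluate(sample, lambda prompt: "B"))

A few-shot variant would prepend worked example questions with their answers before the target question; a chain-of-thought variant would instead ask the model to reason step by step before giving its final letter.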
Related papers
- Open-FinLLMs: Open Multimodal Large Language Models for Financial Applications [90.67346776473241]
Large language models (LLMs) have advanced financial applications, yet they often lack sufficient financial knowledge and struggle with tasks involving multi-modal inputs like tables and time series data.
We introduce Open-FinLLMs, a series of Financial LLMs that embed comprehensive financial knowledge into text, tables, and time-series data.
We also present FinLLaVA, a multimodal LLM trained with 1.43M image-text instructions to handle complex financial data types.
arXiv Detail & Related papers (2024-08-20T16:15:28Z) - CFinBench: A Comprehensive Chinese Financial Benchmark for Large Language Models [61.324062412648075]
CFinBench is an evaluation benchmark for assessing the financial knowledge of large language models (LLMs) in a Chinese context.
It comprises 99,100 questions spanning 43 second-level categories with 3 question types: single-choice, multiple-choice and judgment.
The results show that GPT-4 and some Chinese-oriented models lead the benchmark, with the highest average accuracy being 60.16%.
arXiv Detail & Related papers (2024-07-02T14:34:36Z) - Financial Knowledge Large Language Model [4.599537455808687]
We introduce IDEA-FinBench, an evaluation benchmark for assessing financial knowledge in large language models (LLMs).
We propose IDEA-FinKER, a framework designed to facilitate the rapid adaptation of general LLMs to the financial domain.
Finally, we present IDEA-FinQA, a financial question-answering system powered by LLMs.
arXiv Detail & Related papers (2024-06-29T08:26:49Z) - FinBen: A Holistic Financial Benchmark for Large Language Models [75.09474986283394]
FinBen is the first extensive open-source evaluation benchmark, including 36 datasets spanning 24 financial tasks.
FinBen offers several key innovations: a broader range of tasks and datasets, the first evaluation of stock trading, novel agent and Retrieval-Augmented Generation (RAG) evaluation, and three novel open-source evaluation datasets for text summarization, question answering, and stock trading.
arXiv Detail & Related papers (2024-02-20T02:16:16Z) - A Survey of Large Language Models in Finance (FinLLMs) [10.195778659105626]
Large Language Models (LLMs) have shown remarkable capabilities across a wide variety of Natural Language Processing (NLP) tasks.
This survey provides a comprehensive overview of FinLLMs, including their history, techniques, performance, and opportunities and challenges.
To support AI research in finance, we compile a collection of accessible datasets and evaluation benchmarks on GitHub.
arXiv Detail & Related papers (2024-02-04T02:06:57Z) - Revolutionizing Finance with LLMs: An Overview of Applications and
Insights [47.11391223936608]
Large Language Models (LLMs) like ChatGPT have seen considerable advancements and have been applied in diverse fields.
These models are being utilized for automating financial report generation, forecasting market trends, analyzing investor sentiment, and offering personalized financial advice.
arXiv Detail & Related papers (2024-01-22T01:06:17Z) - Is ChatGPT a Financial Expert? Evaluating Language Models on Financial
Natural Language Processing [22.754757518792395]
FinLMEval is a framework for Financial Language Model Evaluation.
This study compares the performance of encoder-only and decoder-only language models.
arXiv Detail & Related papers (2023-10-19T11:43:15Z) - CMMLU: Measuring massive multitask language understanding in Chinese [133.70911295934746]
This paper introduces a comprehensive Chinese benchmark that covers various subjects, including the natural sciences, social sciences, engineering, and the humanities.
CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models within the Chinese context.
arXiv Detail & Related papers (2023-06-15T15:49:51Z) - PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark
for Finance [63.51545277822702]
PIXIU is a comprehensive framework including the first financial large language model (LLM) based on fine-tuning LLaMA with instruction data.
We propose FinMA by fine-tuning LLaMA with the constructed dataset to be able to follow instructions for various financial tasks.
We conduct a detailed analysis of FinMA and several existing LLMs, uncovering their strengths and weaknesses in handling critical financial tasks.
arXiv Detail & Related papers (2023-06-08T14:20:29Z)