A Novel DeBERTa-based Model for Financial Question Answering Task
- URL: http://arxiv.org/abs/2207.05875v1
- Date: Tue, 12 Jul 2022 22:34:39 GMT
- Title: A Novel DeBERTa-based Model for Financial Question Answering Task
- Authors: Yanbo J. Wang, Yuming Li, Hui Qin, Yuhang Guan and Sheng Chen
- Abstract summary: This research mainly focuses on the financial numerical reasoning dataset - FinQA.
In the shared task, the objective is to generate the reasoning program and the final answer according to the given financial report.
We obtain an execution accuracy of 68.99 and a program accuracy of 64.53, ranking No. 4 in the 2022 FinQA Challenge.
- Score: 9.083539882647928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a rising star in the field of natural language processing, question
answering systems (Q&A Systems) are widely used in all walks of life. Compared
with other scenarios, the applicationin financial scenario has strong
requirements in the traceability and interpretability of the Q&A systems. In
addition, since the demand for artificial intelligence technology has gradually
shifted from the initial computational intelligence to cognitive intelligence,
this research mainly focuses on the financial numerical reasoning dataset -
FinQA. In the shared task, the objective is to generate the reasoning program
and the final answer according to the given financial report containing text
and tables. We use a method based on the DeBERTa pre-trained language model,
with additional optimizations including multi-model fusion and training-set
combination. We finally obtain an execution accuracy of 68.99 and a program
accuracy of 64.53, ranking No. 4 in the 2022 FinQA Challenge.
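The pipeline described above, generating a reasoning program and then executing it for the final answer, can be sketched roughly as follows. This is an illustrative sketch, not the authors' released code: the helper names, the restricted operator set, and the probability-averaging fusion scheme are assumptions. FinQA-style programs compose arithmetic operations such as divide(a, b), with #k referring to the result of step k.

```python
# Illustrative sketch (not the authors' code): fuse per-step operator
# probabilities from several fine-tuned models, then execute the
# resulting FinQA-style reasoning program.

def fuse_predictions(model_probs):
    """Average per-step operator distributions across models and pick
    the highest-scoring operator at each step (a simple fusion rule)."""
    fused = []
    n_models = len(model_probs)
    for step in zip(*model_probs):  # step = one distribution per model
        avg = {}
        for dist in step:
            for op, p in dist.items():
                avg[op] = avg.get(op, 0.0) + p / n_models
        fused.append(max(avg, key=avg.get))
    return fused

def execute_program(program):
    """Execute a FinQA-style program: a list of (op, arg1, arg2) steps,
    where an argument '#k' refers to the result of step k."""
    ops = {
        "add": lambda a, b: a + b,
        "subtract": lambda a, b: a - b,
        "multiply": lambda a, b: a * b,
        "divide": lambda a, b: a / b,
    }
    results = []
    for op, a, b in program:
        def resolve(x):
            if isinstance(x, str) and x.startswith("#"):
                return results[int(x[1:])]
            return float(x)
        results.append(ops[op](resolve(a), resolve(b)))
    return results[-1]
```

For example, the question "what was the percentage change?" over values 100 and 120 would map to the program [("subtract", "120", "100"), ("divide", "#0", "100")], which executes to 0.2.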
Related papers
- SNFinLLM: Systematic and Nuanced Financial Domain Adaptation of Chinese Large Language Models [6.639972934967109]
Large language models (LLMs) have become powerful tools for advancing natural language processing applications in the financial industry.
We propose a novel large language model specifically designed for the Chinese financial domain, named SNFinLLM.
SNFinLLM excels in domain-specific tasks such as answering questions, summarizing financial research reports, analyzing sentiment, and executing financial calculations.
arXiv Detail & Related papers (2024-08-05T08:24:24Z)
- A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges [60.546677053091685]
Large language models (LLMs) have unlocked novel opportunities for machine learning applications in the financial domain.
We explore the application of LLMs on various financial tasks, focusing on their potential to transform traditional practices and drive innovation.
We highlight this survey for categorizing the existing literature into key application areas, including linguistic tasks, sentiment analysis, financial time series, financial reasoning, agent-based modeling, and other applications.
arXiv Detail & Related papers (2024-06-15T16:11:35Z)
- Case-Based Reasoning Approach for Solving Financial Question Answering [5.10832476049103]
FinQA introduced a numerical reasoning dataset for financial documents.
We propose a novel approach to tackle numerical reasoning problems using case-based reasoning (CBR).
Our model retrieves relevant cases to address a given question, and then generates an answer based on the retrieved cases and contextual information.
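The retrieval step of such a case-based approach can be sketched minimally as follows, assuming a simple bag-of-words cosine similarity over stored case questions; the paper's actual retriever and case representation may differ, and all names here are illustrative.

```python
# Minimal sketch of case retrieval for case-based reasoning (CBR):
# score stored cases against a new question by bag-of-words cosine
# similarity and return the top-k matches. The generation step would
# then condition on these cases plus the report context.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_cases(question, case_base, k=2):
    """Return the k stored cases most similar to the new question."""
    q = Counter(question.lower().split())
    scored = [
        (cosine(q, Counter(c["question"].lower().split())), c)
        for c in case_base
    ]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [c for _, c in scored[:k]]
```

A production retriever would typically use dense embeddings rather than raw word counts, but the interface, question in, ranked cases out, stays the same.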
arXiv Detail & Related papers (2024-05-18T10:06:55Z)
- BizBench: A Quantitative Reasoning Benchmark for Business and Finance [7.4673182865000225]
BizBench is a benchmark for evaluating models' ability to reason about realistic financial problems.
We include three financially-themed code-generation tasks from newly collected and augmented QA data.
These tasks evaluate a model's financial background knowledge, ability to parse financial documents, and capacity to solve problems with code.
arXiv Detail & Related papers (2023-11-11T16:16:11Z)
- FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets [9.714447724811842]
This paper introduces a distinctive approach anchored in the Instruction Tuning paradigm for open-source large language models.
We capitalize on the interoperability of open-source models, ensuring a seamless and transparent integration.
The paper presents a benchmarking scheme designed for end-to-end training and testing, employing a cost-effective progression.
arXiv Detail & Related papers (2023-10-07T12:52:58Z)
- Lila: A Unified Benchmark for Mathematical Reasoning [59.97570380432861]
LILA is a unified mathematical reasoning benchmark consisting of 23 diverse tasks along four dimensions.
We construct our benchmark by extending 20 existing datasets, collecting task instructions and solutions in the form of Python programs.
We introduce BHASKARA, a general-purpose mathematical reasoning model trained on LILA.
arXiv Detail & Related papers (2022-10-31T17:41:26Z)
- ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering [70.6359636116848]
We propose a new large-scale dataset, ConvFinQA, to study the chain of numerical reasoning in conversational question answering.
Our dataset poses great challenges in modeling long-range, complex numerical reasoning paths in real-world conversations.
arXiv Detail & Related papers (2022-10-07T23:48:50Z)
- A Robustly Optimized Long Text to Math Models for Numerical Reasoning On FinQA [2.93888900363581]
The FinQA challenge was organized to strengthen the study of numerical reasoning.
Our approach achieves the 1st place in FinQA challenge, with 71.93% execution accuracy and 67.03% program accuracy.
arXiv Detail & Related papers (2022-06-29T12:10:18Z)
- FinQA: A Dataset of Numerical Reasoning over Financial Data [52.7249610894623]
We focus on answering deep questions over financial data, aiming to automate the analysis of a large corpus of financial documents.
We propose a new large-scale dataset, FinQA, with Question-Answering pairs over Financial reports, written by financial experts.
The results demonstrate that popular, large, pre-trained models fall far short of expert humans in acquiring finance knowledge.
arXiv Detail & Related papers (2021-09-01T00:08:14Z)
- TAT-QA: A Question Answering Benchmark on a Hybrid of Tabular and Textual Content in Finance [71.76018597965378]
We build a new large-scale Question Answering dataset containing both Tabular And Textual data, named TAT-QA.
We propose a novel QA model termed TAGOP, which is capable of reasoning over both tables and text.
arXiv Detail & Related papers (2021-05-17T06:12:06Z)
- Logic-Guided Data Augmentation and Regularization for Consistent Question Answering [55.05667583529711]
This paper addresses the problem of improving the accuracy and consistency of responses to comparison questions.
Our method leverages logical and linguistic knowledge to augment labeled training data and then uses a consistency-based regularizer to train the model.
arXiv Detail & Related papers (2020-04-21T17:03:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.