WHEN FLUE MEETS FLANG: Benchmarks and Large Pre-trained Language Model
for Financial Domain
- URL: http://arxiv.org/abs/2211.00083v1
- Date: Mon, 31 Oct 2022 18:35:18 GMT
- Title: WHEN FLUE MEETS FLANG: Benchmarks and Large Pre-trained Language Model
for Financial Domain
- Authors: Raj Sanjay Shah, Kunal Chawla, Dheeraj Eidnani, Agam Shah, Wendi Du,
Sudheer Chava, Natraj Raman, Charese Smiley, Jiaao Chen, Diyi Yang
- Abstract summary: We propose a novel domain specific Financial LANGuage model (FLANG)
It uses financial keywords and phrases for better masking, together with a span boundary objective and an in-filling objective.
Our models, code, and benchmark data are publicly available on GitHub and Hugging Face.
- Score: 42.093876880881886
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Pre-trained language models have shown impressive performance on a variety of
tasks and domains. Previous research on financial language models usually
employs a generic training scheme to train standard model architectures,
without completely leveraging the richness of the financial data. We propose a
novel domain-specific Financial LANGuage model (FLANG) which uses financial
keywords and phrases for better masking, together with a span boundary objective
and an in-filling objective. Additionally, the evaluation benchmarks in the field
have been limited. To this end, we contribute the Financial Language
Understanding Evaluation (FLUE), an open-source comprehensive suite of
benchmarks for the financial domain. These include new benchmarks across 5 NLP
tasks in the financial domain as well as common benchmarks used in previous
research. Experiments on these benchmarks suggest that our model outperforms
those in prior literature on a variety of NLP tasks. Our models, code, and
benchmark data are publicly available on GitHub and Hugging Face.
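To make the masking idea concrete, here is a minimal sketch of keyword-aware span masking, assuming a hypothetical phrase list and simple whitespace tokenization; FLANG's actual financial vocabulary, tokenizer, and objectives are defined in the paper itself.
```python
import random

# Hypothetical phrase list; FLANG derives its financial vocabulary from
# domain corpora, so treat these entries as stand-ins.
FINANCIAL_PHRASES = [("balance", "sheet"), ("net", "income"), ("hedge",)]
MASK = "[MASK]"

def mask_financial_spans(tokens, base_prob=0.15, span_prob=0.5, rng=random):
    """Mask whole financial phrases as spans, falling back to ordinary
    token-level MLM masking elsewhere. Returns (masked_tokens, labels),
    with labels holding originals at masked positions and None otherwise."""
    masked, labels = list(tokens), [None] * len(tokens)
    i = 0
    while i < len(tokens):
        # Greedily match the longest financial phrase starting at i.
        span = next((len(p) for p in sorted(FINANCIAL_PHRASES, key=len, reverse=True)
                     if tuple(tokens[i:i + len(p)]) == p), None)
        if span and rng.random() < span_prob:   # bias masking toward domain phrases
            for j in range(i, i + span):
                masked[j], labels[j] = MASK, tokens[j]
            i += span
        else:
            if rng.random() < base_prob:        # standard random MLM masking
                masked[i], labels[i] = MASK, tokens[i]
            i += 1
    return masked, labels

tokens = "quarterly net income rose while the balance sheet strengthened".split()
print(mask_financial_spans(tokens)[0])
```
The span boundary and in-filling objectives mentioned in the abstract would then be defined over the masked spans this procedure produces.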
Related papers
- Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context.
We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters.
Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings.
arXiv Detail & Related papers (2024-10-24T17:56:08Z)
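As an illustration of the pairing this benchmark describes, a minimal container for one forecasting instance might look like the sketch below; the field names are hypothetical, not CiK's actual schema.
```python
from dataclasses import dataclass
from typing import List

@dataclass
class ContextualSeries:
    """One forecasting instance: a numeric history plus the textual
    context a forecaster may condition on (hypothetical schema)."""
    history: List[float]   # observed values up to the forecast origin
    horizon: int           # number of future steps to predict
    context: str           # crafted textual information about the series

example = ContextualSeries(
    history=[101.2, 103.5, 99.8],
    horizon=4,
    context="A scheduled maintenance shutdown will halve output next week.",
)
print(example.horizon, example.context)
```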
- SNFinLLM: Systematic and Nuanced Financial Domain Adaptation of Chinese Large Language Models [6.639972934967109]
Large language models (LLMs) have become powerful tools for advancing natural language processing applications in the financial industry.
We propose a novel large language model specifically designed for the Chinese financial domain, named SNFinLLM.
SNFinLLM excels in domain-specific tasks such as answering questions, summarizing financial research reports, analyzing sentiment, and executing financial calculations.
arXiv Detail & Related papers (2024-08-05T08:24:24Z)
- A Large-Scale Evaluation of Speech Foundation Models [110.95827399522204]
We establish the Speech processing Universal PERformance Benchmark (SUPERB) to study the effectiveness of the foundation model paradigm for speech.
We propose a unified multi-tasking framework to address speech processing tasks in SUPERB using a frozen foundation model followed by task-specialized, lightweight prediction heads.
arXiv Detail & Related papers (2024-04-15T00:03:16Z)
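A toy PyTorch sketch of that recipe: freeze the backbone and train only a small prediction head. The stand-in backbone and dimensions here are illustrative assumptions; SUPERB evaluates real pretrained speech models.
```python
import torch
import torch.nn as nn

class FrozenBackboneWithHead(nn.Module):
    """Frozen 'foundation model' plus a lightweight task head, mirroring
    the SUPERB recipe at toy scale."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():          # freeze: only the head trains
            p.requires_grad = False
        self.head = nn.Linear(feat_dim, num_classes)  # task-specialized head

    def forward(self, x):
        with torch.no_grad():                         # no gradients through the backbone
            feats = self.backbone(x)                  # (batch, frames, feat_dim)
        return self.head(feats.mean(dim=1))           # mean-pool over time, classify

# Stand-in backbone; in practice this is a pretrained speech model.
backbone = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 256))
model = FrozenBackboneWithHead(backbone, feat_dim=256, num_classes=10)
print(model(torch.randn(4, 100, 80)).shape)           # torch.Size([4, 10])
```
Because only the head receives gradients, each new task adds a few thousand trainable parameters rather than a full model copy.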
- Large Language Model Adaptation for Financial Sentiment Analysis [2.0499240875882]
Generalist language models tend to fall short in tasks specifically tailored for finance.
Two foundation models with fewer than 1.5B parameters have been adapted using a wide range of strategies.
We show that small LLMs have comparable performance to larger scale models, while being more efficient in terms of parameters and data.
arXiv Detail & Related papers (2024-01-26T11:04:01Z)
- DISC-FinLLM: A Chinese Financial Large Language Model based on Multiple Experts Fine-tuning [74.99318727786337]
We propose a Multiple Experts Fine-tuning Framework to build a financial large language model (LLM).
We build a financial instruction-tuning dataset named DISC-FIN-SFT, including instruction samples in four categories (consulting, NLP tasks, computing, and retrieval-augmented generation).
Evaluations conducted on multiple benchmarks demonstrate that our model performs better than baseline models in various financial scenarios.
arXiv Detail & Related papers (2023-10-23T11:33:41Z)
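One way to picture the multi-expert setup: a separate fine-tuned expert per instruction category, with a router choosing among them. The keyword router and expert stubs below are illustrative assumptions, not DISC-FinLLM's actual implementation.
```python
# Toy sketch of category-routed experts, one per instruction category
# (consulting, NLP tasks, computing, retrieval-augmented generation).
# The routing heuristic here is made up for illustration.
EXPERTS = {
    "computing": lambda q: f"[computing expert] evaluating: {q}",
    "consulting": lambda q: f"[consulting expert] advising on: {q}",
    "nlp": lambda q: f"[NLP expert] analyzing: {q}",
    "retrieval": lambda q: f"[RAG expert] retrieving for: {q}",
}

def route(query: str) -> str:
    """Pick an expert by simple keyword matching (hypothetical)."""
    q = query.lower()
    if any(w in q for w in ("calculate", "ratio", "compute")):
        return "computing"
    if any(w in q for w in ("summarize", "sentiment", "extract")):
        return "nlp"
    if any(w in q for w in ("latest", "news", "report")):
        return "retrieval"
    return "consulting"

query = "Calculate the current ratio from these balance sheet figures."
print(EXPERTS[route(query)](query))
```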
- Is ChatGPT a Financial Expert? Evaluating Language Models on Financial Natural Language Processing [22.754757518792395]
FinLMEval is a framework for Financial Language Model Evaluation.
This study compares the performance of encoder-only and decoder-only language models.
arXiv Detail & Related papers (2023-10-19T11:43:15Z)
- FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets [9.714447724811842]
This paper introduces a distinctive approach anchored in the Instruction Tuning paradigm for open-source large language models.
We capitalize on the interoperability of open-source models, ensuring a seamless and transparent integration.
The paper presents a benchmarking scheme designed for end-to-end training and testing, employing a cost-effective progression.
arXiv Detail & Related papers (2023-10-07T12:52:58Z)
- PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark for Finance [63.51545277822702]
PIXIU is a comprehensive framework including the first financial large language model (LLM) based on fine-tuning LLaMA with instruction data.
We propose FinMA by fine-tuning LLaMA with the constructed dataset to be able to follow instructions for various financial tasks.
We conduct a detailed analysis of FinMA and several existing LLMs, uncovering their strengths and weaknesses in handling critical financial tasks.
arXiv Detail & Related papers (2023-06-08T14:20:29Z)
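In spirit, instruction tuning turns each labeled financial example into an instruction-response pair that the model is fine-tuned to complete. The sample schema and prompt template below are illustrative assumptions, not PIXIU's released format.
```python
# Hypothetical instruction-tuning sample; PIXIU's released data defines its
# own fields and templates, so treat this schema as a stand-in.
sample = {
    "instruction": "Classify the sentiment of the following financial headline "
                   "as positive, negative, or neutral.",
    "input": "Company X beats quarterly earnings estimates.",
    "output": "positive",
}

def to_prompt(ex: dict) -> str:
    """Render one sample into the text the model would be fine-tuned on."""
    return (f"### Instruction:\n{ex['instruction']}\n\n"
            f"### Input:\n{ex['input']}\n\n"
            f"### Response:\n{ex['output']}")

print(to_prompt(sample))
```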
- BloombergGPT: A Large Language Model for Finance [42.73350054822628]
We present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data.
We construct a 363 billion token dataset based on Bloomberg's extensive data sources, augmented with 345 billion tokens from general purpose datasets.
Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins.
arXiv Detail & Related papers (2023-03-30T17:30:36Z)
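To see what those corpus proportions imply, here is a toy sampler that draws a training source in proportion to the token counts given above; real pipelines weight at the sequence or shard level, and this sampling scheme is an assumption, not BloombergGPT's actual data loader.
```python
import random

# Token counts from the abstract: 363B financial + 345B general-purpose.
CORPORA = {"financial": 363e9, "general": 345e9}

def sample_source(rng=random):
    """Pick a corpus with probability proportional to its token count,
    one simple way to realize mixed-dataset training."""
    total = sum(CORPORA.values())
    r = rng.random() * total
    for name, tokens in CORPORA.items():
        r -= tokens
        if r <= 0:
            return name
    return name  # guard against floating-point edge cases

draws = [sample_source() for _ in range(10_000)]
print(draws.count("financial") / len(draws))  # ~363/708, i.e. about 0.51
```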
- Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing [73.37262264915739]
We show that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains.
Our experiments show that domain-specific pretraining serves as a solid foundation for a wide range of biomedical NLP tasks.
arXiv Detail & Related papers (2020-07-31T00:04:15Z)