FinTagging: An LLM-ready Benchmark for Extracting and Structuring Financial Information
- URL: http://arxiv.org/abs/2505.20650v1
- Date: Tue, 27 May 2025 02:55:53 GMT
- Title: FinTagging: An LLM-ready Benchmark for Extracting and Structuring Financial Information
- Authors: Yan Wang, Yang Ren, Lingfei Qian, Xueqing Peng, Keyi Wang, Yi Han, Dongji Feng, Xiao-Yang Liu, Jimin Huang, Qianqian Xie
- Abstract summary: We introduce FinTagging, the first full-scope, table-aware benchmark designed to evaluate the structured information extraction and semantic alignment capabilities of large language models (LLMs). Unlike prior benchmarks that oversimplify tagging as flat multi-class classification and focus solely on narrative text, FinTagging decomposes the tagging problem into two subtasks: FinNI for financial entity extraction and FinCL for taxonomy-driven concept alignment. It requires models to jointly extract facts and align them with the full 10k+ US-GAAP taxonomy across both unstructured text and structured tables, enabling realistic, fine-grained evaluation.
- Score: 18.75906880569719
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce FinTagging, the first full-scope, table-aware XBRL benchmark designed to evaluate the structured information extraction and semantic alignment capabilities of large language models (LLMs) in the context of XBRL-based financial reporting. Unlike prior benchmarks that oversimplify XBRL tagging as flat multi-class classification and focus solely on narrative text, FinTagging decomposes the XBRL tagging problem into two subtasks: FinNI for financial entity extraction and FinCL for taxonomy-driven concept alignment. It requires models to jointly extract facts and align them with the full 10k+ US-GAAP taxonomy across both unstructured text and structured tables, enabling realistic, fine-grained evaluation. We assess a diverse set of LLMs under zero-shot settings, systematically analyzing their performance on both subtasks and overall tagging accuracy. Our results reveal that, while LLMs demonstrate strong generalization in information extraction, they struggle with fine-grained concept alignment, particularly in disambiguating closely related taxonomy entries. These findings highlight the limitations of existing LLMs in fully automating XBRL tagging and underscore the need for improved semantic reasoning and schema-aware modeling to meet the demands of accurate financial disclosure. Code is available at our GitHub repository and data is at our Hugging Face repository.
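The two-subtask decomposition described in the abstract (FinNI extracts financial facts, FinCL aligns each fact to a taxonomy concept) can be pictured as a toy pipeline. Everything below is an illustrative sketch, not the paper's implementation: the regex extractor, the two-entry stand-in for the 10k+ US-GAAP taxonomy, and the function names `fin_ni`/`fin_cl` are all hypothetical.

```python
import re
from dataclasses import dataclass

@dataclass
class FinancialFact:
    """A numeric fact extracted from text or a table (FinNI-style output)."""
    entity: str  # surrounding metric description, lowercased
    value: str   # the numeric value as it appears in the filing

# Toy stand-in for the 10k+ concepts of the US-GAAP taxonomy.
TAXONOMY = {
    "us-gaap:Revenues": {"revenue", "revenues", "net sales"},
    "us-gaap:NetIncomeLoss": {"net income", "net loss"},
}

def fin_ni(sentence: str) -> list:
    """FinNI stand-in: pull (entity, value) pairs out with a naive rule."""
    facts = []
    for m in re.finditer(r"([A-Za-z ]+?) (?:was|of) \$([\d.]+)", sentence):
        facts.append(FinancialFact(entity=m.group(1).strip().lower(),
                                   value=m.group(2)))
    return facts

def fin_cl(fact: FinancialFact):
    """FinCL stand-in: align an extracted fact to a taxonomy concept,
    or return None when no concept matches."""
    for concept, aliases in TAXONOMY.items():
        if fact.entity in aliases:
            return concept
    return None

# End-to-end: extract, then align.
facts = fin_ni("Net income was $3.2 million for the quarter.")
tags = [(f.value, fin_cl(f)) for f in facts]
```

The point of splitting the pipeline this way is that the two failure modes the paper reports become separable: a model can extract the fact correctly (FinNI succeeds) yet still map it to the wrong one of several closely related taxonomy entries (FinCL fails).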
Related papers
- FAITH: A Framework for Assessing Intrinsic Tabular Hallucinations in finance [0.06597195879147556]
Hallucination remains a critical challenge for deploying Large Language Models (LLMs) in finance. We develop a rigorous and scalable framework for evaluating intrinsic hallucinations in financial LLMs. Our work serves as a critical step toward building more trustworthy and reliable financial Generative AI systems.
arXiv Detail & Related papers (2025-08-07T09:37:14Z)
- Representation Learning of Limit Order Book: A Comprehensive Study and Benchmarking [3.94375691568608]
Limit Order Book (LOB) provides a fine-grained view of market dynamics. Existing approaches often tightly couple representation learning with specific downstream tasks in an end-to-end manner. We introduce LOBench, a standardized benchmark with real China A-share market data, offering curated datasets, unified preprocessing, consistent evaluation metrics, and strong baselines.
arXiv Detail & Related papers (2025-05-04T15:00:00Z)
- Latent Factor Models Meets Instructions: Goal-conditioned Latent Factor Discovery without Task Supervision [50.45597801390757]
Instruct-LF is a goal-oriented latent factor discovery system. It integrates instruction-following ability with statistical models to handle noisy datasets.
arXiv Detail & Related papers (2025-02-21T02:03:08Z)
- KG-CF: Knowledge Graph Completion with Context Filtering under the Guidance of Large Language Models [55.39134076436266]
KG-CF is a framework tailored for ranking-based knowledge graph completion tasks. KG-CF leverages LLMs' reasoning abilities to filter out irrelevant contexts, achieving superior results on real-world datasets.
arXiv Detail & Related papers (2025-01-06T01:52:15Z)
- Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More? [54.667202878390526]
Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
We introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning.
Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
arXiv Detail & Related papers (2024-06-19T00:28:58Z)
- Advancing Anomaly Detection: Non-Semantic Financial Data Encoding with LLMs [49.57641083688934]
We introduce a novel approach to anomaly detection in financial data using Large Language Models (LLMs) embeddings.
Our experiments demonstrate that LLMs contribute valuable information to anomaly detection as our models outperform the baselines.
arXiv Detail & Related papers (2024-06-05T20:19:09Z)
- NIFTY Financial News Headlines Dataset [14.622656548420073]
The NIFTY Financial News Headlines dataset is designed to facilitate and advance research in financial market forecasting using large language models (LLMs).
This dataset comprises two distinct versions tailored for different modeling approaches: (i) NIFTY-LM, which targets supervised fine-tuning (SFT) of LLMs with an auto-regressive, causal language-modeling objective, and (ii) NIFTY-RL, formatted specifically for alignment methods (like reinforcement learning from human feedback) to align LLMs via rejection sampling and reward modeling.
arXiv Detail & Related papers (2024-05-16T01:09:33Z)
- Parameter-Efficient Instruction Tuning of Large Language Models For Extreme Financial Numeral Labelling [29.84946857859386]
We study the problem of automatically annotating relevant numerals occurring in the financial documents with their corresponding tags.
We propose a parameter-efficient solution for the task using LoRA.
Our proposed model, FLAN-FinXC, achieves new state-of-the-art performance on both datasets.
arXiv Detail & Related papers (2024-05-03T16:41:36Z)
- SEED-Bench-2: Benchmarking Multimodal Large Language Models [67.28089415198338]
Multimodal large language models (MLLMs) have recently demonstrated exceptional capabilities in generating not only texts but also images given interleaved multimodal inputs.
SEED-Bench-2 comprises 24K multiple-choice questions with accurate human annotations, spanning 27 dimensions.
We evaluate the performance of 23 prominent open-source MLLMs and summarize valuable observations.
arXiv Detail & Related papers (2023-11-28T05:53:55Z)
- Data-Centric Financial Large Language Models [27.464319154543173]
Large language models (LLMs) show promise for natural language tasks but struggle when applied directly to complex domains like finance.
We propose a data-centric approach to enable LLMs to better handle financial tasks.
arXiv Detail & Related papers (2023-10-07T04:53:31Z)
- Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes [54.13559879916708]
EVAPORATE is a prototype system powered by large language models (LLMs). Code synthesis is cheap, but far less accurate than directly processing each document with the LLM. We propose an extended code implementation, EVAPORATE-CODE+, which achieves better quality than direct extraction.
arXiv Detail & Related papers (2023-04-19T06:00:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.