TableQAKit: A Comprehensive and Practical Toolkit for Table-based
Question Answering
- URL: http://arxiv.org/abs/2310.15075v1
- Date: Mon, 23 Oct 2023 16:33:23 GMT
- Title: TableQAKit: A Comprehensive and Practical Toolkit for Table-based
Question Answering
- Authors: Fangyu Lei, Tongxu Luo, Pengqi Yang, Weihao Liu, Hanwen Liu, Jiahe
Lei, Yiming Huang, Yifan Wei, Shizhu He, Jun Zhao, Kang Liu
- Abstract summary: TableQAKit is the first comprehensive toolkit designed specifically for TableQA.
TableQAKit is open-source with an interactive interface that includes visual operations, and comprehensive data for ease of use.
- Score: 23.412691101965414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Table-based question answering (TableQA) is an important task in natural
language processing that requires comprehending tables and employing various
types of reasoning to answer questions. This paper introduces TableQAKit, the
first comprehensive toolkit designed specifically for TableQA. The toolkit
provides a unified platform that includes plentiful TableQA datasets and
integrates popular methods for this task as well as large language models
(LLMs). Users can add their own datasets and methods through a friendly
interface. Notably, using the modules in this toolkit achieves new
state-of-the-art (SOTA) results on some datasets. Finally, TableQAKit also provides an
LLM-based TableQA benchmark for evaluating the role of LLMs in TableQA.
TableQAKit is open-source with an interactive interface that includes visual
operations, and comprehensive data for ease of use.
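To make the task concrete, here is a toy illustration of two kinds of reasoning a TableQA system must perform (cell lookup and aggregation). The table, column names, and helper function are illustrative only and are not part of TableQAKit's API:

```python
# Toy illustration of the TableQA setting: given a table and a natural-language
# question, a system must locate and/or aggregate cells to produce an answer.
# All names and data here are illustrative, not TableQAKit's API.

table = [
    {"country": "France", "capital": "Paris", "population_m": 68},
    {"country": "Japan", "capital": "Tokyo", "population_m": 125},
    {"country": "Brazil", "capital": "Brasilia", "population_m": 216},
]

def answer_lookup(table, key_col, key_val, target_col):
    """A trivial 'reasoning' step: cell lookup by key."""
    for row in table:
        if row[key_col] == key_val:
            return row[target_col]
    return None

# "What is the capital of Japan?" -> lookup reasoning
print(answer_lookup(table, "country", "Japan", "capital"))  # Tokyo

# "Which country has the largest population?" -> aggregation reasoning
print(max(table, key=lambda r: r["population_m"])["country"])  # Brazil
```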
Related papers
- TabPedia: Towards Comprehensive Visual Table Understanding with Concept Synergy [51.23025356179886]
We present a novel large vision-language model, TabPedia, equipped with a concept synergy mechanism.
This unified framework allows TabPedia to seamlessly integrate VTU tasks, such as table detection, table structure recognition, table querying, and table question answering.
We establish a new and comprehensive table VQA benchmark, ComTQA, featuring approximately 9,000 QA pairs.
arXiv Detail & Related papers (2024-06-03T13:54:05Z)
- KET-QA: A Dataset for Knowledge Enhanced Table Question Answering [63.56707527868466]
We propose to use a knowledge base (KB) as the external knowledge source for TableQA.
Every question requires the integration of information from both the table and the sub-graph to be answered.
We design a retriever-reasoner structured pipeline model to extract pertinent information from the vast knowledge sub-graph.
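The retriever-reasoner idea can be sketched as a two-stage function: first score and select the KB triples relevant to the question, then answer from the table row plus the retrieved sub-graph. The scoring and reasoning below are toy stand-ins of my own, not KET-QA's actual model:

```python
# Hedged sketch of a retriever-reasoner pipeline in the spirit of KET-QA:
# retrieve KB triples relevant to the question, then reason over the table
# row plus the retrieved sub-graph. Toy stand-ins, not the paper's model.

kb_triples = [
    ("Paris", "located_in", "France"),
    ("Paris", "founded", "3rd century BC"),
    ("Tokyo", "located_in", "Japan"),
]

def retrieve(question, triples, top_k=2):
    """Toy retriever: score triples by word overlap with the question."""
    q_words = set(question.lower().replace("?", "").split())
    def score(t):
        return len(q_words & {w.lower() for part in t for w in part.split()})
    return sorted(triples, key=score, reverse=True)[:top_k]

def reason(table_row, triples, relation):
    """Toy reasoner: answer from the sub-graph when the table lacks the fact."""
    for head, rel, tail in triples:
        if head == table_row["capital"] and rel == relation:
            return tail
    return None

row = {"country": "France", "capital": "Paris"}
relevant = retrieve("When was Paris founded?", kb_triples)
print(reason(row, relevant, "founded"))  # 3rd century BC
```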
arXiv Detail & Related papers (2024-05-13T18:26:32Z)
- TableVQA-Bench: A Visual Question Answering Benchmark on Multiple Table Domains [4.828743805126944]
This paper establishes a benchmark for table visual question answering, referred to as the TableVQA-Bench.
It is important to note that existing datasets have not incorporated images or QA pairs, which are two crucial components of TableVQA.
arXiv Detail & Related papers (2024-04-30T02:05:18Z)
- Large Language Model for Table Processing: A Survey [9.144614058716083]
Large Language Models (LLMs) offer significant public benefits, garnering interest from academia and industry.
Tables, typically two-dimensional structures that store large amounts of data, are essential in daily activities such as database queries, spreadsheet calculations, and generating reports from web tables.
This survey provides an extensive overview of table tasks, encompassing not only the traditional areas like table question answering (Table QA) and fact verification, but also newly emphasized aspects such as table manipulation and advanced table data analysis.
arXiv Detail & Related papers (2024-02-04T00:47:53Z)
- Augment before You Try: Knowledge-Enhanced Table Question Answering via Table Expansion [57.53174887650989]
Table question answering is a popular task that assesses a model's ability to understand and interact with structured data.
Existing methods convert both the table and external knowledge into text, which neglects the structured nature of the table.
We propose a simple yet effective method to integrate external information in a given table.
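The table-expansion idea, loosely sketched: rather than flattening table and knowledge into text, join external attributes into the table as new columns so its structure is preserved. The lookup source and column names below are illustrative, not the paper's method:

```python
# Hedged sketch of "augment before you try": extend the table itself with
# extra columns drawn from an external source, keeping it structured.
# The external dict and column names are illustrative.

table = [
    {"city": "Paris", "visitors_m": 44},
    {"city": "Tokyo", "visitors_m": 15},
]

external = {"Paris": {"country": "France"}, "Tokyo": {"country": "Japan"}}

def expand(table, external, key_col):
    """Join external attributes into each row as new columns."""
    out = []
    for row in table:
        merged = dict(row)
        merged.update(external.get(row[key_col], {}))
        out.append(merged)
    return out

expanded = expand(table, external, "city")
print(expanded[0])  # {'city': 'Paris', 'visitors_m': 44, 'country': 'France'}
```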
arXiv Detail & Related papers (2024-01-28T03:37:11Z)
- TAP4LLM: Table Provider on Sampling, Augmenting, and Packing Semi-structured Data for Large Language Model Reasoning [58.11442663694328]
We propose TAP4LLM as a versatile pre-processing toolbox to generate table prompts.
In each module, we collect and design several common methods for usage in various scenarios.
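The sample-then-pack flow can be sketched as follows: select the rows most relevant to the question, then serialize them into a prompt string for an LLM. Function names and the scoring heuristic are illustrative, not TAP4LLM's real API:

```python
# Hedged sketch of the sampling and packing stages TAP4LLM describes.
# The word-overlap sampler and prompt layout are toy stand-ins.

def sample_rows(table, question, k=2):
    """Toy sampler: keep the rows sharing the most words with the question."""
    q = set(question.lower().split())
    def score(row):
        return len(q & {str(v).lower() for v in row.values()})
    return sorted(table, key=score, reverse=True)[:k]

def pack(table, question):
    """Serialize the sampled rows as a compact table prompt."""
    rows = sample_rows(table, question)
    header = " | ".join(rows[0].keys())
    body = "\n".join(" | ".join(str(v) for v in r.values()) for r in rows)
    return f"{header}\n{body}\n\nQuestion: {question}\nAnswer:"

table = [
    {"player": "alice", "goals": 3},
    {"player": "bob", "goals": 5},
    {"player": "carol", "goals": 2},
]
print(pack(table, "How many goals did bob score?"))
```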
arXiv Detail & Related papers (2023-12-14T15:37:04Z)
- RobuT: A Systematic Study of Table QA Robustness Against Human-Annotated Adversarial Perturbations [13.900589860309488]
RobuT builds upon existing Table QA datasets (WTQ, WikiSQL-Weak, and SQA).
Our results indicate that both state-of-the-art Table QA models and large language models (e.g., GPT-3) with few-shot learning falter in these adversarial sets.
We propose to address this problem by using large language models to generate adversarial examples to enhance training.
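A minimal sketch of the kind of content-preserving perturbation at issue: surface forms change (header names, row order) while the table's content does not, so a robust QA model should give the same answer. The synonym map below is a toy stand-in for LLM- or human-generated paraphrases; nothing here is RobuT's actual pipeline:

```python
# Hedged sketch of a semantics-preserving table perturbation: rename headers
# and reorder rows without changing any cell content. The synonym map is a
# toy stand-in for LLM-generated paraphrases.

import random

def perturb(table, header_synonyms, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    renamed = [
        {header_synonyms.get(k, k): v for k, v in row.items()}
        for row in table
    ]
    rng.shuffle(renamed)  # reorder rows; content is unchanged
    return renamed

table = [{"nation": "France", "pop": 68}, {"nation": "Japan", "pop": 125}]
adv = perturb(table, {"nation": "country", "pop": "population"})
print(adv)
```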
arXiv Detail & Related papers (2023-06-25T19:23:21Z)
- MultiTabQA: Generating Tabular Answers for Multi-Table Question Answering [61.48881995121938]
Real-world queries are complex in nature, often spanning multiple tables in a relational database or web page.
Our model, MultiTabQA, not only answers questions over multiple tables, but also generalizes to generate tabular answers.
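The setting can be illustrated with a toy join: the question requires combining two tables, and the answer is itself a table rather than a single cell. The join logic below is a plain stand-in of my own, not the MultiTabQA model:

```python
# Hedged sketch of the multi-table QA setting: answering requires a join
# across tables, and the answer is tabular. Toy data and logic only.

orders = [
    {"order_id": 1, "customer_id": 10, "total": 30},
    {"order_id": 2, "customer_id": 11, "total": 75},
]
customers = [
    {"customer_id": 10, "name": "alice"},
    {"customer_id": 11, "name": "bob"},
]

def tabular_answer(orders, customers, min_total):
    """'Which customers spent more than X?' -> the answer is itself a table."""
    names = {c["customer_id"]: c["name"] for c in customers}
    return [
        {"name": names[o["customer_id"]], "total": o["total"]}
        for o in orders
        if o["total"] > min_total
    ]

print(tabular_answer(orders, customers, 50))  # [{'name': 'bob', 'total': 75}]
```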
arXiv Detail & Related papers (2023-05-22T08:25:15Z)
- OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering [106.73213656603453]
We develop a simple table-based QA model with minimal annotation effort.
We propose an omnivorous pretraining approach that consumes both natural and synthetic data.
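The synthetic side of such pretraining can be sketched as deriving (question, answer) pairs mechanically from a table via templates, supplementing scarce natural annotations. The template below is illustrative, not the paper's actual generator:

```python
# Hedged sketch of template-based synthetic QA-pair generation for table
# pretraining. The template and column names are illustrative.

table = [
    {"film": "Alien", "year": 1979},
    {"film": "Arrival", "year": 2016},
]

def synthesize_pairs(table, key_col, val_col):
    """Mechanically derive (question, answer) pairs from table cells."""
    pairs = []
    for row in table:
        q = f"What is the {val_col} of {row[key_col]}?"
        pairs.append((q, str(row[val_col])))
    return pairs

for q, a in synthesize_pairs(table, "film", "year"):
    print(q, "->", a)
```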
arXiv Detail & Related papers (2022-07-08T01:23:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences.