MultiTabQA: Generating Tabular Answers for Multi-Table Question Answering
- URL: http://arxiv.org/abs/2305.12820v2
- Date: Wed, 24 May 2023 17:13:47 GMT
- Title: MultiTabQA: Generating Tabular Answers for Multi-Table Question Answering
- Authors: Vaishali Pal, Andrew Yates, Evangelos Kanoulas, Maarten de Rijke
- Abstract summary: Real-world queries are complex in nature, often over multiple tables in a relational database or web page.
Our model, MultiTabQA, not only answers questions over multiple tables, but also generalizes to generate tabular answers.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent advances in tabular question answering (QA) with large language models
are constrained in their coverage and only answer questions over a single
table. However, real-world queries are complex in nature, often over multiple
tables in a relational database or web page. Single table questions do not
involve common table operations such as set operations, Cartesian products
(joins), or nested queries. Furthermore, multi-table operations often result in
a tabular output, which necessitates table generation capabilities of tabular
QA models. To fill this gap, we propose a new task of answering questions over
multiple tables. Our model, MultiTabQA, not only answers questions over
multiple tables, but also generalizes to generate tabular answers. To enable
effective training, we build a pre-training dataset comprising 132,645 SQL
queries and tabular answers. Further, we evaluate the generated tables by
introducing table-specific metrics of varying strictness that assess the table
structure at different levels of granularity. MultiTabQA outperforms
state-of-the-art single-table QA models adapted to a multi-table QA setting by
finetuning on three datasets: Spider, ATIS, and GeoQuery.
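The abstract's idea of table-match metrics at varying strictness can be sketched as follows. The function names and the two strictness levels below are illustrative assumptions for exposition, not the paper's exact metric definitions:

```python
# Hypothetical sketch of table-match metrics at two strictness levels
# (assumed formulation, not MultiTabQA's actual metrics).
# A table is a dict with a "header" tuple and a list of row tuples.

def exact_table_match(pred, gold):
    """Strictest level: header, row order, and every cell must match."""
    return pred == gold

def row_set_match(pred, gold):
    """Looser level: same multiset of rows, order ignored (header ignored)."""
    return sorted(pred["rows"]) == sorted(gold["rows"])

gold = {"header": ("name", "count"), "rows": [("a", "1"), ("b", "2")]}
pred = {"header": ("name", "count"), "rows": [("b", "2"), ("a", "1")]}

print(exact_table_match(pred, gold))  # False: row order differs
print(row_set_match(pred, gold))      # True: same rows as a set
```

Metrics like these differ only in how much of the table structure (cell values, row order, header) they require the generated table to reproduce, which is the "varying strictness" the abstract refers to.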
Related papers
- KET-QA: A Dataset for Knowledge Enhanced Table Question Answering
We propose to use a knowledge base (KB) as the external knowledge source for TableQA.
Answering each question requires integrating information from both the table and the knowledge sub-graph.
We design a retriever-reasoner structured pipeline model to extract pertinent information from the vast knowledge sub-graph.
arXiv Detail & Related papers (2024-05-13T18:26:32Z) - Is Table Retrieval a Solved Problem? Exploring Join-Aware Multi-Table Retrieval
We introduce a method that uncovers useful join relations for any query and database during table retrieval.
Our method outperforms the state-of-the-art approaches for table retrieval by up to 9.3% in F1 score and for end-to-end QA by up to 5.4% in accuracy.
arXiv Detail & Related papers (2024-04-15T15:55:01Z) - Augment before You Try: Knowledge-Enhanced Table Question Answering via Table Expansion
Table question answering is a popular task that assesses a model's ability to understand and interact with structured data.
Existing methods convert both the table and external knowledge into text, which neglects the structured nature of the table.
We propose a simple yet effective method to integrate external information in a given table.
arXiv Detail & Related papers (2024-01-28T03:37:11Z) - TabIQA: Table Questions Answering on Business Document Images
This paper introduces a novel pipeline, named TabIQA, to answer questions about business document images.
TabIQA combines state-of-the-art deep learning techniques 1) to extract table content and structural information from images and 2) to answer various questions related to numerical data, text-based information, and complex queries from structured tables.
arXiv Detail & Related papers (2023-03-27T06:31:21Z) - ReasTAP: Injecting Table Reasoning Skills During Pre-training via Synthetic Reasoning Examples
We develop ReasTAP to show that high-level table reasoning skills can be injected into models during pre-training without a complex table-specific architecture design.
ReasTAP achieves new state-of-the-art performance on all benchmarks and delivers significant improvements in low-resource settings.
arXiv Detail & Related papers (2022-10-22T07:04:02Z) - OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering
We develop a simple table-based QA model with minimal annotation effort.
We propose an omnivorous pretraining approach that consumes both natural and synthetic data.
arXiv Detail & Related papers (2022-07-08T01:23:45Z) - Table Retrieval May Not Necessitate Table-specific Model Design
We focus on the task of table retrieval, and ask: "is table-specific model design necessary for table retrieval?"
Based on an analysis on a table-based portion of the Natural Questions dataset (NQ-table), we find that structure plays a negligible role in more than 70% of the cases.
We then experiment with three modules to explicitly encode table structures, namely auxiliary row/column embeddings, hard attention masks, and soft relation-based attention biases.
None of these yielded significant improvements, suggesting that table-specific model design may not be necessary for table retrieval.
arXiv Detail & Related papers (2022-05-19T20:35:23Z) - AIT-QA: Question Answering Dataset over Complex Tables in the Airline Industry
We introduce the domain-specific Table QA dataset AIT-QA (Airline Industry Table QA).
The dataset consists of 515 questions authored by human annotators on 116 tables extracted from public U.S. SEC filings.
We also provide annotations pertaining to the nature of questions, marking those that require hierarchical headers, domain-specific terminology, and paraphrased forms.
arXiv Detail & Related papers (2021-06-24T12:14:18Z) - CLTR: An End-to-End, Transformer-Based System for Cell Level Table Retrieval and Table Question Answering
We present the first end-to-end, transformer-based table question answering (QA) system.
It takes natural language questions and a massive table corpus as inputs, retrieves the most relevant tables, and locates the correct table cells to answer the question.
We introduce two new open-domain benchmarks, E2E_WTQ and E2E_GNQ, consisting of 2,005 natural language questions over 76,242 tables.
arXiv Detail & Related papers (2021-06-08T15:22:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.