AIT-QA: Question Answering Dataset over Complex Tables in the Airline
Industry
- URL: http://arxiv.org/abs/2106.12944v1
- Date: Thu, 24 Jun 2021 12:14:18 GMT
- Title: AIT-QA: Question Answering Dataset over Complex Tables in the Airline
Industry
- Authors: Yannis Katsis, Saneem Chemmengath, Vishwajeet Kumar, Samarth
Bharadwaj, Mustafa Canim, Michael Glass, Alfio Gliozzo, Feifei Pan, Jaydeep
Sen, Karthik Sankaranarayanan, Soumen Chakrabarti
- Abstract summary: We introduce the domain-specific Table QA dataset AIT-QA (Airline Industry Table QA).
The dataset consists of 515 questions authored by human annotators on 116 tables extracted from public U.S. SEC filings.
We also provide annotations pertaining to the nature of questions, marking those that require hierarchical headers, domain-specific terminology, and paraphrased forms.
- Score: 30.330772077451048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in transformers have enabled Table Question Answering (Table
QA) systems to achieve high accuracy and SOTA results on open domain datasets
like WikiTableQuestions and WikiSQL. Such transformers are frequently
pre-trained on open-domain content such as Wikipedia, enabling them to
effectively encode questions and the corresponding Wikipedia tables found in
these Table QA datasets. However, web tables in Wikipedia are notably flat in
their layout, with the first row as the sole column header. This layout lends
itself to a relational view of tables, where each row is a tuple. In contrast,
tables in domain-specific business or scientific documents often have a much
more complex layout, including hierarchical row and column headers, in
addition to specialized vocabulary terms from that domain.
To address this problem, we introduce the domain-specific Table QA dataset
AIT-QA (Airline Industry Table QA). The dataset consists of 515 questions
authored by human annotators on 116 tables extracted from public U.S. SEC
filings (publicly available at: https://www.sec.gov/edgar.shtml) of major
airline companies for the fiscal years 2017-2019. We also provide annotations
pertaining to the nature of questions, marking those that require hierarchical
headers, domain-specific terminology, and paraphrased forms. Our zero-shot
baseline evaluation of three transformer-based SOTA Table QA methods - TaPAS
(end-to-end), TaBERT (semantic parsing-based), and RCI (row-column
encoding-based) - clearly exposes the limitations of these methods in this
practical setting, with the best accuracy at just 51.8% (RCI). We also present
pragmatic table preprocessing steps used to pivot and project these complex
tables into a layout suitable for the SOTA Table QA models.
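The pivot-and-project preprocessing the abstract mentions can be illustrated with a short sketch (illustrative only, not the paper's actual code): levels of hierarchical row and column headers are concatenated into single flat headers, so each data row becomes a plain tuple that flat-table QA models such as TaPAS or RCI can consume. The separator and the helper function here are assumptions for illustration.

```python
# Sketch: flatten a table with hierarchical row/column headers into a
# single-header-row layout. Each hierarchical header path is joined into
# one flat label, e.g. ("2018", "Domestic") -> "2018 - Domestic".

def flatten_table(col_headers, row_headers, cells, sep=" - "):
    """col_headers: list of header rows (top level first), one entry per data column.
       row_headers: list of header columns (outer level first), one entry per data row.
       cells: 2D list of data values."""
    flat_cols = [sep.join(levels) for levels in zip(*col_headers)]
    flat_rows = [sep.join(levels) for levels in zip(*row_headers)]
    header = ["Row"] + flat_cols
    body = [[r] + list(vals) for r, vals in zip(flat_rows, cells)]
    return [header] + body

# Hypothetical airline-style table: two column-header levels (year, region)
# and two row-header levels (metric, sub-metric).
cols = [["2018", "2018", "2019", "2019"],
        ["Domestic", "International", "Domestic", "International"]]
rows = [["Revenue", "Revenue"],
        ["Passenger", "Cargo"]]
cells = [[100, 40, 110, 45],
         [10, 5, 12, 6]]

flat = flatten_table(cols, rows, cells)
# flat[0] is the single flat header row:
# ["Row", "2018 - Domestic", "2018 - International",
#  "2019 - Domestic", "2019 - International"]
```

After flattening, each row is self-describing ("Revenue - Passenger", 100, 40, ...), which is the relational view the open-domain models were pre-trained on.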
Related papers
- KET-QA: A Dataset for Knowledge Enhanced Table Question Answering [63.56707527868466]
We propose to use a knowledge base (KB) as the external knowledge source for TableQA.
Every question requires the integration of information from both the table and the sub-graph to be answered.
We design a retriever-reasoner structured pipeline model to extract pertinent information from the vast knowledge sub-graph.
arXiv Detail & Related papers (2024-05-13T18:26:32Z)
- Augment before You Try: Knowledge-Enhanced Table Question Answering via
Table Expansion [57.53174887650989]
Table question answering is a popular task that assesses a model's ability to understand and interact with structured data.
Existing methods convert both the table and external knowledge into text, which neglects the structured nature of the table.
We propose a simple yet effective method to integrate external information in a given table.
arXiv Detail & Related papers (2024-01-28T03:37:11Z)
- MultiTabQA: Generating Tabular Answers for Multi-Table Question
Answering [61.48881995121938]
Real-world queries are complex in nature, often spanning multiple tables in a relational database or web page.
Our model, MultiTabQA, not only answers questions over multiple tables, but also generalizes to generate tabular answers.
arXiv Detail & Related papers (2023-05-22T08:25:15Z)
- OmniTab: Pretraining with Natural and Synthetic Data for Few-shot
Table-based Question Answering [106.73213656603453]
We develop a simple table-based QA model with minimal annotation effort.
We propose an omnivorous pretraining approach that consumes both natural and synthetic data.
arXiv Detail & Related papers (2022-07-08T01:23:45Z)
- Table Retrieval May Not Necessitate Table-specific Model Design [83.27735758203089]
We focus on the task of table retrieval, and ask: "is table-specific model design necessary for table retrieval?"
Based on an analysis on a table-based portion of the Natural Questions dataset (NQ-table), we find that structure plays a negligible role in more than 70% of the cases.
We then experiment with three modules to explicitly encode table structures, namely auxiliary row/column embeddings, hard attention masks, and soft relation-based attention biases.
None of these yielded significant improvements, suggesting that table-specific model design may not be necessary for table retrieval.
arXiv Detail & Related papers (2022-05-19T20:35:23Z)
- Topic Transferable Table Question Answering [33.54533181098762]
Weakly-supervised table question answering (TableQA) models have achieved state-of-the-art performance by using a pre-trained BERT transformer to jointly encode a question and a table and produce a structured query for the question.
In practical settings TableQA systems are deployed over table corpora having topic and word distributions quite distinct from BERT's pretraining corpus.
We propose T3QA (Topic Transferable Table Question Answering) as a pragmatic adaptation framework for TableQA.
arXiv Detail & Related papers (2021-09-15T15:34:39Z)
- HiTab: A Hierarchical Table Dataset for Question Answering and Natural
Language Generation [35.73434495391091]
Hierarchical tables challenge existing methods by hierarchical indexing, as well as implicit relationships of calculation and semantics.
This work presents HiTab, a free and open dataset for the research community to study question answering (QA) and natural language generation (NLG) over hierarchical tables.
arXiv Detail & Related papers (2021-08-15T10:14:21Z)
- CLTR: An End-to-End, Transformer-Based System for Cell Level Table
Retrieval and Table Question Answering [8.389189333083513]
We present the first end-to-end, transformer-based table question answering (QA) system.
It takes natural language questions and a massive table corpus as inputs to retrieve the most relevant tables and locate the correct table cells to answer the question.
We introduce two new open-domain benchmarks, E2E_WTQ and E2E_GNQ, consisting of 2,005 natural language questions over 76,242 tables.
arXiv Detail & Related papers (2021-06-08T15:22:10Z)
- Capturing Row and Column Semantics in Transformer Based Question
Answering over Tables [9.347393642549806]
We show that one can achieve superior performance on table QA task without using any of these specialized pre-training techniques.
Experiments on recent benchmarks show that the proposed methods can effectively locate cell values on tables (up to 98% Hit@1 accuracy on Wiki lookup questions).
arXiv Detail & Related papers (2021-04-16T18:22:30Z)
- A Graph Representation of Semi-structured Data for Web Question
Answering [96.46484690047491]
We propose a novel graph representation of Web tables and lists based on a systematic categorization of the components in semi-structured data as well as their relations.
Our method improves F1 score by 3.90 points over the state-of-the-art baselines.
arXiv Detail & Related papers (2020-10-14T04:01:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.