Capturing Row and Column Semantics in Transformer Based Question
Answering over Tables
- URL: http://arxiv.org/abs/2104.08303v1
- Date: Fri, 16 Apr 2021 18:22:30 GMT
- Title: Capturing Row and Column Semantics in Transformer Based Question
Answering over Tables
- Authors: Michael Glass, Mustafa Canim, Alfio Gliozzo, Saneem Chemmengath,
Rishav Chakravarti, Avi Sil, Feifei Pan, Samarth Bharadwaj, Nicolas Rodolfo
Fauceglia
- Abstract summary: We show that one can achieve superior performance on the table QA task without using any of these specialized pre-training techniques.
Experiments on recent benchmarks show that the proposed methods can effectively locate cell values in tables (up to ~98% Hit@1 accuracy on WikiSQL lookup questions).
- Score: 9.347393642549806
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Transformer-based architectures have recently been used for the task of
answering questions over tables. To improve accuracy on this task, specialized
pre-training techniques have been developed and applied to millions of
open-domain web tables. In this paper, we propose two novel approaches
demonstrating that one can achieve superior performance on the table QA task
without using any of these specialized pre-training techniques. The first
model, called RCI interaction, leverages a transformer-based architecture that
independently classifies rows and columns to identify relevant cells. While
this model yields extremely high accuracy at finding cell values on recent
benchmarks, the second model we propose, called RCI representation, provides a
significant efficiency advantage for online QA systems over tables by
materializing embeddings for existing tables. Experiments on recent benchmarks
show that the proposed methods can effectively locate cell values in tables (up
to ~98% Hit@1 accuracy on WikiSQL lookup questions). The interaction model also
outperforms state-of-the-art transformer-based approaches pre-trained on very
large table corpora (TAPAS and TaBERT), achieving ~3.4% and ~18.86% additional
precision improvement on the standard WikiSQL benchmark.
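The row-and-column decomposition is straightforward to prototype. Below is a minimal sketch of the RCI interaction idea using a generic Hugging Face sequence-pair classifier; the serialization format, the untrained bert-base-uncased checkpoint, and the helper names are illustrative assumptions rather than the paper's exact implementation.

```python
# Sketch of RCI interaction: score each row and each column independently
# against the question; the answer cell is where the best-scoring row and
# best-scoring column intersect.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # untrained stand-in for the fine-tuned classifiers
model.eval()

def relevance(question: str, sequence: str) -> float:
    """Probability that a serialized row or column contains the answer."""
    inputs = tokenizer(question, sequence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.softmax(-1)[0, 1].item()

def answer_cell(question, header, rows):
    # Serialize rows as "header : value" pairs and columns as value lists.
    row_texts = [" ; ".join(f"{h} : {v}" for h, v in zip(header, row))
                 for row in rows]
    col_texts = [f"{h} : " + " | ".join(row[j] for row in rows)
                 for j, h in enumerate(header)]
    best_row = max(range(len(rows)), key=lambda i: relevance(question, row_texts[i]))
    best_col = max(range(len(header)), key=lambda j: relevance(question, col_texts[j]))
    return rows[best_row][best_col]
```

The RCI representation variant amortizes this per-query cost by pre-computing and caching embeddings of the row and column texts, so only the question needs to be encoded at query time.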
Related papers
- KET-QA: A Dataset for Knowledge Enhanced Table Question Answering [63.56707527868466]
We propose to use a knowledge base (KB) as the external knowledge source for TableQA.
Answering each question requires integrating information from both the table and the KB sub-graph.
We design a retriever-reasoner structured pipeline model to extract pertinent information from the vast knowledge sub-graph.
arXiv Detail & Related papers (2024-05-13T18:26:32Z)
- Retrieval-Based Transformer for Table Augmentation [14.460363647772745]
We introduce a novel approach toward automatic data wrangling.
We aim to address table augmentation tasks, including row/column population and data imputation.
Our model consistently and substantially outperforms both supervised statistical methods and the current state-of-the-art transformer-based models.
arXiv Detail & Related papers (2023-06-20T18:51:21Z)
- TRUST: An Accurate and End-to-End Table structure Recognizer Using Splitting-based Transformers [56.56591337457137]
We propose an accurate and end-to-end transformer-based table structure recognition method, referred to as TRUST.
Transformers are suitable for table structure recognition because of their global computations, perfect memory, and parallel computation.
We conduct experiments on several popular benchmarks, including PubTabNet and SynthTable, where our method achieves new state-of-the-art results.
arXiv Detail & Related papers (2022-08-31T08:33:36Z)
- OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering [106.73213656603453]
We develop a simple table-based QA model with minimal annotation effort.
We propose an omnivorous pretraining approach that consumes both natural and synthetic data.
arXiv Detail & Related papers (2022-07-08T01:23:45Z)
- Table Retrieval May Not Necessitate Table-specific Model Design [83.27735758203089]
We focus on the task of table retrieval, and ask: "is table-specific model design necessary for table retrieval?"
Based on an analysis of the table-based portion of the Natural Questions dataset (NQ-table), we find that structure plays a negligible role in more than 70% of the cases.
We then experiment with three modules to explicitly encode table structures, namely auxiliary row/column embeddings, hard attention masks, and soft relation-based attention biases.
None of these yielded significant improvements, suggesting that table-specific model design may not be necessary for table retrieval.
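Of the three modules, the auxiliary row/column embeddings are the simplest to illustrate. The sketch below, with assumed dimensions and index conventions, adds learned row- and column-index embeddings to each token embedding so the encoder can tell which tokens share a row or a column.

```python
import torch
import torch.nn as nn

class TableTokenEmbedding(nn.Module):
    """Token embeddings plus learned row/column index embeddings."""
    def __init__(self, vocab_size=30522, dim=768, max_rows=256, max_cols=64):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)
        self.row = nn.Embedding(max_rows, dim)  # index 0 = token not in the table
        self.col = nn.Embedding(max_cols, dim)

    def forward(self, token_ids, row_ids, col_ids):
        # Summing the three embeddings exposes table structure to attention.
        return self.tok(token_ids) + self.row(row_ids) + self.col(col_ids)

emb = TableTokenEmbedding()
token_ids = torch.randint(0, 30522, (1, 8))
row_ids = torch.tensor([[0, 0, 1, 1, 2, 2, 0, 0]])  # question tokens get row 0
col_ids = torch.tensor([[0, 0, 1, 2, 1, 2, 0, 0]])
print(emb(token_ids, row_ids, col_ids).shape)  # torch.Size([1, 8, 768])
```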
arXiv Detail & Related papers (2022-05-19T20:35:23Z)
- Parameter-Efficient Abstractive Question Answering over Tables or Text [60.86457030988444]
A long-term ambition of information seeking QA systems is to reason over multi-modal contexts and generate natural answers to user queries.
Memory-intensive pre-trained language models are adapted to downstream tasks such as QA by fine-tuning the model on QA data in a specific modality, such as unstructured text or structured tables.
To avoid training such memory-hungry models while utilizing a uniform architecture for each modality, parameter-efficient adapters add and train small task-specific bottleneck layers between transformer layers.
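A bottleneck adapter is only a few lines of code. Here is a minimal sketch, assuming the usual down-project / nonlinearity / up-project design with a residual connection; the sizes are illustrative.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.GELU()

    def forward(self, x):
        # The residual keeps the block near-identity at initialization;
        # only the small down/up projections receive task-specific training.
        return x + self.up(self.act(self.down(x)))

adapter = Adapter()
x = torch.randn(1, 10, 768)  # (batch, seq_len, hidden)
print(adapter(x).shape)      # torch.Size([1, 10, 768])
```

Because only the adapter parameters are updated, one frozen backbone can serve several modalities (text QA, table QA, ...) with a small per-task overhead.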
arXiv Detail & Related papers (2022-04-07T10:56:29Z)
- End-to-End Table Question Answering via Retrieval-Augmented Generation [19.89730342792824]
We introduce T-RAG, an end-to-end Table QA model in which a non-parametric dense vector index is fine-tuned jointly with BART, a parametric sequence-to-sequence model, to generate answer tokens.
Given any natural language question, T-RAG utilizes a unified pipeline to automatically search through a table corpus to directly locate the correct answer from the table cells.
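The retrieve-then-generate flow can be sketched as follows; the mean-pooled stand-in embedding, the generic facebook/bart-base checkpoint, and the input formatting are assumptions for illustration, not the jointly fine-tuned T-RAG components.

```python
# Sketch of a T-RAG-style pipeline: dense retrieval over a table corpus,
# then seq2seq generation conditioned on the question plus retrieved table.
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tok = BartTokenizer.from_pretrained("facebook/bart-base")
bart = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def embed(text):
    # Stand-in dense encoder: mean-pooled BART encoder states.
    # (T-RAG trains the index encoder jointly with the generator.)
    ids = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        enc = bart.model.encoder(**ids).last_hidden_state
    return enc.mean(dim=1).squeeze(0)

def answer(question, table_texts):
    # 1) Retrieval: pick the table whose embedding best matches the question.
    table_embs = torch.stack([embed(t) for t in table_texts])
    best = torch.matmul(table_embs, embed(question)).argmax().item()
    # 2) Generation: decode answer tokens from question + retrieved table.
    inputs = tok(question + " </s> " + table_texts[best],
                 return_tensors="pt", truncation=True)
    out = bart.generate(**inputs, max_new_tokens=16)
    return tok.decode(out[0], skip_special_tokens=True)
```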
arXiv Detail & Related papers (2022-03-30T23:30:16Z)
- CLTR: An End-to-End, Transformer-Based System for Cell Level Table Retrieval and Table Question Answering [8.389189333083513]
We present the first end-to-end, transformer-based table question answering (QA) system.
It takes natural language questions and a massive table corpus as input, retrieves the most relevant tables, and locates the correct table cells to answer the question.
We introduce two new open-domain benchmarks, E2E_WTQ and E2E_GNQ, consisting of 2,005 natural language questions over 76,242 tables.
arXiv Detail & Related papers (2021-06-08T15:22:10Z)
- Understanding tables with intermediate pre-training [11.96734018295146]
We adapt TAPAS, a table-based BERT model, to recognize entailment.
We evaluate table pruning techniques as a pre-processing step to drastically improve the training and prediction efficiency.
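As a flavor of what such pruning can look like, here is a simple lexical-overlap heuristic that drops columns unrelated to the question before the table is serialized; the criterion is an assumption for illustration, not the paper's exact technique.

```python
# Keep a column if its header or any of its values shares a token with
# the question; otherwise drop it to shrink the model's input.
def prune_columns(question, header, rows):
    q_tokens = set(question.lower().split())
    keep = [j for j, h in enumerate(header)
            if set(h.lower().split()) & q_tokens
            or any(set(str(row[j]).lower().split()) & q_tokens for row in rows)]
    keep = keep or list(range(len(header)))  # never prune the whole table
    return [header[j] for j in keep], [[row[j] for j in keep] for row in rows]

header = ["Player", "Team", "Goals"]
rows = [["Messi", "PSG", "30"], ["Salah", "Liverpool", "31"]]
print(prune_columns("How many goals did Salah score?", header, rows))
# (['Player', 'Goals'], [['Messi', '30'], ['Salah', '31']])
```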
arXiv Detail & Related papers (2020-10-01T17:43:27Z)
- AutoRC: Improving BERT Based Relation Classification Models via Architecture Search [50.349407334562045]
BERT-based relation classification (RC) models have achieved significant improvements over traditional deep learning models.
However, no consensus has been reached on the optimal architecture.
We design a comprehensive search space for BERT-based RC models and employ a neural architecture search (NAS) method to automatically discover the design choices.
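To make the idea concrete, here is a toy search over a small design space; random search stands in for the NAS strategy, and the design dimensions and evaluator are hypothetical examples of the kind of choices such a search explores.

```python
import random

# Hypothetical design dimensions for a BERT-based RC model.
SEARCH_SPACE = {
    "entity_marker": ["none", "markers", "markers+types"],
    "pooling": ["cls", "entity_start", "mean"],
    "top_layers_used": [1, 2, 4],
    "classifier_head": ["linear", "mlp"],
}

def sample_architecture():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def search(evaluate, trials=20):
    """Return the best configuration found; `evaluate` wraps a full
    train-and-validate run and returns a dev-set score."""
    best, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture()
        score = evaluate(arch)
        if score > best_score:
            best, best_score = arch, score
    return best, best_score

# Stub evaluator for illustration only:
print(search(lambda arch: random.random()))
```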
arXiv Detail & Related papers (2020-09-22T16:55:49Z)