TURL: Table Understanding through Representation Learning
- URL: http://arxiv.org/abs/2006.14806v2
- Date: Thu, 3 Dec 2020 02:47:41 GMT
- Title: TURL: Table Understanding through Representation Learning
- Authors: Xiang Deng, Huan Sun, Alyssa Lees, You Wu, Cong Yu
- Abstract summary: TURL is a novel framework that introduces the pre-training/finetuning paradigm to relational Web tables.
During pre-training, our framework learns deep contextualized representations on relational tables in an unsupervised manner.
We show that TURL generalizes well to all tasks and substantially outperforms existing methods in almost all instances.
- Score: 29.6016859927782
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Relational tables on the Web store a vast amount of knowledge. Owing to the
wealth of such tables, there has been tremendous progress on a variety of tasks
in the area of table understanding. However, existing work generally relies on
heavily-engineered task-specific features and model architectures. In this
paper, we present TURL, a novel framework that introduces the
pre-training/fine-tuning paradigm to relational Web tables. During
pre-training, our framework learns deep contextualized representations on
relational tables in an unsupervised manner. Its universal model design with
pre-trained representations can be applied to a wide range of tasks with
minimal task-specific fine-tuning. Specifically, we propose a structure-aware
Transformer encoder to model the row-column structure of relational tables, and
present a new Masked Entity Recovery (MER) objective for pre-training to
capture the semantics and knowledge in large-scale unlabeled data. We
systematically evaluate TURL with a benchmark consisting of 6 different tasks
for table understanding (e.g., relation extraction, cell filling). We show that
TURL generalizes well to all tasks and substantially outperforms existing
methods in almost all instances.
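
The abstract describes two core technical ideas: a structure-aware Transformer encoder that restricts attention along the table's row-column structure, and a Masked Entity Recovery (MER) objective that hides entity cells during pre-training. Below is a minimal sketch of both ideas in spirit only; it is not the authors' implementation, and the flattened row-major cell layout, the single metadata token, the masking rate, and all function names are illustrative assumptions.

```python
# Illustrative sketch only (see assumptions above); not code from the TURL paper.
import random
import numpy as np

def visibility_mask(n_rows: int, n_cols: int, n_meta: int = 1) -> np.ndarray:
    """Structure-aware attention: metadata tokens (e.g., the caption) see everything,
    while a cell attends only to cells in its own row or column."""
    n_cells = n_rows * n_cols
    size = n_meta + n_cells
    mask = np.zeros((size, size), dtype=bool)
    mask[:n_meta, :] = True                      # metadata attends to all positions
    mask[:, :n_meta] = True                      # all positions attend to metadata
    for i in range(n_cells):
        r_i, c_i = divmod(i, n_cols)             # row-major flattening (assumption)
        for j in range(n_cells):
            r_j, c_j = divmod(j, n_cols)
            if r_i == r_j or c_i == c_j:         # shared row or shared column
                mask[n_meta + i, n_meta + j] = True
    return mask

def mask_entities(entity_ids, mask_id=0, p=0.6):
    """MER-style corruption: replace a fraction of entity cells with a mask id
    and keep the original ids as recovery targets (-100 = ignored position)."""
    corrupted, targets = [], []
    for eid in entity_ids:
        if random.random() < p:                  # masking rate is illustrative
            corrupted.append(mask_id)
            targets.append(eid)
        else:
            corrupted.append(eid)
            targets.append(-100)
    return corrupted, targets

# Example: a 3x2 table with one caption token yields a 7x7 attention mask,
# and four entity cells are corrupted for the recovery objective.
attn_mask = visibility_mask(n_rows=3, n_cols=2)
inputs, labels = mask_entities([101, 205, 333, 408])
```

In a Transformer encoder, such a boolean mask would typically be applied inside self-attention so that disallowed positions receive a large negative value before the softmax, and the recovery targets would feed a cross-entropy loss over the entity vocabulary.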
Related papers
- TabPedia: Towards Comprehensive Visual Table Understanding with Concept Synergy [81.76462101465354]
We present a novel large vision-language model, TabPedia, equipped with a concept synergy mechanism.
This unified framework allows TabPedia to seamlessly integrate VTU tasks, such as table detection, table structure recognition, table querying, and table question answering.
To better evaluate the VTU task in real-world scenarios, we establish a new and comprehensive table VQA benchmark, ComTQA.
arXiv Detail & Related papers (2024-06-03T13:54:05Z) - TableLlama: Towards Open Large Generalist Models for Tables [22.56558262472516]
This paper makes the first step towards developing open-source large language models (LLMs) as generalists for a diversity of table-based tasks.
We construct TableInstruct, a new dataset with a variety of realistic tables and tasks, for instruction tuning and evaluating LLMs.
We further develop the first open-source generalist model for tables, TableLlama, by fine-tuning Llama 2 (7B) with LongLoRA to address the long context challenge.
arXiv Detail & Related papers (2023-11-15T18:47:52Z) - MultiTabQA: Generating Tabular Answers for Multi-Table Question Answering [61.48881995121938]
Real-world queries are complex in nature, often over multiple tables in a relational database or web page.
Our model, MultiTabQA, not only answers questions over multiple tables, but also generalizes to generate tabular answers.
arXiv Detail & Related papers (2023-05-22T08:25:15Z) - ReasTAP: Injecting Table Reasoning Skills During Pre-training via Synthetic Reasoning Examples [15.212332890570869]
We develop ReasTAP to show that high-level table reasoning skills can be injected into models during pre-training without a complex table-specific architecture design.
ReasTAP achieves new state-of-the-art performance on all benchmarks and delivers a significant improvement in the low-resource setting.
arXiv Detail & Related papers (2022-10-22T07:04:02Z) - OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering [106.73213656603453]
We develop a simple table-based QA model with minimal annotation effort.
We propose an omnivorous pretraining approach that consumes both natural and synthetic data.
arXiv Detail & Related papers (2022-07-08T01:23:45Z) - Table Retrieval May Not Necessitate Table-specific Model Design [83.27735758203089]
We focus on the task of table retrieval, and ask: "is table-specific model design necessary for table retrieval?"
Based on an analysis of the table-based portion of the Natural Questions dataset (NQ-table), we find that structure plays a negligible role in more than 70% of the cases.
We then experiment with three modules to explicitly encode table structures, namely auxiliary row/column embeddings, hard attention masks, and soft relation-based attention biases.
None of these yielded significant improvements, suggesting that table-specific model design may not be necessary for table retrieval.
arXiv Detail & Related papers (2022-05-19T20:35:23Z) - Table Pre-training: A Survey on Model Architectures, Pretraining Objectives, and Downstream Tasks [37.35651138851127]
A flurry of table pre-training frameworks have been proposed following the success of text and images.
Table pre-training usually takes the form of table-text joint pre-training.
This survey aims to provide a comprehensive review of different model designs, pre-training objectives, and downstream tasks for table pre-training.
arXiv Detail & Related papers (2022-01-24T15:22:24Z) - Retrieving Complex Tables with Multi-Granular Graph Representation Learning [20.72341939868327]
The task of natural language table retrieval seeks to retrieve semantically relevant tables based on natural language queries.
Existing learning systems treat tables as plain text based on the assumption that tables are structured as dataframes.
We propose Graph-based Table Retrieval (GTR), a generalizable NLTR framework with multi-granular graph representation learning.
arXiv Detail & Related papers (2021-05-04T20:19:03Z) - TUTA: Tree-based Transformers for Generally Structured Table Pre-training [47.181660558590515]
Recent attempts at table understanding mainly focus on relational tables, yet overlook other common table structures.
We propose TUTA, a unified pre-training architecture for understanding generally structured tables.
TUTA is highly effective, achieving state-of-the-art on five widely-studied datasets.
arXiv Detail & Related papers (2020-10-21T13:22:31Z) - A Graph Representation of Semi-structured Data for Web Question Answering [96.46484690047491]
We propose a novel graph representation of Web tables and lists based on a systematic categorization of the components in semi-structured data as well as their relations.
Our method improves F1 score by 3.90 points over the state-of-the-art baselines.
arXiv Detail & Related papers (2020-10-14T04:01:54Z)