Relation Extraction from Tables using Artificially Generated Metadata
- URL: http://arxiv.org/abs/2108.10750v1
- Date: Tue, 24 Aug 2021 14:06:17 GMT
- Title: Relation Extraction from Tables using Artificially Generated Metadata
- Authors: Gaurav Singh, Siffi Singh, Joshua Wong, Amir Saffari
- Abstract summary: We propose methods to artificially create some of this metadata for synthetic tables.
This leads to an improvement of 9%-45% in F1 score, in absolute terms, over 2 datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Relation Extraction (RE) from tables is the task of identifying relations
between pairs of columns. Generally, RE models for this task require labelled
tables for training. Luckily, labelled tables can also be generated
artificially from a Knowledge Graph (KG), which makes the cost to acquire them
much lower in comparison to manual annotations. However, these tables have one
drawback compared to real tables, which is that they lack associated metadata,
such as column-headers, captions, etc. This is because synthetic tables are
created out of KGs that do not store such metadata. Unfortunately, metadata can
provide strong signals for RE from tables. To address this issue, we propose
methods to artificially create some of this metadata for synthetic tables. We
then experiment with an RE model that uses the artificial metadata as input. Our
empirical results show that this leads to an absolute improvement of 9%-45% in
F1 score over two tabular datasets.
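As a rough sketch of the idea described above (not the authors' actual model — the toy KG, the entity types, and the `synthesize_header` helper are assumptions made for illustration), one way to create artificial metadata is to derive a column header from the KG types of the entities appearing in that column:

```python
from collections import Counter

# Toy KG mapping entities to types — an illustrative stand-in for a real KG.
KG_TYPES = {
    "Paris": "city", "Berlin": "city", "Tokyo": "city",
    "France": "country", "Germany": "country", "Japan": "country",
}

def synthesize_header(column_cells):
    """Create an artificial column header from the KG types of the cells,
    mimicking the idea of generating metadata for synthetic tables."""
    types = [KG_TYPES[c] for c in column_cells if c in KG_TYPES]
    if not types:
        return "unknown"
    # Use the most frequent entity type as a stand-in header.
    return Counter(types).most_common(1)[0][0]

# A synthetic table whose rows were sampled from the KG.
table = [["Paris", "France"], ["Berlin", "Germany"], ["Tokyo", "Japan"]]
columns = list(zip(*table))
headers = [synthesize_header(col) for col in columns]
print(headers)  # ['city', 'country']
```

An RE model could then take such generated headers alongside the column pairs as an additional input signal.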
Related papers
- Relational Deep Learning: Graph Representation Learning on Relational Databases [69.7008152388055]
We introduce an end-to-end representation learning approach for data laid out across multiple tables.
Message Passing Graph Neural Networks can then automatically learn across the graph to extract representations that leverage all data input.
arXiv Detail & Related papers (2023-12-07T18:51:41Z)
- REaLTabFormer: Generating Realistic Relational and Tabular Data using Transformers [0.0]
We introduce REaLTabFormer (Realistic and Tabular Transformer), a synthetic data generation model.
It first creates a parent table using an autoregressive GPT-2 model, then generates the relational dataset conditioned on the parent table using a sequence-to-sequence model.
Experiments using real-world datasets show that REaLTabFormer captures the relational structure better than baseline models.
arXiv Detail & Related papers (2023-02-04T00:32:50Z)
- ReasTAP: Injecting Table Reasoning Skills During Pre-training via Synthetic Reasoning Examples [15.212332890570869]
We develop ReasTAP to show that high-level table reasoning skills can be injected into models during pre-training without a complex table-specific architecture design.
ReasTAP achieves new state-of-the-art performance on all benchmarks and delivers significant improvements in low-resource settings.
arXiv Detail & Related papers (2022-10-22T07:04:02Z) - OmniTab: Pretraining with Natural and Synthetic Data for Few-shot
Table-based Question Answering [106.73213656603453]
We develop a simple table-based QA model with minimal annotation effort.
We propose an omnivorous pretraining approach that consumes both natural and synthetic data.
arXiv Detail & Related papers (2022-07-08T01:23:45Z)
- Table Retrieval May Not Necessitate Table-specific Model Design [83.27735758203089]
We focus on the task of table retrieval, and ask: "is table-specific model design necessary for table retrieval?"
Based on an analysis of a table-based portion of the Natural Questions dataset (NQ-table), we find that structure plays a negligible role in more than 70% of the cases.
We then experiment with three modules to explicitly encode table structures, namely auxiliary row/column embeddings, hard attention masks, and soft relation-based attention biases.
None of these yielded significant improvements, suggesting that table-specific model design may not be necessary for table retrieval.
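For concreteness, a "hard attention mask" of the kind listed above can be sketched as a boolean matrix that lets each cell attend only to cells in its own row or column. This is an illustrative reconstruction, not the paper's implementation:

```python
def structure_attention_mask(n_rows, n_cols):
    """Build a hard attention mask over table cells (flattened row-major):
    cell i may attend to cell j only if they share a row or a column."""
    n = n_rows * n_cols
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            same_row = i // n_cols == j // n_cols
            same_col = i % n_cols == j % n_cols
            mask[i][j] = same_row or same_col
    return mask

m = structure_attention_mask(2, 2)
# Cell 0 (row 0, col 0) attends to cell 1 (same row) and cell 2 (same column),
# but not to cell 3 (different row and column).
print(m[0])  # [True, True, True, False]
```

In a transformer, such a mask would be applied to the attention logits before the softmax, zeroing out attention across structurally unrelated cells.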
arXiv Detail & Related papers (2022-05-19T20:35:23Z)
- SpreadsheetCoder: Formula Prediction from Semi-structured Context [70.41579328458116]
We propose a BERT-based model architecture to represent the tabular context in both row-based and column-based formats.
We train our model on a large dataset of spreadsheets, and demonstrate that SpreadsheetCoder achieves top-1 prediction accuracy of 42.51%.
Compared to the rule-based system, SpreadsheetCoder assists 82% more users in composing formulas on Google Sheets.
arXiv Detail & Related papers (2021-06-26T11:26:27Z)
- TGRNet: A Table Graph Reconstruction Network for Table Structure Recognition [76.06530816349763]
We propose an end-to-end trainable table graph reconstruction network (TGRNet) for table structure recognition.
Specifically, the proposed method has two main branches, a cell detection branch and a cell logical location branch, to jointly predict the spatial location and the logical location of different cells.
arXiv Detail & Related papers (2021-06-20T01:57:05Z)
- GitTables: A Large-Scale Corpus of Relational Tables [3.1218214157681277]
We introduce GitTables, a corpus of 1M relational tables extracted from GitHub.
Analyses of GitTables show that its structure, content, and topical coverage differ significantly from existing table corpora.
We present three applications of GitTables, demonstrating its value for learned semantic type detection models, completion methods, and benchmarks for table-to-KG matching, data search, and preparation.
arXiv Detail & Related papers (2021-06-14T09:22:09Z)
- Retrieving Complex Tables with Multi-Granular Graph Representation Learning [20.72341939868327]
The task of natural language table retrieval seeks to retrieve semantically relevant tables based on natural language queries.
Existing learning systems treat tables as plain text based on the assumption that tables are structured as dataframes.
We propose Graph-based Table Retrieval (GTR), a generalizable NLTR framework with multi-granular graph representation learning.
arXiv Detail & Related papers (2021-05-04T20:19:03Z)
- TabEAno: Table to Knowledge Graph Entity Annotation [7.451544182579802]
We propose a novel approach, namely TabEAno, to semantically annotate table rows toward knowledge graph entities.
We introduce a "two-cells" lookup strategy based on the assumption that an existing logical relation in the knowledge graph holds between two adjacent cells in the same row of the table.
Despite the simplicity of the approach, TabEAno outperforms state-of-the-art approaches on the two standard datasets.
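The "two-cells" lookup can be illustrated with a toy knowledge graph of (subject, relation, object) triples; the KG contents and the helper name below are assumptions made for this sketch, not TabEAno's code:

```python
# Toy KG of (subject, relation, object) triples — illustrative only.
KG = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
}

def two_cell_lookup(cell_a, cell_b):
    """Sketch of a 'two-cells' lookup: annotate a row by checking whether
    the KG holds a relation between entities matching two adjacent cells."""
    for s, r, o in KG:
        if s == cell_a and o == cell_b:
            return (s, r, o)
    return None

print(two_cell_lookup("Paris", "France"))  # ('Paris', 'capital_of', 'France')
```

A hit anchors both cells to KG entities at once, which is the advantage over annotating each cell in isolation.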
arXiv Detail & Related papers (2020-10-05T07:39:02Z)
- GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing [117.98107557103877]
We present GraPPa, an effective pre-training approach for table semantic parsing.
We construct synthetic question-SQL pairs over high-quality tables via a synchronous context-free grammar.
To maintain the model's ability to represent real-world data, we also include masked language modeling.
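A synchronous context-free grammar pairs each expansion on the natural-language side with an aligned expansion on the SQL side, so every derivation yields a matched question-SQL pair. The miniature grammar below is a hypothetical illustration, far simpler than the grammar GraPPa induces from text-to-SQL data:

```python
# A tiny synchronous grammar: each rule expands into an aligned
# (question template, SQL template) pair. Illustrative only.
RULES = {
    "Q": [("what is the {agg} {col} ?", "SELECT {agg_sql}({col}) FROM t")],
    "agg": [("maximum", "MAX"), ("minimum", "MIN")],
}

def generate(col, agg_idx=0):
    """Derive one aligned question-SQL pair for a given column name."""
    q_tmpl, sql_tmpl = RULES["Q"][0]
    agg_q, agg_sql = RULES["agg"][agg_idx]
    question = q_tmpl.format(agg=agg_q, col=col)
    sql = sql_tmpl.format(agg_sql=agg_sql, col=col)
    return question, sql

print(generate("price"))
# ('what is the maximum price ?', 'SELECT MAX(price) FROM t')
```

Sampling many such derivations over table schemas yields the kind of synthetic pre-training data the abstract describes.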
arXiv Detail & Related papers (2020-09-29T08:17:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.