SemEval-2021 Task 9: Fact Verification and Evidence Finding for Tabular
Data in Scientific Documents (SEM-TAB-FACTS)
- URL: http://arxiv.org/abs/2105.13995v1
- Date: Fri, 28 May 2021 17:21:11 GMT
- Title: SemEval-2021 Task 9: Fact Verification and Evidence Finding for Tabular
Data in Scientific Documents (SEM-TAB-FACTS)
- Authors: Nancy X. R. Wang, Diwakar Mahajan, Marina Danilevsky, Sara Rosenthal
- Abstract summary: SEM-TAB-FACTS featured two sub-tasks.
In sub-task A, the goal was to determine if a statement is supported, refuted or unknown in relation to a table.
In sub-task B, the focus was on identifying the specific cells of a table that provide evidence for the statement.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding tables is an important and relevant task that involves
understanding table structure as well as being able to compare and contrast
information within cells. In this paper, we address this challenge by
presenting a new dataset and tasks that address this goal in the shared task
SemEval-2021 Task 9: Fact Verification and Evidence Finding for Tabular Data in
Scientific Documents (SEM-TAB-FACTS). Our dataset contains 981
manually-generated tables and an auto-generated dataset of 1980 tables,
providing over 180K statement annotations and over 16M evidence annotations. SEM-TAB-FACTS
featured two sub-tasks. In sub-task A, the goal was to determine if a statement
is supported, refuted or unknown in relation to a table. In sub-task B, the
focus was on identifying the specific cells of a table that provide evidence
for the statement. In total, 69 teams signed up to participate in the task,
with 19 successful submissions to sub-task A and 12 to sub-task B.
We present our results and main findings from the competition.
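As a rough illustration of what sub-task A inference looks like in practice, the sketch below classifies a statement against a table with a TAPAS checkpoint fine-tuned on TabFact. Note the assumptions: TabFact is a two-way task (refuted/entailed), so the SEM-TAB-FACTS "unknown" class would require further fine-tuning, and the table and statement here are invented for the example.

```python
import pandas as pd
import torch
from transformers import TapasForSequenceClassification, TapasTokenizer

# pip install transformers torch pandas
# (older transformers releases additionally need torch-scatter for TAPAS)

# Hypothetical table and statement; TAPAS expects string-valued cells.
table = pd.DataFrame({"Model": ["A", "B"], "F1": ["67.3", "72.9"]})
statement = "Model B has a higher F1 score than model A."

name = "google/tapas-base-finetuned-tabfact"  # two-way (refuted/entailed) checkpoint
tokenizer = TapasTokenizer.from_pretrained(name)
model = TapasForSequenceClassification.from_pretrained(name)

inputs = tokenizer(table=table, queries=[statement], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(["refuted", "entailed"][logits.argmax(-1).item()])
```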
Related papers
- TabPedia: Towards Comprehensive Visual Table Understanding with Concept Synergy [81.76462101465354]
We present a novel large vision-language model, TabPedia, equipped with a concept synergy mechanism.
This unified framework allows TabPedia to seamlessly integrate visual table understanding (VTU) tasks, such as table detection, table structure recognition, table querying, and table question answering.
To better evaluate the VTU task in real-world scenarios, we establish a new and comprehensive table VQA benchmark, ComTQA.
arXiv Detail & Related papers (2024-06-03T13:54:05Z)
- TableLLM: Enabling Tabular Data Manipulation by LLMs in Real Office Usage Scenarios [52.73289223176475]
TableLLM is a robust large language model (LLM) with 13 billion parameters.
TableLLM is purpose-built for proficiently handling data manipulation tasks.
We have released the model checkpoint, source code, benchmarks, and a web application for user interaction.
arXiv Detail & Related papers (2024-03-28T11:21:12Z)
- Wiki-TabNER: Advancing Table Interpretation Through Named Entity Recognition [19.423556742293762]
We analyse a widely used benchmark dataset for evaluating table interpretation (TI) tasks.
To overcome its limitations, we construct and annotate a new, more challenging dataset.
We propose a prompting framework for evaluating newly developed large language models (an illustrative prompt sketch follows this entry).
arXiv Detail & Related papers (2024-03-07T15:22:07Z)
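The paper's prompts are not reproduced here; as a purely hypothetical illustration of posing table NER to an LLM, one can serialize the table and ask for entity spans together with their cell coordinates. The label set and output format below are assumptions.

```python
# Hedged illustration of prompting an LLM for table NER; the prompt wording and
# entity labels are hypothetical, not Wiki-TabNER's actual evaluation prompts.
def build_table_ner_prompt(rows: list[list[str]]) -> str:
    body = "\n".join(" | ".join(cells) for cells in rows)
    return (
        "Identify named entities (PER, ORG, LOC) in the table below.\n"
        "Answer with one line per entity: row, column, span, type.\n\n" + body
    )

print(build_table_ner_prompt([
    ["Name", "Affiliation"],
    ["Ada Lovelace", "University of London"],
]))
```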
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks that have little or no annotation overlap.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- QTSumm: Query-Focused Summarization over Tabular Data [58.62152746690958]
People primarily consult tables to conduct data analysis or answer specific questions.
We define a new query-focused table summarization task, where text generation models have to perform human-like reasoning.
We introduce a new benchmark named QTSumm for this task, which contains 7,111 human-annotated query-summary pairs over 2,934 tables.
arXiv Detail & Related papers (2023-05-23T17:43:51Z)
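As a toy illustration of the query-focused setup described in the entry above, the sketch below linearizes a table, prepends the query, and generates with an off-the-shelf T5 checkpoint. The linearization format is an assumption, and an untuned model will not produce QTSumm-quality summaries.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hedged sketch of query-focused table summarization with a generic seq2seq
# model; QTSumm's baselines and table serialization differ from this toy format.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

source = (
    "summarize for query: Which team improved most? "
    "table: Team: A | 2022: 61.0 | 2023: 70.5 ; Team: B | 2022: 66.0 | 2023: 67.1"
)
ids = tokenizer(source, return_tensors="pt").input_ids
summary_ids = model.generate(ids, max_new_tokens=48)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```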
- Relational Multi-Task Learning: Modeling Relations between Data and Tasks [84.41620970886483]
A key assumption in multi-task learning is that at inference time the model only has access to a given data point, not to that data point's labels from other tasks.
Here we introduce a novel relational multi-task learning setting where we leverage data point labels from auxiliary tasks to make more accurate predictions.
We develop MetaLink, where our key innovation is to build a knowledge graph that connects data points and tasks.
arXiv Detail & Related papers (2023-03-14T07:15:41Z)
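A hedged sketch of the data structure behind the entry above: a graph whose nodes are data points and tasks, with observed auxiliary labels stored on the edges. MetaLink learns over such a graph with a graph neural network, which this sketch omits; the node and label names are invented.

```python
import networkx as nx

# Illustrative knowledge graph connecting data points and tasks, in the spirit
# of MetaLink; only the structure is shown, not the learned model.
g = nx.Graph()
g.add_nodes_from(["x1", "x2"], kind="data")
g.add_nodes_from(["task_a", "task_b"], kind="task")
g.add_edge("x1", "task_a", label=1)   # observed auxiliary label
g.add_edge("x2", "task_a", label=0)   # observed auxiliary label
g.add_edge("x1", "task_b")            # label to predict at inference time
print(g.edges(data=True))
```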
- CTE: A Dataset for Contextualized Table Extraction [1.1859913430860336]
The dataset comprises 75k fully annotated pages of scientific papers, including more than 35k tables.
Data are gathered from PubMed Central, merging the information provided by annotations in the PubTables-1M and PubLayNet datasets.
The generated annotations can be used to develop end-to-end pipelines for various tasks, including document layout analysis, table detection, structure recognition, and functional analysis.
arXiv Detail & Related papers (2023-02-02T22:38:23Z)
- Zero-Shot Information Extraction as a Unified Text-to-Triple Translation [56.01830747416606]
We cast a suite of information extraction tasks into a text-to-triple translation framework.
We formalize the task as a translation between task-specific input text and output triples.
We study the zero-shot performance of this framework on open information extraction.
arXiv Detail & Related papers (2021-09-23T06:54:19Z)
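To make the text-to-triple framing from the entry above concrete, the sketch below shows the shared input/output shape for two of the covered tasks. The sentences and triples are invented, and the paper's LM-based generation and ranking are omitted.

```python
# Data-shape illustration of the unified text-to-triple translation framing.
Triple = tuple[str, str, str]  # (subject, relation, object)

examples: dict[str, tuple[str, list[Triple]]] = {
    "open information extraction": (
        "Marie Curie won the Nobel Prize in 1903.",
        [("Marie Curie", "won", "the Nobel Prize in 1903")],
    ),
    "relation classification": (
        "Marie Curie was born in Warsaw.",
        [("Marie Curie", "place of birth", "Warsaw")],
    ),
}
for task, (text, triples) in examples.items():
    print(f"{task}: {text} -> {triples}")
```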
- Volta at SemEval-2021 Task 9: Statement Verification and Evidence Finding with Tables using TAPAS and Transfer Learning [19.286478269708592]
We present our systems to solve Task 9 of SemEval-2021: Statement Verification and Evidence Finding with Tables.
The task consists of two subtasks: (A) Given a table and a statement, predicting whether the table supports the statement and (B) Predicting which cells in the table provide evidence for/against the statement.
Our systems achieve an F1 score of 67.34 for three-way classification in subtask A, 72.89 for two-way classification in subtask A, and 62.95 in subtask B (a hedged cell-selection sketch follows this entry).
arXiv Detail & Related papers (2021-06-01T06:06:29Z)
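For the flavor of sub-task B style cell selection, the hedged sketch below uses a TAPAS question-answering checkpoint fine-tuned on WikiTableQuestions, whose predicted cell coordinates play the role of evidence cells. Volta's actual system was trained on the SEM-TAB-FACTS data itself, and the table here is invented.

```python
import pandas as pd
import torch
from transformers import TapasForQuestionAnswering, TapasTokenizer

# Hedged sketch: a WTQ-finetuned TAPAS model selects table cells, which is
# analogous to (but not the same as) sub-task B evidence finding.
name = "google/tapas-base-finetuned-wtq"
tokenizer = TapasTokenizer.from_pretrained(name)
model = TapasForQuestionAnswering.from_pretrained(name)

table = pd.DataFrame({"Model": ["A", "B"], "F1": ["67.3", "72.9"]})
inputs = tokenizer(
    table=table, queries=["Which model has the higher F1?"],
    padding="max_length", return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)
coords, _ = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
print(coords[0])  # list of (row, column) cells the model points to
```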
- Sattiy at SemEval-2021 Task 9: An Ensemble Solution for Statement Verification and Evidence Finding with Tables [4.691435917434472]
This paper describes the sattiy team's system for SemEval-2021 Task 9: Statement Verification and Evidence Finding with Tables (SEM-TAB-FACT).
This competition aims to verify statements and to find supporting evidence in tables from scientific articles.
In this paper, we exploited ensembles of pre-trained language models over tables, TaPas and TaBERT, for Task A, and adjusted the results for Task B based on a set of extracted rules (a minimal averaging sketch follows this entry).
arXiv Detail & Related papers (2021-04-21T06:11:49Z)
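The sketch below shows probability averaging, the simplest form of the ensembling the entry above describes; the class probabilities, label order, and equal weights are assumptions for illustration, not the sattiy team's actual configuration.

```python
import torch

# Hedged sketch of a probability-averaging ensemble over two table models.
def ensemble(prob_tapas: torch.Tensor, prob_tabert: torch.Tensor) -> torch.Tensor:
    """Average per-class probabilities from two models' predictions."""
    return (prob_tapas + prob_tabert) / 2

labels = ["refuted", "entailed", "unknown"]  # assumed three-way label order
p = ensemble(torch.tensor([0.2, 0.7, 0.1]), torch.tensor([0.1, 0.6, 0.3]))
print(labels[p.argmax().item()])
```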
- BreakingBERT@IITK at SemEval-2021 Task 9: Statement Verification and Evidence Finding with Tables [1.78256232654567]
We tackle the problem of fact verification and evidence finding over tabular data.
We compare baseline and state-of-the-art approaches on the given SemTabFact dataset.
We also propose a novel approach, CellBERT, which solves evidence finding as a form of Natural Language Inference (a generic NLI sketch follows this entry).
arXiv Detail & Related papers (2021-04-07T11:41:07Z)
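To make the NLI framing concrete, the hedged sketch below scores each serialized table row as a premise against the statement as a hypothesis, using an off-the-shelf MNLI model. CellBERT itself operates at the cell level with task-specific training; the row serialization and threshold here are invented.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hedged sketch: evidence finding posed as NLI at row granularity with a
# generic MNLI model, not the paper's trained CellBERT.
name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

statement = "Model B has the highest F1 score."
rows = ["Model: A, F1: 67.3", "Model: B, F1: 72.9"]  # serialized table rows

for row in rows:
    inputs = tokenizer(row, statement, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(-1)[0]
    # roberta-large-mnli label order: contradiction, neutral, entailment
    if probs[2] > 0.5:
        print("evidence:", row)
```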
This list is automatically generated from the titles and abstracts of the papers on this site.