Abstract, Rationale, Stance: A Joint Model for Scientific Claim
Verification
- URL: http://arxiv.org/abs/2110.15116v1
- Date: Mon, 13 Sep 2021 10:07:26 GMT
- Title: Abstract, Rationale, Stance: A Joint Model for Scientific Claim
Verification
- Authors: Zhiwei Zhang, Jiyi Li, Fumiyo Fukumoto, Yanming Ye
- Abstract summary: We propose an approach, named ARSJoint, that jointly learns the modules for the three tasks within a machine reading comprehension framework.
Experimental results on the SciFact benchmark dataset show that our approach outperforms existing works.
- Score: 18.330265729989843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scientific claim verification helps researchers find, for a given
claim, the target scientific papers with sentence-level evidence from a large
corpus. Some existing works propose pipeline models for the three tasks of
abstract retrieval, rationale selection, and stance prediction. Such pipelines
suffer from error propagation between modules and fail to share valuable
information among them. We therefore propose an approach, named ARSJoint, that
jointly learns the modules for the three tasks within a machine reading
comprehension framework that incorporates claim information. In addition, we
enhance information exchange and constraints among the tasks by proposing a
regularization term between the sentence attention scores of abstract retrieval
and the estimated outputs of rationale selection. Experimental results on the
SciFact benchmark dataset show that our approach outperforms existing works.
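The joint objective described in the abstract — three task losses plus a regularizer tying the retrieval module's sentence attention to the rationale selection outputs — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the choice of KL divergence for the regularizer, and the weight `reg_weight` are all assumptions for the sake of the example.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def bce(logit, label):
    # Binary cross-entropy on a single logit.
    p = sigmoid(logit)
    return -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))

def joint_loss(retrieval_logit, retrieval_label,
               rationale_logits, rationale_labels,
               stance_logits, stance_label,
               sentence_attention, reg_weight=0.1):
    """Sum of the three task losses plus a regularization term that pulls
    the retrieval module's sentence attention distribution toward the
    (normalized) rationale selection probabilities.  Illustrative only."""
    # Abstract retrieval: is this abstract relevant to the claim?
    loss_ar = bce(retrieval_logit, retrieval_label)
    # Rationale selection: per-sentence binary evidence labels.
    loss_rs = sum(bce(l, y) for l, y in zip(rationale_logits, rationale_labels))
    loss_rs /= len(rationale_logits)
    # Stance prediction: e.g. SUPPORT / REFUTE / NOINFO.
    loss_sp = -math.log(softmax(stance_logits)[stance_label])
    # Regularization: KL(attention || normalized rationale probabilities),
    # so sentences the retriever attends to should also look like rationales.
    attn = softmax(sentence_attention)
    rat_probs = [sigmoid(l) for l in rationale_logits]
    rat_dist = [p / sum(rat_probs) for p in rat_probs]
    reg = sum(a * math.log(a / r) for a, r in zip(attn, rat_dist))
    return loss_ar + loss_rs + loss_sp + reg_weight * reg

loss = joint_loss(0.4, 1.0,
                  [1.2, -0.8, 0.3], [1.0, 0.0, 0.0],
                  [0.2, 1.5, -0.4], 1,
                  [2.0, 0.1, 0.5])
```

Because all modules contribute to one loss, gradients from stance prediction and rationale selection flow back into the shared encoder, which is what lets joint training avoid the error propagation that a fixed pipeline suffers from.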
Related papers
- GEGA: Graph Convolutional Networks and Evidence Retrieval Guided Attention for Enhanced Document-level Relation Extraction [15.246183329778656]
Document-level relation extraction (DocRE) aims to extract relations between entities from unstructured document text.
To overcome these challenges, we propose GEGA, a novel model for DocRE.
We evaluate the GEGA model on three widely used benchmark datasets: DocRED, Re-DocRED, and Revisit-DocRED.
arXiv Detail & Related papers (2024-07-31T07:15:33Z)
- Synthesizing Scientific Summaries: An Extractive and Abstractive Approach [0.5904095466127044]
We propose a hybrid methodology for research paper summarisation.
We use two models based on unsupervised learning for the extraction stage and two transformer language models.
We find that with certain combinations of hyperparameters, it is possible for automated summarisation systems to exceed the abstractiveness of summaries written by humans.
arXiv Detail & Related papers (2024-07-29T08:21:42Z)
- Plot Retrieval as an Assessment of Abstract Semantic Association [131.58819293115124]
Text pairs in Plot Retrieval have less word overlap and more abstract semantic association.
Plot Retrieval can serve as a benchmark for further research on the semantic association modeling ability of IR models.
arXiv Detail & Related papers (2023-11-03T02:02:43Z)
- Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z)
- ReSel: N-ary Relation Extraction from Scientific Text and Tables by Learning to Retrieve and Select [53.071352033539526]
We study the problem of extracting N-ary relations from scientific articles.
Our proposed method ReSel decomposes this task into a two-stage procedure.
Our experiments on three scientific information extraction datasets show that ReSel significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2022-10-26T02:28:02Z)
- Improving Multi-Document Summarization through Referenced Flexible Extraction with Credit-Awareness [21.037841262371355]
A notable challenge in Multi-Document Summarization (MDS) is the extremely long length of the input.
We present an extract-then-abstract Transformer framework to overcome the problem.
We propose a loss weighting mechanism that makes the model aware of the unequal importance of the sentences not in the pseudo extraction oracle.
arXiv Detail & Related papers (2022-05-04T04:40:39Z)
- Distant finetuning with discourse relations for stance classification [55.131676584455306]
We propose a new method to extract data with silver labels from raw text to finetune a model for stance classification.
We also propose a 3-stage training framework where the noise level in the finetuning data decreases over the stages.
Our approach ranks 1st among 26 competing teams in the stance classification track of the NLPCC 2021 shared task Argumentative Text Understanding for AI Debater.
arXiv Detail & Related papers (2022-04-27T04:24:35Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- A Review on Fact Extraction and Verification [19.373340472113703]
We study the fact checking problem, which aims to identify the veracity of a given claim.
We focus on the task of Fact Extraction and VERification (FEVER) and its accompanying dataset.
This task is essential and can be the building block of applications such as fake news detection and medical claim verification.
arXiv Detail & Related papers (2020-10-06T20:05:43Z)
- Multi-Fact Correction in Abstractive Text Summarization [98.27031108197944]
Span-Fact is a suite of two factual correction models that leverages knowledge learned from question answering models to make corrections in system-generated summaries via span selection.
Our models employ single or multi-masking strategies to either iteratively or auto-regressively replace entities in order to ensure semantic consistency w.r.t. the source text.
Experiments show that our models significantly boost the factual consistency of system-generated summaries without sacrificing summary quality in terms of both automatic metrics and human evaluation.
arXiv Detail & Related papers (2020-10-06T02:51:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.