Few Shot Learning for Information Verification
- URL: http://arxiv.org/abs/2102.10956v1
- Date: Mon, 22 Feb 2021 12:56:12 GMT
- Title: Few Shot Learning for Information Verification
- Authors: Usama Khalid, Mirza Omer Beg
- Abstract summary: We aim to verify facts based on evidence selected from a list of articles taken from Wikipedia.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Information verification is a challenging task because verifying a
claim can require picking pieces of information from multiple pieces of
evidence that are linked by a hierarchy of complex semantic relations.
Previous research has largely focused on simply concatenating multiple
evidence sentences to accept or reject claims. These approaches are limited
because evidence can contain hierarchical information and dependencies. In
this research, we aim to verify facts based on evidence selected from a list
of Wikipedia articles. Pretrained language models such as XLNet are used to
generate meaningful representations, and graph-based attention and
convolutions are applied in such a way that the system requires little
additional training to learn to verify facts.
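A minimal sketch of the pipeline the abstract describes, assuming PyTorch and the Hugging Face transformers library; the checkpoint name, mean pooling, and the single claim-to-evidence attention layer are illustrative choices, not the authors' exact architecture. One reading of "little additional training" is that only the small graph head is trained while the encoder stays frozen, which the torch.no_grad() block mimics here.

```python
import torch
import torch.nn as nn
from transformers import XLNetTokenizer, XLNetModel

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
encoder = XLNetModel.from_pretrained("xlnet-base-cased")

def embed(texts):
    """Encode sentences into fixed-size XLNet representations (mean-pooled)."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)
    return hidden.mean(dim=1)                        # (batch, 768)

class ClaimEvidenceAttention(nn.Module):
    """One attention layer from the claim over evidence nodes (sizes assumed)."""
    def __init__(self, dim=768):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, claim, evidence):
        pairs = torch.cat([claim.expand_as(evidence), evidence], dim=-1)
        weights = torch.softmax(self.score(pairs).squeeze(-1), dim=0)
        return (weights.unsqueeze(-1) * evidence).sum(dim=0)  # fused evidence vector

claim_vec = embed(["Paris is the capital of France."])
evidence_vecs = embed(["Paris is the capital and largest city of France.",
                       "France is a country in Western Europe."])
fused = ClaimEvidenceAttention()(claim_vec, evidence_vecs)  # input to a verdict head
```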
Related papers
- Contrastive Learning to Improve Retrieval for Real-world Fact Checking [84.57583869042791]
We present Contrastive Fact-Checking Reranker (CFR), an improved retriever for fact-checking complex claims.
We leverage the AVeriTeC dataset, which annotates subquestions for claims with human written answers from evidence documents.
We find a 6% improvement in veracity classification accuracy on the dataset.
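A hedged sketch of the contrastive training signal such a reranker could use (an InfoNCE-style loss of our choosing, not CFR's released code): the claim embedding is pulled toward annotated gold evidence and pushed away from distractor passages.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(claim, positive, negatives, temperature=0.05):
    """claim: (d,), positive: (d,), negatives: (k, d) embedding tensors."""
    candidates = torch.cat([positive.unsqueeze(0), negatives], dim=0)  # (k+1, d)
    sims = F.cosine_similarity(claim.unsqueeze(0), candidates) / temperature
    # Gold evidence sits at index 0, so train with cross-entropy against label 0.
    return F.cross_entropy(sims.unsqueeze(0), torch.zeros(1, dtype=torch.long))

# Example with random stand-in embeddings:
loss = contrastive_loss(torch.randn(768), torch.randn(768), torch.randn(4, 768))
```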
arXiv Detail & Related papers (2024-10-07T00:09:50Z)
- Fact or Fiction? Improving Fact Verification with Knowledge Graphs through Simplified Subgraph Retrievals [0.0]
We present efficient methods for verifying claims on a dataset where the evidence is in the form of structured knowledge graphs.
By simplifying the evidence retrieval process, we are able to construct models that both require less computational resources and achieve better test-set accuracy.
arXiv Detail & Related papers (2024-08-14T10:46:15Z)
- Heterogeneous Graph Reasoning for Fact Checking over Texts and Tables [22.18384189336634]
HeterFC is a word-level heterogeneous-graph-based model for fact checking over unstructured and structured information.
Information propagation is performed via a relational graph neural network, enabling interactions between claims and evidence.
We introduce a multitask loss function to account for potential inaccuracies in evidence retrieval.
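An illustrative form such a multitask objective could take (our assumption, not the HeterFC code): a claim-verdict cross-entropy term plus a per-evidence relevance term that lets the model down-weight noisily retrieved passages.

```python
import torch
import torch.nn.functional as F

def multitask_loss(verdict_logits, verdict_label,
                   evidence_logits, evidence_labels, alpha=0.5):
    """verdict_logits: (num_classes,); evidence_logits/labels: (num_evidence,)."""
    claim_loss = F.cross_entropy(verdict_logits.unsqueeze(0), verdict_label.view(1))
    # Auxiliary task: predict, per evidence piece, whether it is truly relevant.
    relevance_loss = F.binary_cross_entropy_with_logits(
        evidence_logits, evidence_labels.float())
    return claim_loss + alpha * relevance_loss  # alpha balances the two tasks
```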
arXiv Detail & Related papers (2024-02-20T14:10:40Z)
- EX-FEVER: A Dataset for Multi-hop Explainable Fact Verification [22.785622371421876]
We present a pioneering dataset for multi-hop explainable fact verification.
It contains over 60,000 claims involving 2-hop and 3-hop reasoning, each created by summarizing and modifying information from hyperlinked Wikipedia documents.
We demonstrate a novel baseline system on our EX-FEVER dataset, showcasing document retrieval, explanation generation, and claim verification.
arXiv Detail & Related papers (2023-10-15T06:46:15Z)
- Give Me More Details: Improving Fact-Checking with Latent Retrieval [58.706972228039604]
Evidence plays a crucial role in automated fact-checking.
Existing fact-checking systems either assume the evidence sentences are given or use the search snippets returned by the search engine.
We propose to incorporate full text from source documents as evidence and introduce two enriched datasets.
arXiv Detail & Related papers (2023-05-25T15:01:19Z)
- The KITMUS Test: Evaluating Knowledge Integration from Multiple Sources in Natural Language Understanding Systems [87.3207729953778]
We evaluate state-of-the-art coreference resolution models on our dataset.
Several models struggle to reason on the fly over knowledge observed both at pretraining time and at inference time.
Still, even the best performing models seem to have difficulties with reliably integrating knowledge presented only at inference time.
arXiv Detail & Related papers (2022-12-15T23:26:54Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers.
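A toy illustration of the idea (the decomposition and the aggregation rule here are hypothetical, not ClaimDecomp's): answer each yes-no subquestion against evidence, then derive the claim's veracity from the answers.

```python
claim = "The senator voted for the bill and later opposed it."
subquestion_answers = {                  # hypothetical decomposition and answers
    "Did the senator vote for the bill?": True,
    "Did the senator later oppose the bill?": False,
}

def verdict(answers):
    yes = sum(answers.values())
    if yes == len(answers):
        return "SUPPORTED"               # every subquestion holds
    if yes == 0:
        return "REFUTED"                 # every subquestion fails
    return "PARTIALLY SUPPORTED"         # mixed answers leave the claim contested

print(verdict(subquestion_answers))      # -> PARTIALLY SUPPORTED
```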
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
- Graph-based Retrieval for Claim Verification over Cross-Document Evidence [0.6853165736531939]
We conjecture that a graph-based approach can be beneficial to identify fragmented evidence.
We tested this hypothesis by building, over the whole corpus, a large graph that interconnects text portions by means of mentioned entities.
Our experiments show that leveraging a graph structure is beneficial in identifying a reasonably small portion of passages related to a claim.
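A minimal sketch of the entity-link idea (our simplification, using networkx, with entity extraction stubbed by a capitalized-word heuristic rather than a real entity linker): passages that mention the same entity are connected, and the neighborhood of a seed passage forms the candidate evidence set.

```python
import itertools
import networkx as nx

passages = {
    "p1": "Marie Curie won the Nobel Prize in Physics.",
    "p2": "The Nobel Prize in Physics is awarded in Stockholm.",
    "p3": "Stockholm is the capital of Sweden.",
}

def entities(text):
    # Stand-in for a real entity linker: treat capitalized words as entities.
    return {w.strip(".") for w in text.split() if w[0].isupper()}

graph = nx.Graph()
for (a, ta), (b, tb) in itertools.combinations(passages.items(), 2):
    shared = entities(ta) & entities(tb)
    if shared:
        graph.add_edge(a, b, entities=shared)  # passages linked by shared mentions

# Passages within one hop of a seed passage form a small candidate evidence set.
print(sorted(nx.single_source_shortest_path_length(graph, "p1", cutoff=1)))
```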
arXiv Detail & Related papers (2021-09-13T14:54:26Z)
- Fact-driven Logical Reasoning for Machine Reading Comprehension [82.58857437343974]
We are motivated to cover both commonsense and temporary knowledge clues hierarchically.
Specifically, we propose a general formalism of knowledge units by extracting backbone constituents of the sentence.
We then construct a supergraph on top of the fact units, allowing for the benefit of sentence-level (relations among fact groups) and entity-level interactions.
arXiv Detail & Related papers (2021-05-21T13:11:13Z)
- A Knowledge Enhanced Learning and Semantic Composition Model for Multi-Claim Fact Checking [18.395092826197267]
We propose an end-to-end knowledge enhanced learning and verification method for multi-claim fact checking.
Our method consists of two modules, KG-based learning enhancement and multi-claim semantic composition.
arXiv Detail & Related papers (2021-04-27T08:43:14Z)
- HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification [74.66819506353086]
HoVer is a dataset for many-hop evidence extraction and fact verification.
It challenges models to extract facts from several Wikipedia articles that are relevant to a claim.
Most of the 3- and 4-hop claims are written in multiple sentences, which adds to the complexity of understanding long-range dependency relations.
arXiv Detail & Related papers (2020-11-05T20:33:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.