Textual Entailment for Effective Triple Validation in Object Prediction
- URL: http://arxiv.org/abs/2401.16293v1
- Date: Mon, 29 Jan 2024 16:50:56 GMT
- Title: Textual Entailment for Effective Triple Validation in Object Prediction
- Authors: Andrés García-Silva, Cristian Berrío, José Manuel Gómez-Pérez
- Abstract summary: We propose to use textual entailment to validate facts extracted from language models through cloze statements.
Our results show that triple validation based on textual entailment improves language model predictions in different training regimes.
- Score: 4.94309218465563
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge base population seeks to expand knowledge graphs with facts that
are typically extracted from a text corpus. Recently, language models
pretrained on large corpora have been shown to contain factual knowledge that
can be retrieved using cloze-style strategies. Such an approach enables zero-shot
recall of facts, showing competitive results in object prediction compared to
supervised baselines. However, prompt-based fact retrieval can be brittle and
heavily depend on the prompts and context used, which may produce results that
are unintended or hallucinatory. We propose to use textual entailment to
validate facts extracted from language models through cloze statements. Our
results show that triple validation based on textual entailment improves
language model predictions in different training regimes. Furthermore, we show
that entailment-based triple validation is also effective in validating candidate
facts extracted from other sources including existing knowledge graphs and text
passages where named entities are recognized.
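To make the approach concrete, here is a minimal sketch of the two-step idea in the abstract: query a masked language model with a cloze statement to obtain candidate objects, then keep only candidates whose verbalized triple is entailed by an evidence passage. The model names, cloze template, and evidence passage are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: cloze-based object prediction followed by entailment-based
# triple validation. Models and templates are illustrative assumptions.
from transformers import pipeline

# Step 1: object prediction via a cloze statement over a masked language model.
cloze = pipeline("fill-mask", model="bert-base-cased")
candidates = cloze("Paris is the capital of [MASK].", top_k=5)

# Step 2: triple validation via textual entailment. The premise is an evidence
# passage; the hypothesis is the candidate triple verbalized as a sentence.
nli = pipeline("text-classification", model="roberta-large-mnli")
evidence = "Paris is the capital and most populous city of France."

validated = []
for cand in candidates:
    hypothesis = f"Paris is the capital of {cand['token_str'].strip()}."
    verdict = nli({"text": evidence, "text_pair": hypothesis})
    if isinstance(verdict, list):  # some transformers versions wrap the result
        verdict = verdict[0]
    if verdict["label"] == "ENTAILMENT":  # keep only entailed triples
        validated.append((hypothesis, verdict["score"]))

print(validated)
```

The off-the-shelf MNLI model above corresponds to a zero-shot validation setup; the abstract notes that entailment-based validation helps across different training regimes.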
Related papers
- ZeFaV: Boosting Large Language Models for Zero-shot Fact Verification [2.6874004806796523]
ZeFaV is a zero-shot fact verification framework designed to enhance the fact verification performance of large language models.
We conducted empirical experiments to evaluate our approach on two multi-hop fact-checking datasets including HoVer and FEVEROUS.
arXiv Detail & Related papers (2024-11-18T02:35:15Z)
- Blending Reward Functions via Few Expert Demonstrations for Faithful and Accurate Knowledge-Grounded Dialogue Generation [22.38338205905379]
We leverage reinforcement learning algorithms to overcome the above challenges by introducing a novel reward function.
Our reward function combines an accuracy metric and a faithfulness metric to provide a balanced quality judgment of generated responses.
arXiv Detail & Related papers (2023-11-02T02:42:41Z)
- FactLLaMA: Optimizing Instruction-Following Language Models with External Knowledge for Automated Fact-Checking [10.046323978189847]
We propose combining the power of instruction-following language models with external evidence retrieval to enhance fact-checking performance.
Our approach involves leveraging search engines to retrieve relevant evidence for a given input claim.
Then, we instruct-tune an open-source language model, LLaMA, using this evidence, enabling it to predict the veracity of the input claim more accurately.
arXiv Detail & Related papers (2023-09-01T04:14:39Z)
- The Short Text Matching Model Enhanced with Knowledge via Contrastive Learning [8.350445155753167]
This paper proposes a short text matching model that combines contrastive learning and external knowledge.
To avoid noise, we use keywords as the main semantics of the original sentence to retrieve corresponding knowledge words in the knowledge base.
Our designed model achieves state-of-the-art performance on two publicly available Chinese Text Matching datasets.
arXiv Detail & Related papers (2023-04-08T03:24:05Z)
- Context-faithful Prompting for Large Language Models [51.194410884263135]
Large language models (LLMs) encode parametric knowledge about world facts.
Their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks.
We assess and enhance LLMs' contextual faithfulness in two aspects: knowledge conflict and prediction with abstention.
arXiv Detail & Related papers (2023-03-20T17:54:58Z)
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction [57.854498238624366]
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weak-supervised data as a prompt for each sample.
arXiv Detail & Related papers (2022-10-19T16:40:28Z)
- CoLAKE: Contextualized Language and Knowledge Embedding [81.90416952762803]
We propose Contextualized Language and Knowledge Embedding (CoLAKE).
CoLAKE jointly learns contextualized representation for both language and knowledge with the extended objective.
We conduct experiments on knowledge-driven tasks, knowledge probing tasks, and language understanding tasks.
arXiv Detail & Related papers (2020-10-01T11:39:32Z)
- How Context Affects Language Models' Factual Predictions [134.29166998377187]
We integrate information from a retrieval system with a pre-trained language model in a purely unsupervised way.
We report that augmenting pre-trained language models in this way dramatically improves performance and that the resulting system, despite being unsupervised, is competitive with a supervised machine reading baseline (a generic sketch of this retrieve-then-predict pattern appears after this list).
arXiv Detail & Related papers (2020-05-10T09:28:12Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
- Leveraging Declarative Knowledge in Text and First-Order Logic for Fine-Grained Propaganda Detection [139.3415751957195]
We study the detection of propagandistic text fragments in news articles.
We introduce an approach to inject declarative knowledge of fine-grained propaganda techniques.
arXiv Detail & Related papers (2020-04-29T13:46:15Z)
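Several entries above, such as FactLLaMA and "How Context Affects Language Models' Factual Predictions", pair a retrieval step with a language model. Below is a generic sketch of that retrieve-then-predict pattern; the retrieval function is stubbed out and the model choice is an assumption, so this does not reproduce any listed paper's pipeline.

```python
# Generic retrieve-then-predict sketch: prepend retrieved context to a cloze
# statement before querying a masked language model. The retrieval function is
# a stub and the model choice is an assumption.
from transformers import pipeline

def retrieve_context(query: str) -> str:
    # Stand-in for a real retrieval system (e.g. BM25 or a web search engine).
    return "Dante Alighieri was born in Florence, in present-day Italy."

cloze = pipeline("fill-mask", model="bert-base-cased")
query = "Dante was born in [MASK]."

# Parametric knowledge only: the model answers from what it memorized.
without_context = cloze(query, top_k=3)

# Retrieval-augmented: the retrieved passage is prepended to the cloze query.
augmented = cloze(f"{retrieve_context(query)} {query}", top_k=3)

for pred in augmented:
    print(pred["token_str"], round(pred["score"], 4))
```

Comparing `without_context` and `augmented` predictions illustrates the effect the "How Context Affects Language Models' Factual Predictions" paper reports: added context can shift probability mass toward the answer supported by the passage.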
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.