Mapping and Cleaning Open Commonsense Knowledge Bases with Generative
Translation
- URL: http://arxiv.org/abs/2306.12766v1
- Date: Thu, 22 Jun 2023 09:42:54 GMT
- Title: Mapping and Cleaning Open Commonsense Knowledge Bases with Generative
Translation
- Authors: Julien Romero, Simon Razniewski
- Abstract summary: In particular, open information extraction (OpenIE) is often used to induce structure from text.
OpenIE tuples contain an open-ended, non-canonicalized set of relations, making the extracted knowledge's downstream exploitation harder.
We propose approaching the problem by generative translation, i.e., by training a language model to generate fixed-schema assertions from open ones.
- Score: 14.678465723838599
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Structured knowledge bases (KBs) are the backbone of many
knowledge-intensive applications, and their automated construction has
received considerable attention. In particular, open information extraction
(OpenIE) is often used to induce structure from text. However, although it
allows high recall, the extracted knowledge tends to inherit noise from the
sources and the OpenIE algorithm. Besides, OpenIE tuples contain an open-ended,
non-canonicalized set of relations, making the extracted knowledge's downstream
exploitation harder. In this paper, we study the problem of mapping an open KB
into the fixed schema of an existing KB, specifically for the case of
commonsense knowledge. We propose approaching the problem by generative
translation, i.e., by training a language model to generate fixed-schema
assertions from open ones. Experiments show that this approach occupies a sweet
spot between traditional manual, rule-based, or classification-based
canonicalization and purely generative KB construction like COMET. Moreover, it
produces higher mapping accuracy than the former while avoiding the
association-based noise of the latter.
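The generative-translation idea can be sketched as a data-preparation step for fine-tuning a sequence-to-sequence model (e.g. BART or T5) on pairs of open triples and fixed-schema assertions. The serialization formats and the ConceptNet-style `IsA` relation below are illustrative assumptions, not the paper's actual encoding:

```python
# Minimal sketch (assumed formats): serialize OpenIE triples as source
# sequences and fixed-schema assertions as target sequences, so a seq2seq
# model can learn to "translate" the former into the latter.

def serialize_open(subj, rel, obj):
    """Serialize an open (non-canonicalized) triple into a source string."""
    return f"{subj} ; {rel} ; {obj}"

def serialize_fixed(subj, relation, obj):
    """Serialize a fixed-schema assertion (ConceptNet-style relation name)."""
    return f"{subj} <{relation}> {obj}"

def make_training_pair(open_triple, fixed_assertion):
    """Pair one open triple with its fixed-schema counterpart for training."""
    return (serialize_open(*open_triple), serialize_fixed(*fixed_assertion))

pair = make_training_pair(
    ("elephant", "is big and grey", "animal"),
    ("elephant", "IsA", "animal"),
)
# pair[0] is the model input, pair[1] the target the model learns to generate
```

At inference time, the fine-tuned model would take any serialized open triple and emit a fixed-schema assertion, which is what lets the approach sit between rule-based canonicalization and fully generative construction like COMET.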
Related papers
- STACKFEED: Structured Textual Actor-Critic Knowledge Base Editing with FeedBack [9.82445545347097]
We introduce STACKFEED, a novel Structured Textual Actor-Critic Knowledge base editing with FeedBack approach.
STACKFEED iteratively refines the KB based on expert feedback using a multi-actor, centralized-critic reinforcement learning framework.
Experimental results show that STACKFEED significantly improves KB quality and RAG system performance, enhancing accuracy by up to 8% over baselines.
arXiv Detail & Related papers (2024-10-14T14:56:01Z)
- Knowledge Graph-Enhanced Large Language Models via Path Selection [58.228392005755026]
Large Language Models (LLMs) have shown unprecedented performance in various real-world applications.
LLMs are known to generate factually inaccurate outputs, a.k.a. the hallucination problem.
We propose a principled framework KELP with three stages to handle the above problems.
arXiv Detail & Related papers (2024-06-19T21:45:20Z)
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- Open Knowledge Base Canonicalization with Multi-task Unlearning [19.130159457887]
MulCanon is a multi-task unlearning framework that tackles the machine unlearning problem in OKB canonicalization.
A thorough experimental study on popular OKB canonicalization datasets validates that MulCanon achieves advanced machine unlearning effects.
arXiv Detail & Related papers (2023-10-25T07:13:06Z)
- KnowledGPT: Enhancing Large Language Models with Retrieval and Storage Access on Knowledge Bases [55.942342665806656]
KnowledGPT is a comprehensive framework to bridge large language models with various knowledge bases.
The retrieval process employs program-of-thought prompting, which generates search language for KBs in code format.
KnowledGPT offers the capability to store knowledge in a personalized KB, catering to individual user demands.
arXiv Detail & Related papers (2023-08-17T13:07:00Z)
- UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models [100.4659557650775]
We propose a UNified knowledge inTERface, UNTER, to provide a unified perspective to exploit both structured knowledge and unstructured knowledge.
With both forms of knowledge injected, UNTER gains continuous improvements on a series of knowledge-driven NLP tasks.
arXiv Detail & Related papers (2023-05-02T17:33:28Z)
- Enriching Relation Extraction with OpenIE [70.52564277675056]
Relation extraction (RE) is a sub-discipline of information extraction (IE).
In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE.
Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models.
arXiv Detail & Related papers (2022-12-19T11:26:23Z)
- Joint Reasoning on Hybrid-knowledge sources for Task-Oriented Dialog [12.081212540168055]
We present a modified version of the MultiWOZ-based dataset prepared by SeKnow to demonstrate that current methods suffer significant performance degradation.
In line with recent work exploiting pre-trained language models, we fine-tune a BART based model using prompts for the tasks of querying knowledge sources.
We demonstrate that our model is robust to perturbations to knowledge modality (source of information) and that it can fuse information from structured as well as unstructured knowledge to generate responses.
arXiv Detail & Related papers (2022-10-13T18:49:59Z)
- BERT-based knowledge extraction method of unstructured domain text [0.6445605125467573]
This paper proposes a knowledge extraction method based on BERT.
It converts the domain knowledge points into question and answer pairs and uses the text around the answer in documents as the context.
The method is used to directly extract knowledge points from insurance clauses.
arXiv Detail & Related papers (2021-03-01T03:24:35Z)
- Reasoning Over Virtual Knowledge Bases With Open Predicate Relations [85.19305347984515]
We present the Open Predicate Query Language (OPQL).
OPQL is a method for constructing a virtual Knowledge Base (VKB) trained entirely from text.
We demonstrate that OPQL outperforms prior VKB methods on two different KB reasoning tasks.
arXiv Detail & Related papers (2021-02-14T01:29:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.