CO-NNECT: A Framework for Revealing Commonsense Knowledge Paths as
Explicitations of Implicit Knowledge in Texts
- URL: http://arxiv.org/abs/2105.03157v1
- Date: Fri, 7 May 2021 10:43:43 GMT
- Title: CO-NNECT: A Framework for Revealing Commonsense Knowledge Paths as
Explicitations of Implicit Knowledge in Texts
- Authors: Maria Becker, Katharina Korfhage, Debjit Paul, Anette Frank
- Abstract summary: We leverage commonsense knowledge in the form of knowledge paths to establish connections between sentences.
These connections can be direct (singlehop paths) or require intermediate concepts (multihop paths).
- Score: 12.94206336329289
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we leverage commonsense knowledge in the form of knowledge
paths to establish connections between sentences, as a form of explicitation of
implicit knowledge. Such connections can be direct (singlehop paths) or require
intermediate concepts (multihop paths). To construct such paths we combine two
model types in a joint framework we call CO-NNECT: a relation classifier that
predicts direct connections between concepts, and a target prediction model
that generates target or intermediate concepts given a source concept and a
relation, which we use to construct multihop paths. Unlike prior work that
relies exclusively on static knowledge sources, we leverage language models
fine-tuned on knowledge stored in ConceptNet to dynamically generate knowledge
paths as explanations of the implicit knowledge that connects sentences in texts.
As a central contribution we design manual and automatic evaluation settings
for assessing the quality of the generated paths. We conduct evaluations on two
argumentative datasets and show that a combination of the two model types
generates meaningful, high-quality knowledge paths between sentences that
reveal implicit knowledge conveyed in text.
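The abstract's two model types can be pictured as a simple search procedure: first try a direct (singlehop) relation between two concepts with the relation classifier; if none is confident, generate intermediate concepts with the target prediction model and recurse to form a multihop path. The sketch below illustrates this control flow only; `classify_relation` and `generate_targets` are hypothetical toy stand-ins, not the paper's trained models.

```python
# Hedged sketch of combining a relation classifier with a target prediction
# model to build knowledge paths, in the spirit of the CO-NNECT framework.
# Both model functions below are toy placeholders with hard-coded outputs.

def classify_relation(source, target):
    """Placeholder relation classifier: returns (relation, confidence)."""
    # A real classifier would score ConceptNet relations (e.g. Causes,
    # UsedFor) between the two concepts.
    known = {("rain", "wet"): ("Causes", 0.9)}
    return known.get((source, target), (None, 0.0))

def generate_targets(source, relation):
    """Placeholder target prediction model: concepts for (source, relation)."""
    # A real model (e.g. a language model fine-tuned on ConceptNet) would
    # generate plausible target or intermediate concepts.
    known = {("storm", "Causes"): ["rain"]}
    return known.get((source, relation), [])

def connect(source, target, relations, threshold=0.5, max_hops=2):
    """Return a list of (head, relation, tail) triples linking source to
    target, or None if no path is found within max_hops."""
    # Singlehop: try a confident direct relation first.
    rel, score = classify_relation(source, target)
    if rel is not None and score >= threshold:
        return [(source, rel, target)]
    if max_hops <= 1:
        return None
    # Multihop: generate intermediate concepts and recurse.
    for rel in relations:
        for mid in generate_targets(source, rel):
            rest = connect(mid, target, relations, threshold, max_hops - 1)
            if rest is not None:
                return [(source, rel, mid)] + rest
    return None

path = connect("storm", "wet", relations=["Causes", "UsedFor"])
# path: storm -Causes-> rain -Causes-> wet (a two-hop knowledge path)
```

In this toy run, no direct relation links "storm" to "wet", so the generator proposes "rain" as an intermediate concept and the classifier closes the second hop.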
Related papers
- DiffuCOMET: Contextual Commonsense Knowledge Diffusion [29.23102821128395]
In this work, we develop a series of knowledge models, DiffuCOMET, that leverage diffusion to learn to reconstruct the implicit semantic connections between narrative contexts and relevant commonsense knowledge.
To evaluate DiffuCOMET, we introduce new metrics for commonsense inference that more closely measure knowledge diversity and contextual relevance.
Our results on two different benchmarks, ComFact and WebNLG+, show that knowledge generated by DiffuCOMET achieves a better trade-off between commonsense diversity, contextual relevance and alignment to known gold references.
arXiv Detail & Related papers (2024-02-26T20:35:34Z) - Knowledge Graphs and Pre-trained Language Models enhanced Representation Learning for Conversational Recommender Systems [58.561904356651276]
We introduce the Knowledge-Enhanced Entity Representation Learning (KERL) framework to improve the semantic understanding of entities for conversational recommender systems.
KERL uses a knowledge graph and a pre-trained language model to improve the semantic understanding of entities.
KERL achieves state-of-the-art results in both recommendation and response generation tasks.
arXiv Detail & Related papers (2023-12-18T06:41:23Z) - Semi-Structured Chain-of-Thought: Integrating Multiple Sources of Knowledge for Improved Language Model Reasoning [10.839645156881573]
We introduce a novel semi-structured prompting approach that seamlessly integrates the model's parametric memory with unstructured knowledge from text documents and structured knowledge from knowledge graphs.
Experimental results on open-domain multi-hop question answering datasets demonstrate that our prompting method significantly surpasses existing techniques.
arXiv Detail & Related papers (2023-11-14T19:53:53Z) - Contextual Knowledge Learning For Dialogue Generation [13.671946960656467]
We present a novel approach to context and knowledge weighting as an integral part of model training.
We guide the model training through a Contextual Knowledge Learning process which involves Latent Vectors for context and knowledge.
arXiv Detail & Related papers (2023-05-29T16:54:10Z) - Commonsense and Named Entity Aware Knowledge Grounded Dialogue
Generation [20.283091595536835]
We present a novel open-domain dialogue generation model which effectively utilizes the large-scale commonsense and named entity based knowledge.
Our proposed model utilizes a multi-hop attention layer to preserve the most accurate and critical parts of the dialogue history and the associated knowledge.
Empirical results on two benchmark datasets demonstrate that our model significantly outperforms the state-of-the-art methods in terms of both automatic evaluation metrics and human judgment.
arXiv Detail & Related papers (2022-05-27T12:11:40Z) - Open-domain Dialogue Generation Grounded with Dynamic Multi-form
Knowledge Fusion [9.45662259790057]
This paper presents a new dialogue generation model, the Dynamic Multi-form Knowledge Fusion based Open-domain Chatting Machine (DMKCM).
DMKCM applies an indexed text (a virtual Knowledge Base) to locate relevant documents as 1st hop and then expands the content of the dialogue and its 1st hop using a commonsense knowledge graph to get apposite triples as 2nd hop.
Experimental results indicate the effectiveness of our method in terms of dialogue coherence and informativeness.
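DMKCM's two-hop retrieval as summarized above can be sketched as two lookups: a 1st hop over an indexed text (the virtual knowledge base) to locate relevant documents, and a 2nd hop over a commonsense knowledge graph to collect apposite triples. The index and graph below are toy stand-ins introduced for illustration, not the paper's resources.

```python
# Minimal sketch of a two-hop knowledge lookup in the spirit of DMKCM.
# DOC_INDEX and COMMONSENSE_KG are toy placeholder data structures.

DOC_INDEX = {  # virtual knowledge base: keyword -> document (1st hop)
    "guitar": "A guitar is a stringed instrument played by strumming.",
}

COMMONSENSE_KG = {  # concept -> (relation, target) pairs (2nd hop)
    "guitar": [("UsedFor", "playing music"), ("AtLocation", "band")],
}

def first_hop(dialogue_keywords):
    """Locate relevant documents from the indexed text."""
    return [DOC_INDEX[k] for k in dialogue_keywords if k in DOC_INDEX]

def second_hop(dialogue_keywords):
    """Expand dialogue concepts into commonsense triples."""
    triples = []
    for k in dialogue_keywords:
        for rel, tgt in COMMONSENSE_KG.get(k, []):
            triples.append((k, rel, tgt))
    return triples

docs = first_hop(["guitar"])
triples = second_hop(["guitar"])
```

A generation model would then condition on the dialogue history together with the retrieved documents and triples; that fusion step is outside the scope of this sketch.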
arXiv Detail & Related papers (2022-04-24T10:32:48Z) - Language Generation with Multi-Hop Reasoning on Commonsense Knowledge
Graph [124.45799297285083]
We argue that exploiting both the structural and semantic information of the knowledge graph facilitates commonsense-aware text generation.
We propose Generation with Multi-Hop Reasoning Flow (GRF) that enables pre-trained models with dynamic multi-hop reasoning on multi-relational paths extracted from the external commonsense knowledge graph.
arXiv Detail & Related papers (2020-09-24T13:55:32Z) - Improving Machine Reading Comprehension with Contextualized Commonsense
Knowledge [62.46091695615262]
We aim to extract commonsense knowledge to improve machine reading comprehension.
We propose to represent relations implicitly by situating structured knowledge in a context.
We employ a teacher-student paradigm to inject multiple types of contextualized knowledge into a student machine reader.
arXiv Detail & Related papers (2020-09-12T17:20:01Z) - Connecting the Dots: A Knowledgeable Path Generator for Commonsense
Question Answering [50.72473345911147]
This paper augments a general commonsense QA framework with a knowledgeable path generator.
By extrapolating over existing paths in a KG with a state-of-the-art language model, our generator learns to connect a pair of entities in text with a dynamic, and potentially novel, multi-hop relational path.
arXiv Detail & Related papers (2020-05-02T03:53:21Z) - Exploiting Structured Knowledge in Text via Graph-Guided Representation
Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z) - Inferential Text Generation with Multiple Knowledge Sources and
Meta-Learning [117.23425857240679]
We study the problem of generating inferential texts of events for a variety of commonsense relations, such as if-else relations.
Existing approaches typically use limited evidence from training examples and learn for each relation individually.
In this work, we use multiple knowledge sources as fuels for the model.
arXiv Detail & Related papers (2020-04-07T01:49:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.