Dynamic Knowledge Graph-based Dialogue Generation with Improved
Adversarial Meta-Learning
- URL: http://arxiv.org/abs/2004.08833v1
- Date: Sun, 19 Apr 2020 12:27:49 GMT
- Title: Dynamic Knowledge Graph-based Dialogue Generation with Improved
Adversarial Meta-Learning
- Authors: Hongcai Xu, Junpeng Bao, Gaojie Zhang
- Abstract summary: This paper proposes a dynamic knowledge graph-based dialogue generation method with improved adversarial meta-learning (KDAD). KDAD formulates dynamic knowledge triples as an adversarial-attack problem and incorporates the objective of quickly adapting to dynamic knowledge-aware dialogue generation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph-based dialogue systems are capable of generating more
informative responses and can implement sophisticated reasoning mechanisms.
However, these models do not take into account the sparseness and
incompleteness of knowledge graphs (KGs), and current dialogue models cannot be
applied to dynamic KGs. This paper proposes a dynamic knowledge graph-based
dialogue generation method with improved adversarial meta-learning (KDAD). KDAD
formulates dynamic knowledge triples as an adversarial-attack problem and
incorporates the objective of quickly adapting to dynamic knowledge-aware
dialogue generation. We train a knowledge graph-based dialogue model with
improved ADML using minimal training samples. The model can initialize its
parameters and adapt to previously unseen knowledge, so that training can be
completed quickly from only a few knowledge triples. We show that our model
significantly outperforms other baselines, and we demonstrate that it adapts
extremely quickly and well to dynamic knowledge graph-based dialogue
generation.
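The fast-adaptation objective behind meta-learning approaches of this kind can be illustrated with a first-order MAML-style loop. The sketch below is a toy with scalar parameters and quadratic per-task losses standing in for per-KG dialogue losses; the names and setup are illustrative assumptions, not the paper's actual model or adversarial component:

```python
# Toy first-order MAML (FOMAML) sketch: each "task" is a quadratic loss
# loss_t(theta) = (theta - c_t)**2 standing in for a per-KG dialogue loss.
# All names and hyperparameters here are illustrative, not from the paper.

def grad(theta, c):
    """Gradient of (theta - c)**2 with respect to theta."""
    return 2.0 * (theta - c)

def fomaml(tasks, theta=0.0, inner_lr=0.1, meta_lr=0.05, meta_steps=100):
    for _ in range(meta_steps):
        meta_grad = 0.0
        for c in tasks:
            adapted = theta - inner_lr * grad(theta, c)  # one inner-loop step
            meta_grad += grad(adapted, c)                # first-order approx.
        theta -= meta_lr * meta_grad / len(tasks)
    return theta

# The meta-initialisation ends up where one gradient step adapts well
# to any single task, including a previously unseen one.
theta0 = fomaml(tasks=[-1.0, 0.5, 2.0])
adapted = theta0 - 0.1 * grad(theta0, 2.0)  # one-step adaptation to one task
```

After meta-training, a single inner step from `theta0` moves markedly closer to each task's optimum, which is the "quick adaptation from a few triples" property the abstract describes.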
Related papers
- Knowledge acquisition for dialogue agents using reinforcement learning on graph representations [2.3851115175441193]
We develop an artificial agent motivated to augment its knowledge base beyond its initial training.
The agent actively participates in dialogues with other agents, strategically acquiring new information.
We show that policies can be learned using reinforcement learning to select effective graph patterns during an interaction.
arXiv Detail & Related papers (2024-06-27T19:28:42Z) - Improving the Robustness of Knowledge-Grounded Dialogue via Contrastive
Learning [71.8876256714229]
We propose an entity-based contrastive learning framework for improving the robustness of knowledge-grounded dialogue systems.
Our method achieves new state-of-the-art performance in terms of automatic evaluation scores.
arXiv Detail & Related papers (2024-01-09T05:16:52Z) - Knowledge Graph-Augmented Language Models for Knowledge-Grounded
Dialogue Generation [58.65698688443091]
We propose SUbgraph Retrieval-augmented GEneration (SURGE), a framework for generating context-relevant and knowledge-grounded dialogues with knowledge graphs (KGs).
Our framework first retrieves the relevant subgraph from the KG and then enforces consistency across facts by perturbing their word embeddings conditioned on the retrieved subgraph.
We validate our SURGE framework on OpendialKG and KOMODIS datasets, showing that it generates high-quality dialogues that faithfully reflect the knowledge from KG.
arXiv Detail & Related papers (2023-05-30T08:36:45Z) - Building Knowledge-Grounded Dialogue Systems with Graph-Based Semantic Modeling [43.0554223015728]
The knowledge-grounded dialogue task aims to generate responses that convey information from given knowledge documents.
We propose a novel graph structure, Grounded Graph, that models the semantic structure of both dialogue and knowledge.
We also propose a Grounded Graph Aware Transformer to enhance knowledge-grounded response generation.
arXiv Detail & Related papers (2022-04-27T03:31:46Z) - Grounding Dialogue Systems via Knowledge Graph Aware Decoding with
Pre-trained Transformers [3.477557431978457]
Knowledge graphs (KGs) can help a dialogue system produce knowledge-grounded responses.
This paper proposes a novel architecture for integrating KGs into the response generation process by training a BERT model.
The k-hop subgraph of the KG is incorporated into the model during training and inference using the graph Laplacian.
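As a minimal sketch of the k-hop subgraph step just described: a plain BFS extracts the entities within k hops of the seed entities, and the unnormalised Laplacian L = D - A is built on the induced subgraph. This only illustrates the graph side; how the Laplacian is fed into BERT in the cited paper is not reproduced here:

```python
import numpy as np
from collections import deque

def k_hop_nodes(adj, seeds, k):
    """Nodes reachable from the seed entities within k hops (BFS)."""
    seen, frontier = set(seeds), deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nb in adj.get(node, []):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, depth + 1))
    return sorted(seen)

def laplacian(adj, nodes):
    """Unnormalised graph Laplacian L = D - A of the induced subgraph."""
    idx = {n: i for i, n in enumerate(nodes)}
    A = np.zeros((len(nodes), len(nodes)))
    for n in nodes:
        for nb in adj.get(n, []):
            if nb in idx:  # keep only edges inside the subgraph
                A[idx[n], idx[nb]] = A[idx[nb], idx[n]] = 1.0
    return np.diag(A.sum(axis=1)) - A

# Toy KG as an adjacency list: the chain a - b - c - d.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
nodes = k_hop_nodes(adj, ["a"], k=2)   # 2-hop neighbourhood of "a"
L = laplacian(adj, nodes)
```

Each row of L sums to zero by construction, which is the defining property of the combinatorial Laplacian.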
arXiv Detail & Related papers (2021-03-30T12:36:00Z) - GRADE: Automatic Graph-Enhanced Coherence Metric for Evaluating
Open-Domain Dialogue Systems [133.13117064357425]
We propose a new evaluation metric GRADE, which stands for Graph-enhanced Representations for Automatic Dialogue Evaluation.
Specifically, GRADE incorporates both coarse-grained utterance-level contextualized representations and fine-grained topic-level graph representations to evaluate dialogue coherence.
Experimental results show that our GRADE significantly outperforms other state-of-the-art metrics on measuring diverse dialogue models.
arXiv Detail & Related papers (2020-10-08T14:07:32Z) - GraphDialog: Integrating Graph Knowledge into End-to-End Task-Oriented
Dialogue Systems [9.560436630775762]
End-to-end task-oriented dialogue systems aim to generate system responses directly from plain text inputs, which raises two challenges.
One is how to effectively incorporate external knowledge bases (KBs) into the learning framework; the other is how to accurately capture the semantics of the dialogue history.
We address these two challenges by exploiting the graph structural information in the knowledge base and in the dependency parsing tree of the dialogue.
arXiv Detail & Related papers (2020-10-04T00:04:40Z) - Language Generation with Multi-Hop Reasoning on Commonsense Knowledge
Graph [124.45799297285083]
We argue that exploiting both the structural and semantic information of the knowledge graph facilitates commonsense-aware text generation.
We propose Generation with Multi-Hop Reasoning Flow (GRF) that enables pre-trained models with dynamic multi-hop reasoning on multi-relational paths extracted from the external commonsense knowledge graph.
arXiv Detail & Related papers (2020-09-24T13:55:32Z) - Enhancing Dialogue Generation via Multi-Level Contrastive Learning [57.005432249952406]
We propose a multi-level contrastive learning paradigm to model the fine-grained quality of the responses with respect to the query.
A Rank-aware (RC) network is designed to construct the multi-level contrastive optimization objectives.
We build a Knowledge Inference (KI) component to capture the keyword knowledge from the reference during training and exploit such information to encourage the generation of informative words.
arXiv Detail & Related papers (2020-09-19T02:41:04Z) - Knowledge-graph based Proactive Dialogue Generation with Improved
Meta-Learning [0.0]
We propose a knowledge graph based proactive dialogue generation model (KgDg) with three components.
We formulate knowledge-triplet embedding and selection as a sentence-embedding problem to better capture semantic information.
Our improved MAML algorithm is capable of learning general features from a limited number of knowledge graphs.
arXiv Detail & Related papers (2020-04-19T08:41:12Z) - Low-Resource Knowledge-Grounded Dialogue Generation [74.09352261943913]
We consider knowledge-grounded dialogue generation under a natural assumption that only limited training examples are available.
We devise a disentangled response decoder in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model.
With only 1/8 of the training data, our model achieves state-of-the-art performance and generalizes well to out-of-domain knowledge.
arXiv Detail & Related papers (2020-02-24T16:20:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.