Knowledge Augmented BERT Mutual Network in Multi-turn Spoken Dialogues
- URL: http://arxiv.org/abs/2202.11299v1
- Date: Wed, 23 Feb 2022 04:03:35 GMT
- Title: Knowledge Augmented BERT Mutual Network in Multi-turn Spoken Dialogues
- Authors: Ting-Wei Wu and Biing-Hwang Juang
- Abstract summary: We propose to equip a BERT-based joint model with a knowledge attention module to mutually leverage dialogue contexts between two SLU tasks.
A gating mechanism is further utilized to filter out irrelevant knowledge triples and prevent them from distracting comprehension.
Experimental results on two complicated multi-turn dialogue datasets demonstrate the benefit of mutually modeling the two SLU tasks with filtered knowledge and dialogue contexts.
- Score: 6.4144180888492075
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern spoken language understanding (SLU) systems rely on sophisticated
semantic notions revealed in single utterances to detect intents and slots.
However, they lack the capability of modeling multi-turn dynamics within a
dialogue, particularly in long-term slot contexts. Without external knowledge,
relying on the limited linguistic cues within a word sequence may overlook
deep semantic information across dialogue turns. In this paper, we propose to
equip a BERT-based joint model with a knowledge attention module to mutually
leverage dialogue contexts between the two SLU tasks. A gating mechanism is
further utilized to filter out irrelevant knowledge triples and to prevent them
from distracting comprehension. Experimental results on two complicated
multi-turn dialogue datasets demonstrate that, by mutually modeling the two SLU
tasks with filtered knowledge and dialogue contexts, our approach achieves
considerable improvements over several competitive baselines.
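The abstract does not include code; below is a minimal sketch of the mechanism it describes, in which dialogue token representations attend over knowledge-triple embeddings and a sigmoid gate suppresses irrelevant triples. All module names, tensor shapes, the head count, and the residual fusion are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedKnowledgeAttention(nn.Module):
    """Illustrative sketch (not the authors' code): dialogue token
    representations attend over embeddings of retrieved knowledge triples,
    and a sigmoid gate damps knowledge that looks irrelevant."""

    def __init__(self, hidden_dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, tokens: torch.Tensor, triples: torch.Tensor) -> torch.Tensor:
        # tokens:  (batch, seq_len, hidden)   e.g. BERT outputs for the dialogue
        # triples: (batch, n_triples, hidden) embedded knowledge triples
        knowledge, _ = self.attn(tokens, triples, triples)  # knowledge attention
        g = torch.sigmoid(self.gate(torch.cat([tokens, knowledge], dim=-1)))
        return tokens + g * knowledge  # g near 0 filters out distracting triples
```

In the full model, the fused representations would presumably feed the two SLU heads (intent detection and slot filling); the paper itself should be consulted for the exact formulation.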
Related papers
- MIDAS: Multi-level Intent, Domain, And Slot Knowledge Distillation for Multi-turn NLU [9.047800457694656]
This paper introduces MIDAS, a novel approach leveraging multi-level intent, domain, and slot knowledge distillation for multi-turn NLU.
arXiv Detail & Related papers (2024-08-15T13:28:18Z)
- Bridging Information Gaps in Dialogues With Grounded Exchanges Using Knowledge Graphs [4.449835214520727]
We study the potential of large language models for conversational grounding.
Our approach involves annotating human conversations across five knowledge domains to create a new dialogue corpus called BridgeKG.
Our findings offer insights into how these models use in-context learning for conversational grounding tasks and common prediction errors.
arXiv Detail & Related papers (2024-08-02T08:07:15Z)
- Towards Spoken Language Understanding via Multi-level Multi-grained Contrastive Learning [50.1035273069458]
Spoken language understanding (SLU) is a core task in task-oriented dialogue systems.
We propose a multi-level multi-grained contrastive learning (MMCL) framework that applies contrastive learning at three levels: utterance, slot, and word (a loss sketch follows this entry).
Our framework achieves new state-of-the-art results on two public multi-intent SLU datasets.
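The summary above names the three granularities but not the losses; as one hedged illustration, a standard InfoNCE term could be computed at each level and summed. The batch-wise choice of positives and the equal weighting below are assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """Standard InfoNCE: row i of `positive` is the positive for row i of
    `anchor`; every other row in the batch serves as a negative."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature  # (batch, batch) similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)

# One plausible combination of the three granularities (weights assumed equal):
# loss = info_nce(utt_a, utt_b) + info_nce(slot_a, slot_b) + info_nce(word_a, word_b)
```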
arXiv Detail & Related papers (2024-05-31T14:34:23Z)
- Distilling Implicit Multimodal Knowledge into LLMs for Zero-Resource Dialogue Generation [22.606764428110566]
We propose the Visual Implicit Knowledge Distillation Framework (VIKDF) for enriched dialogue generation in zero-resource contexts.
VIKDF comprises two main stages: knowledge distillation and knowledge integration.
Our experiments show that VIKDF outperforms existing state-of-the-art models in generating high-quality dialogues.
arXiv Detail & Related papers (2024-05-16T14:21:33Z)
- Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models [52.24756457516834]
We propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of Large Language Models (LLMs).
This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks.
Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts.
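No prompt template is reproduced here; the following is a hypothetical zero-shot prompt in the spirit of Self-Explanation, where only the explain-each-utterance-then-answer structure is taken from the summary above and the wording is invented.

```python
def build_self_explanation_prompt(dialogue: list[str], task: str) -> str:
    """Hypothetical zero-shot prompt: ask the model to explain every
    utterance before performing the downstream task. The wording is
    invented; only the analyze-then-answer structure is from the paper."""
    turns = "\n".join(f"Turn {i + 1}: {u}" for i, u in enumerate(dialogue))
    return (
        f"Dialogue:\n{turns}\n\n"
        "First, briefly explain what each turn means and implies.\n"
        f"Then, based on your explanations, {task}"
    )

# Example usage:
# prompt = build_self_explanation_prompt(
#     ["User: I need a cheap hotel downtown.", "Agent: For which dates?"],
#     "state the user's intent and any slot values.")
```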
arXiv Detail & Related papers (2023-09-22T15:41:34Z)
- Dual Semantic Knowledge Composed Multimodal Dialog Systems [114.52730430047589]
We propose a novel multimodal task-oriented dialog system named MDS-S2.
It acquires context-related attribute and relation knowledge from the knowledge base.
We also devise a set of latent query variables to distill the semantic information from the composed response representation.
arXiv Detail & Related papers (2023-05-17T06:33:26Z)
- Multimodal Dialog Systems with Dual Knowledge-enhanced Generative Pretrained Language Model [63.461030694700014]
We propose a novel dual knowledge-enhanced generative pretrained language model for multimodal task-oriented dialog systems (DKMD).
The proposed DKMD consists of three key components: dual knowledge selection, dual knowledge-enhanced context learning, and knowledge-enhanced response generation.
Experiments on a public dataset verify the superiority of the proposed DKMD over state-of-the-art competitors.
arXiv Detail & Related papers (2022-07-16T13:02:54Z)
- Back to the Future: Bidirectional Information Decoupling Network for Multi-turn Dialogue Modeling [80.51094098799736]
We propose Bidirectional Information Decoupling Network (BiDeN) as a universal dialogue encoder.
BiDeN explicitly incorporates both the past and future contexts and can be generalized to a wide range of dialogue-related tasks.
Experimental results on datasets of different downstream tasks demonstrate the universality and effectiveness of our BiDeN.
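BiDeN's architecture is not detailed in this summary; purely to illustrate what incorporating past and future contexts separately can mean mechanically, the sketch below builds boolean attention masks that restrict each turn to past-only, future-only, or full context. The mask formulation is an assumption, not BiDeN's actual design.

```python
import torch

def turn_direction_masks(n_turns: int) -> dict[str, torch.Tensor]:
    """Illustration only: boolean masks where entry (i, j) is True when
    turn i may attend to turn j, restricting attention to past-only,
    future-only, or full dialogue context."""
    idx = torch.arange(n_turns)
    past = idx.unsqueeze(1) >= idx.unsqueeze(0)    # turn i sees turns j <= i
    future = idx.unsqueeze(1) <= idx.unsqueeze(0)  # turn i sees turns j >= i
    full = torch.ones(n_turns, n_turns, dtype=torch.bool)
    return {"past": past, "future": future, "full": full}
```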
arXiv Detail & Related papers (2022-04-18T03:51:46Z)
- Who says like a style of Vitamin: Towards Syntax-Aware Dialogue Summarization using Multi-task Learning [2.251583286448503]
We focus on the association between utterances from individual speakers and unique syntactic structures.
Speakers have unique textual styles that can carry linguistic information, much like a voiceprint.
We employ multi-task learning of both syntax-aware information and dialogue summarization.
arXiv Detail & Related papers (2021-09-29T05:30:39Z)
- A Context-Aware Hierarchical BERT Fusion Network for Multi-turn Dialog Act Detection [6.361198391681688]
CaBERT-SLU is a context-aware hierarchical BERT fusion network for multi-turn dialog act detection.
Our approach reaches new state-of-the-art (SOTA) performances in two complicated multi-turn dialogue datasets.
arXiv Detail & Related papers (2021-09-03T02:00:03Z)
- Masking Orchestration: Multi-task Pretraining for Multi-role Dialogue Representation Learning [50.5572111079898]
Multi-role dialogue understanding comprises a wide range of diverse tasks such as question answering, act classification, dialogue summarization etc.
While dialogue corpora are abundantly available, labeled data, for specific learning tasks, can be highly scarce and expensive.
In this work, we investigate dialogue context representation learning with various types of unsupervised pretraining tasks.
arXiv Detail & Related papers (2020-02-27T04:36:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.