WEBDial, a Multi-domain, Multitask Statistical Dialogue Framework with
RDF
- URL: http://arxiv.org/abs/2401.03905v1
- Date: Mon, 8 Jan 2024 14:08:33 GMT
- Title: WEBDial, a Multi-domain, Multitask Statistical Dialogue Framework with
RDF
- Authors: Morgan Veyret, Jean-Baptiste Duchene, Kekeli Afonouvi, Quentin
Brabant, Gwenole Lecorve and Lina M. Rojas-Barahona
- Abstract summary: We present a dialogue framework that relies on a graph formalism by using RDF triples instead of slot-value pairs.
We show its applicability from simple to complex applications, by varying the complexity of domains and tasks.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Most available dialogue frameworks adopt a semantic
representation based on dialogue acts and slot-value pairs. Despite its
simplicity, this representation lacks expressivity, scalability, and
explainability. We present WEBDial: a dialogue
framework that relies on a graph formalism by using RDF triples instead of
slot-value pairs. We describe its overall architecture and the graph-based
semantic representation. We show its applicability from simple to complex
applications, by varying the complexity of domains and tasks: from single
domain and tasks to multiple domains and complex tasks.
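To make the contrast concrete, here is a minimal, hypothetical sketch (not WEBDial's actual API; all names are illustrative) of the same user goal expressed first as dialogue-act slot-value pairs and then as RDF-style (subject, predicate, object) triples, where relations between entities become explicit and queryable:

```python
# Hypothetical illustration only; names and schema are assumptions,
# not WEBDial's real data model.

# Classic slot-value representation: a flat dictionary per dialogue act.
slot_value = {
    "act": "inform",
    "slots": {"cuisine": "italian", "area": "centre"},
}

# Graph-based representation: (subject, predicate, object) triples.
# Cross-entity relations that a flat slot list cannot express
# (e.g. "centre partOf cambridge") become representable.
triples = [
    ("restaurant_1", "hasCuisine", "italian"),
    ("restaurant_1", "locatedIn", "centre"),
    ("centre", "partOf", "cambridge"),
]

def objects_of(graph, subject, predicate):
    """Return all objects matching the pattern (subject, predicate, ?)."""
    return [o for s, p, o in graph if s == subject and p == predicate]

print(objects_of(triples, "restaurant_1", "hasCuisine"))  # ['italian']
print(objects_of(triples, "centre", "partOf"))            # ['cambridge']
```

Because triples compose into a graph, constraints can chain across entities, which is one way a graph formalism gains expressivity over a flat slot list.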
Related papers
- InstructERC: Reforming Emotion Recognition in Conversation with Multi-task Retrieval-Augmented Large Language Models [9.611864685207056]
We propose a novel approach, InstructERC, to reformulate the emotion recognition task from a discriminative framework to a generative framework based on Large Language Models (LLMs)
InstructERC makes three significant contributions:
- It introduces a simple yet effective retrieval template module, which helps the model explicitly integrate multi-granularity dialogue supervision information.
- It introduces two additional emotion alignment tasks, speaker identification and emotion prediction, to implicitly model dialogue role relationships and future emotional tendencies in conversations.
- It unifies emotion labels across benchmarks through the feeling wheel to fit real application scenarios.
arXiv Detail & Related papers (2023-09-21T09:22:07Z)
- 'What are you referring to?' Evaluating the Ability of Multi-Modal Dialogue Models to Process Clarificational Exchanges [65.03196674816772]
Referential ambiguities arise in dialogue when a referring expression does not uniquely identify the intended referent for the addressee.
Addressees usually detect such ambiguities immediately and work with the speaker to repair them through meta-communicative Clarification Exchanges (CEs): a Clarification Request (CR) followed by a response.
Here, we argue that the ability to generate and respond to CRs imposes specific constraints on the architecture and objective functions of multi-modal, visually grounded dialogue models.
arXiv Detail & Related papers (2023-07-28T13:44:33Z)
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- PoE: a Panel of Experts for Generalized Automatic Dialogue Assessment [58.46761798403072]
A model-based automatic dialogue evaluation metric (ADEM) is expected to perform well across multiple domains.
Despite significant progress, an ADEM that works well in one domain does not necessarily generalize to another.
We propose a Panel of Experts (PoE) network that consists of a shared transformer encoder and a collection of lightweight adapters.
arXiv Detail & Related papers (2022-12-18T02:26:50Z)
- Manual-Guided Dialogue for Flexible Conversational Agents [84.46598430403886]
How to build and use dialogue data efficiently, and how to deploy models in different domains at scale, are critical issues in building a task-oriented dialogue system.
We propose a novel manual-guided dialogue scheme, where the agent learns the tasks from both dialogue and manuals.
Our proposed scheme reduces the dependence of dialogue models on fine-grained domain ontology, and makes them more flexible to adapt to various domains.
arXiv Detail & Related papers (2022-08-16T08:21:12Z)
- Dialogue Meaning Representation for Task-Oriented Dialogue Systems [51.91615150842267]
We propose Dialogue Meaning Representation (DMR), a flexible and easily extendable representation for task-oriented dialogue.
Our representation contains a set of nodes and edges with an inheritance hierarchy to represent rich compositional semantics and task-specific concepts.
We propose two evaluation tasks to evaluate different machine learning based dialogue models, and further propose a novel coreference resolution model GNNCoref for the graph-based coreference resolution task.
arXiv Detail & Related papers (2022-04-23T04:17:55Z)
- Show, Don't Tell: Demonstrations Outperform Descriptions for Schema-Guided Task-Oriented Dialogue [27.43338545216015]
Show, Don't Tell is a prompt format for seq2seq modeling which uses a short labeled example dialogue to show the semantics of schema elements.
While requiring similar effort from service developers, we show that using short examples as schema representations with large language models results in stronger performance and better generalization.
arXiv Detail & Related papers (2022-04-08T23:27:18Z)
- Meta-Context Transformers for Domain-Specific Response Generation [4.377737808397113]
We present DSRNet, a transformer-based model for dialogue response generation by reinforcing domain-specific attributes.
We study the use of DSRNet in a multi-turn multi-interlocutor environment for domain-specific response generation.
Our model significantly improves over the state of the art for multi-turn dialogue systems, as measured by BLEU and semantic similarity (BERTScore).
arXiv Detail & Related papers (2020-10-12T09:49:27Z)
- UniConv: A Unified Conversational Neural Architecture for Multi-domain Task-oriented Dialogues [101.96097419995556]
"UniConv" is a novel unified neural architecture for end-to-end conversational systems in task-oriented dialogues.
We conduct comprehensive experiments in dialogue state tracking, context-to-text, and end-to-end settings on the MultiWOZ2.1 benchmark.
arXiv Detail & Related papers (2020-04-29T16:28:22Z)
- MA-DST: Multi-Attention Based Scalable Dialog State Tracking [13.358314140896937]
Dialog State Tracking (DST) enables dialog agents to provide a natural language interface for users to complete their goals.
To enable accurate multi-domain DST, the model needs to encode dependencies between past utterances and slot semantics.
We introduce a novel architecture for this task to encode the conversation history and slot semantics.
arXiv Detail & Related papers (2020-02-07T05:34:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.