An Efficient Self-Learning Framework For Interactive Spoken Dialog Systems
- URL: http://arxiv.org/abs/2409.10515v1
- Date: Mon, 16 Sep 2024 17:59:50 GMT
- Title: An Efficient Self-Learning Framework For Interactive Spoken Dialog Systems
- Authors: Hitesh Tulsiani, David M. Chan, Shalini Ghosh, Garima Lalwani, Prabhat Pandey, Ankish Bansal, Sri Garimella, Ariya Rastrow, Björn Hoffmeister
- Abstract summary: We introduce a general framework for ASR in dialog systems. Compared to traditional training, our framework leads to relative WER reductions of close to 10% in real-world dialog systems.
- Score: 18.829793635104608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dialog systems, such as voice assistants, are expected to engage with users in complex, evolving conversations. Unfortunately, traditional automatic speech recognition (ASR) systems deployed in such applications are usually trained to recognize each turn independently and lack the ability to adapt to the conversational context or incorporate user feedback. In this work, we introduce a general framework for ASR in dialog systems that can go beyond learning from single-turn utterances and learn over time how to adapt to both explicit supervision and implicit user feedback present in multi-turn conversations. We accomplish that by leveraging advances in student-teacher learning and context-aware dialog processing, and designing contrastive self-supervision approaches with Ohm, a new online hard-negative mining approach. We show that leveraging our new framework compared to traditional training leads to relative WER reductions of close to 10% in real-world dialog systems, and up to 26% on public synthetic data.
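A note on the contrastive objective mentioned in the abstract: the paper describes contrastive self-supervision with Ohm, an online hard-negative mining approach, but the abstract gives no implementation details. The sketch below is only a generic illustration of the idea, assuming an InfoNCE-style loss over paired embeddings; the function name, tensor shapes, and hyperparameters are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch: InfoNCE-style contrastive loss with online hard-negative
# mining. This is NOT the paper's Ohm implementation; it only demonstrates the
# general idea of selecting the hardest in-batch negatives on the fly.
import torch
import torch.nn.functional as F

def contrastive_loss_with_online_hard_negatives(
    anchors: torch.Tensor,      # (B, D) e.g. context/turn embeddings
    positives: torch.Tensor,    # (B, D) matching views of the same turns
    candidates: torch.Tensor,   # (N, D) pool to mine negatives from (positives excluded)
    num_hard_negatives: int = 8,
    temperature: float = 0.1,
) -> torch.Tensor:
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    candidates = F.normalize(candidates, dim=-1)

    # Similarity of each anchor to every candidate in the pool.
    sim_to_pool = anchors @ candidates.T                          # (B, N)

    # Online hard-negative mining: keep only the most similar (hardest) candidates.
    hard_sims, _ = sim_to_pool.topk(num_hard_negatives, dim=-1)   # (B, K)

    # Positive similarity for each anchor.
    pos_sim = (anchors * positives).sum(dim=-1, keepdim=True)     # (B, 1)

    # InfoNCE: the positive (index 0) competes against the mined hard negatives.
    logits = torch.cat([pos_sim, hard_sims], dim=-1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

In the framework described above, such a loss would sit alongside student-teacher ASR training; how Ohm actually selects negatives and which representations it contrasts are specified in the paper itself, not here.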
Related papers
- WavChat: A Survey of Spoken Dialogue Models [66.82775211793547]
Recent advancements in spoken dialogue models, exemplified by systems like GPT-4o, have captured significant attention in the speech domain.
These advanced spoken dialogue models not only comprehend audio, music, and other speech-related features, but also capture stylistic and timbral characteristics in speech.
Despite the progress in spoken dialogue systems, there is a lack of comprehensive surveys that systematically organize and analyze these systems.
arXiv Detail & Related papers (2024-11-15T04:16:45Z)
- User Adaptive Language Learning Chatbots with a Curriculum [55.63893493019025]
We adapt lexically constrained decoding to a dialog system, encouraging it to include curriculum-aligned words and phrases in its generated utterances.
The evaluation result demonstrates that the dialog system with curriculum infusion improves students' understanding of target words and increases their interest in practicing English.
arXiv Detail & Related papers (2023-04-11T20:41:41Z)
- Channel-aware Decoupling Network for Multi-turn Dialogue Comprehension [81.47133615169203]
We propose compositional learning for holistic interaction across utterances beyond the sequential contextualization from PrLMs.
We employ domain-adaptive training strategies to help the model adapt to the dialogue domains.
Experimental results show that our method substantially boosts the strong PrLM baselines in four public benchmark datasets.
arXiv Detail & Related papers (2023-01-10T13:18:25Z)
- A Review of Dialogue Systems: From Trained Monkeys to Stochastic Parrots [0.0]
We aim to deploy artificial intelligence to build automated dialogue agents that can converse with humans.
We present a broad overview of methods developed to build dialogue systems over the years.
arXiv Detail & Related papers (2021-11-02T08:07:55Z)
- UniDS: A Unified Dialogue System for Chit-Chat and Task-oriented Dialogues [59.499965460525694]
We propose a unified dialogue system (UniDS) that handles both chit-chat and task-oriented dialogues.
We design a unified dialogue data schema, compatible for both chit-chat and task-oriented dialogues.
We train UniDS with mixed dialogue data from a pretrained chit-chat dialogue model.
arXiv Detail & Related papers (2021-10-15T11:56:47Z)
- Alexa Conversations: An Extensible Data-driven Approach for Building Task-oriented Dialogue Systems [21.98135285833616]
Traditional goal-oriented dialogue systems rely on various components such as natural language understanding, dialogue state tracking, policy learning and response generation.
We present a new approach for building goal-oriented dialogue systems that is scalable, as well as data efficient.
arXiv Detail & Related papers (2021-04-19T07:09:27Z)
- Action State Update Approach to Dialogue Management [16.602804535683553]
We propose the action state update approach (ASU) for utterance interpretation.
Our goal is to interpret referring expressions in user input without a domain-specific natural language understanding component.
With both user-simulated and interactive human evaluations, we show that the ASU approach successfully interprets user utterances in a dialogue system.
arXiv Detail & Related papers (2020-11-09T18:49:41Z)
- Conversation Learner -- A Machine Teaching Tool for Building Dialog Managers for Task-Oriented Dialog Systems [57.082447660944965]
Conversation Learner is a machine teaching tool for building dialog managers.
It enables dialog authors to create a dialog flow using familiar tools and converts that flow into a parametric model.
It allows dialog authors to improve the dialog manager over time by leveraging user-system dialog logs as training data.
arXiv Detail & Related papers (2020-04-09T00:10:54Z)
- Attention over Parameters for Dialogue Systems [69.48852519856331]
We learn a dialogue system that independently parameterizes different dialogue skills and learns to select and combine them through Attention over Parameters (AoP).
The experimental results show that this approach achieves competitive performance on a combined dataset of MultiWOZ, In-Car Assistant, and Persona-Chat.
arXiv Detail & Related papers (2020-01-07T03:10:42Z)