Text-Based Detection of On-Hold Scripts in Contact Center Calls
- URL: http://arxiv.org/abs/2407.09849v1
- Date: Sat, 13 Jul 2024 11:11:41 GMT
- Title: Text-Based Detection of On-Hold Scripts in Contact Center Calls
- Authors: Dmitrii Galimzianov, Viacheslav Vyshegorodtsev
- Abstract summary: Average hold time is a concern for call centers because it affects customer satisfaction.
This study presents a natural language processing model that detects on-hold phrases in customer service calls transcribed by automatic speech recognition technology.
- Score: 0.6138671548064356
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Average hold time is a concern for call centers because it affects customer satisfaction. Contact centers should instruct their agents to use special on-hold scripts to maintain positive interactions with clients. This study presents a natural language processing model that detects on-hold phrases in customer service calls transcribed by automatic speech recognition technology. The task of finding hold scripts in dialogue was formulated as a multiclass text classification problem with three mutually exclusive classes: scripts for putting a client on hold, scripts for returning to a client, and phrases irrelevant to on-hold scripts. We collected an in-house dataset of calls and labeled each dialogue turn in each call. We fine-tuned RuBERT on the dataset by exploring various hyperparameter sets and achieved high model performance. The developed model can help agent monitoring by providing a way to check whether an agent follows predefined on-hold scripts.
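The abstract above frames on-hold script detection as fine-tuning RuBERT for a three-class classification of dialogue turns (putting a client on hold, returning to a client, irrelevant). The sketch below is a minimal illustration of that kind of setup with Hugging Face Transformers, assuming the public DeepPavlov/rubert-base-cased checkpoint; the label names, toy example phrases, and hyperparameters are invented for illustration and are not the authors' actual dataset or configuration.

```python
# Hypothetical sketch of three-class on-hold turn classification with RuBERT.
# Label names, example phrases, and hyperparameters are assumptions for illustration.
import torch
from torch.utils.data import Dataset, DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["put_on_hold", "return_to_client", "irrelevant"]  # assumed label names

# Tiny toy set of ASR-style agent turns (invented, not the paper's in-house data).
TOY_TURNS = [
    ("please stay on the line while I check that for you", 0),
    ("thank you for waiting, I am back with you now", 1),
    ("could you confirm your order number please", 2),
]

class TurnDataset(Dataset):
    """Wraps (text, label) pairs of dialogue turns for sequence classification."""
    def __init__(self, pairs, tokenizer, max_len=64):
        self.pairs, self.tokenizer, self.max_len = pairs, tokenizer, max_len

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        text, label = self.pairs[idx]
        enc = self.tokenizer(text, truncation=True, padding="max_length",
                             max_length=self.max_len, return_tensors="pt")
        return {"input_ids": enc["input_ids"].squeeze(0),
                "attention_mask": enc["attention_mask"].squeeze(0),
                "labels": torch.tensor(label)}

def main():
    # DeepPavlov/rubert-base-cased is a public RuBERT checkpoint; the paper does not
    # specify the exact variant or hyperparameters, so these choices are assumptions.
    name = "DeepPavlov/rubert-base-cased"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=len(LABELS))

    loader = DataLoader(TurnDataset(TOY_TURNS, tokenizer), batch_size=2, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # assumed learning rate

    model.train()
    for _ in range(3):  # assumed number of epochs
        for batch in loader:
            optimizer.zero_grad()
            out = model(**batch)   # loss is computed from the provided labels
            out.loss.backward()
            optimizer.step()

    # Inference: label a single ASR-transcribed agent turn for monitoring.
    model.eval()
    with torch.no_grad():
        enc = tokenizer("one moment please I will place you on a brief hold",
                        return_tensors="pt", truncation=True)
        pred = model(**enc).logits.argmax(dim=-1).item()
    print(LABELS[pred])

if __name__ == "__main__":
    main()
```

In an agent-monitoring setting, each transcribed turn of a call would be scored this way, and turns predicted as hold or return phrases could then be checked against the predefined on-hold scripts.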
Related papers
- Conversational Rubert for Detecting Competitive Interruptions in ASR-Transcribed Dialogues [0.6138671548064356]
A system that automatically classifies interruptions can be used in call centers, specifically in the tasks of customer satisfaction monitoring and agent monitoring.
We developed a text-based interruption classification model by preparing an in-house dataset consisting of ASR-transcribed customer support telephone dialogues in Russian.
arXiv Detail & Related papers (2024-07-20T17:25:53Z)
- Identifying Speakers in Dialogue Transcripts: A Text-based Approach Using Pretrained Language Models [83.7506131809624]
We introduce an approach to identifying speaker names in dialogue transcripts, a crucial task for enhancing content accessibility and searchability in digital media archives.
We present a novel, large-scale dataset derived from the MediaSum corpus, encompassing transcripts from a wide range of media sources.
We propose novel transformer-based models tailored for SpeakerID, leveraging contextual cues within dialogues to accurately attribute speaker names.
arXiv Detail & Related papers (2024-07-16T18:03:58Z)
- Multi-User MultiWOZ: Task-Oriented Dialogues among Multiple Users [51.34484827552774]
We release the Multi-User MultiWOZ dataset: task-oriented dialogues among two users and one agent.
These dialogues reflect interesting dynamics of collaborative decision-making in task-oriented scenarios.
We propose a novel task of multi-user contextual query rewriting: to rewrite a task-oriented chat between two users as a concise task-oriented query.
arXiv Detail & Related papers (2023-10-31T14:12:07Z)
- Attribute Controlled Dialogue Prompting [31.09791656949115]
We present a novel, instance-specific prompt-tuning algorithm for dialogue generation.
Our method is superior to prompting baselines and comparable to fine-tuning with only 5%-6% of total parameters.
arXiv Detail & Related papers (2023-07-11T12:48:55Z)
- CGoDial: A Large-Scale Benchmark for Chinese Goal-oriented Dialog Evaluation [75.60156479374416]
CGoDial is a new challenging and comprehensive Chinese benchmark for Goal-oriented Dialog evaluation.
It contains 96,763 dialog sessions and 574,949 dialog turns in total, covering three datasets with different knowledge sources.
To bridge the gap between academic benchmarks and spoken dialog scenarios, we either collect data from real conversations or add spoken features to existing datasets via crowd-sourcing.
arXiv Detail & Related papers (2022-11-21T16:21:41Z)
- Manual-Guided Dialogue for Flexible Conversational Agents [84.46598430403886]
Building and using dialogue data efficiently, and deploying models across different domains at scale, are critical issues in building a task-oriented dialogue system.
We propose a novel manual-guided dialogue scheme, where the agent learns the tasks from both dialogue and manuals.
Our proposed scheme reduces the dependence of dialogue models on fine-grained domain ontology, and makes them more flexible to adapt to various domains.
arXiv Detail & Related papers (2022-08-16T08:21:12Z)
- GODEL: Large-Scale Pre-Training for Goal-Directed Dialog [119.1397031992088]
We introduce GODEL, a large pre-trained language model for dialog.
We show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups.
A novel feature of our evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses.
arXiv Detail & Related papers (2022-06-22T18:19:32Z)
- VScript: Controllable Script Generation with Audio-Visual Presentation [56.17400243061659]
VScript is a controllable pipeline that generates complete scripts including dialogues and scene descriptions.
We adopt a hierarchical structure, which generates the plot, then the script and its audio-visual presentation.
Experimental results show that our approach outperforms the baselines on both automatic and human evaluations.
arXiv Detail & Related papers (2022-03-01T09:43:02Z)
- Dialog Simulation with Realistic Variations for Training Goal-Oriented Conversational Systems [14.206866126142002]
Goal-oriented dialog systems enable users to complete specific goals like requesting information about a movie or booking a ticket.
We propose an approach for automatically creating a large corpus of annotated dialogs from a few thoroughly annotated sample dialogs and the dialog schema.
We achieve an 18-50% relative accuracy improvement on a held-out test set compared to a baseline dialog generation approach.
arXiv Detail & Related papers (2020-11-16T19:39:15Z)
- Turn-level Dialog Evaluation with Dialog-level Weak Signals for Bot-Human Hybrid Customer Service Systems [0.0]
We developed a machine learning approach that quantifies multiple aspects of the success or value of Customer Service contacts at any time during the interaction.
We show how it improves Amazon customer service quality in several applications.
arXiv Detail & Related papers (2020-10-25T19:36:23Z)
- Interview: A Large-Scale Open-Source Corpus of Media Dialog [11.28504775964698]
We introduce 'Interview': a large-scale (105K conversations) media dialog dataset collected from news interview transcripts.
Compared to existing large-scale proxies for conversational data, language models trained on our dataset exhibit better zero-shot out-of-domain performance.
'Interview' contains speaker role annotations for each turn, facilitating the development of engaging, responsive dialog systems.
arXiv Detail & Related papers (2020-04-07T02:44:50Z)