Teacher-Student Learning on Complexity in Intelligent Routing
- URL: http://arxiv.org/abs/2402.15665v1
- Date: Sat, 24 Feb 2024 00:40:40 GMT
- Title: Teacher-Student Learning on Complexity in Intelligent Routing
- Authors: Shu-Ting Pi, Michael Yang, Yuying Zhu, Qun Liu
- Abstract summary: We develop a machine learning framework that predicts the complexity of customer contacts and routes them to appropriate agents.
Experiments show that such a framework is successful and can significantly improve customer experience.
We propose a useful metric called complexity AUC that evaluates the effectiveness of customer service at a statistical level.
- Score: 19.32977689162711
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Customer service is often the most time-consuming aspect for e-commerce
websites, with each contact typically taking 10-15 minutes. Effectively routing
customers to appropriate agents without transfers is therefore crucial for
e-commerce success. To this end, we have developed a machine learning framework
that predicts the complexity of customer contacts and routes them to
appropriate agents accordingly. The framework consists of two parts. First, we
train a teacher model to score the complexity of a contact based on the
post-contact transcripts. Then, we use the teacher model as a data annotator to
provide labels to train a student model that predicts the complexity based on
pre-contact data only. Our experiments show that such a framework is successful
and can significantly improve customer experience. We also propose a useful
metric called complexity AUC that evaluates the effectiveness of customer
service at a statistical level.
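The two-stage teacher-student pipeline described in the abstract can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the synthetic data, the logistic-regression models, and the use of a plain ROC AUC as a stand-in for the paper's "complexity AUC" metric are all hypothetical choices for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for contact data (all hypothetical): a latent
# "true complexity" drives both views of each contact.
n = 2000
latent = rng.normal(size=n)                              # unobserved complexity
post = latent[:, None] + 0.1 * rng.normal(size=(n, 3))   # post-contact view (e.g. transcript statistics)
pre = latent[:, None] + 0.8 * rng.normal(size=(n, 3))    # noisier pre-contact view (e.g. metadata)
is_complex = (latent > 0).astype(int)                    # labels available when training the teacher

# Stage 1: train the teacher to score complexity from post-contact data.
teacher = LogisticRegression().fit(post, is_complex)

# Stage 2: the teacher acts as a data annotator; the student learns to
# predict its labels from pre-contact data only.
pseudo_labels = teacher.predict(post)
student = LogisticRegression().fit(pre, pseudo_labels)

# Evaluation: here approximated with a plain ROC AUC of each model's
# score against the ground-truth labels.
teacher_auc = roc_auc_score(is_complex, teacher.predict_proba(post)[:, 1])
student_auc = roc_auc_score(is_complex, student.predict_proba(pre)[:, 1])
print(f"teacher AUC: {teacher_auc:.3f}, student AUC: {student_auc:.3f}")
```

The student never sees post-contact transcripts, so it can score a contact before routing; the teacher's richer post-contact view is used only to manufacture training labels.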
Related papers
- Active-Passive Federated Learning for Vertically Partitioned Multi-view Data [48.985955382701185]
We propose a flexible Active-Passive Federated learning (APFed) framework.
The active client initiates a learning task and is responsible for building the complete model, while the passive clients serve only as assistants.
In addition, we instantiate the APFed framework as two classification methods, employing the reconstruction loss and the contrastive loss on the passive clients, respectively.
arXiv Detail & Related papers (2024-09-06T08:28:35Z) - Contact Complexity in Customer Service [21.106010378612876]
Customers who reach out for customer service support may face a range of issues that vary in complexity.
To tackle this, a machine learning model that accurately predicts the complexity of customer issues is highly desirable.
We have developed a novel machine learning approach to define contact complexity.
arXiv Detail & Related papers (2024-02-24T00:09:27Z) - Retrieval as Attention: End-to-end Learning of Retrieval and Reading
within a Single Transformer [80.50327229467993]
We show that a single model trained end-to-end can achieve both competitive retrieval and QA performance.
We show that end-to-end adaptation significantly boosts its performance on out-of-domain datasets in both supervised and unsupervised settings.
arXiv Detail & Related papers (2022-12-05T04:51:21Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - FedKD: Communication Efficient Federated Learning via Knowledge
Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication efficient federated learning method based on knowledge distillation.
arXiv Detail & Related papers (2021-08-30T15:39:54Z) - Aggregated Customer Engagement Model [0.571097144710995]
E-commerce websites use machine learned ranking models to serve shopping results to customers.
New or under-impressed products do not have enough customer engagement signals and end up at a disadvantage when being ranked alongside popular products.
We propose a novel method for data curation that aggregates all customer engagements within a day for the same query to use as input training data.
arXiv Detail & Related papers (2021-08-17T20:58:10Z) - Federated Action Recognition on Heterogeneous Embedded Devices [16.88104153104136]
In this work, we enable clients with limited computing power to perform action recognition, a computationally heavy task.
We first perform model compression at the central server through knowledge distillation on a large dataset.
The fine-tuning is required because the limited data present in smaller datasets is not adequate for action recognition models to learn complex temporal features.
arXiv Detail & Related papers (2021-07-18T02:33:24Z) - A Semi-supervised Multi-task Learning Approach to Classify Customer
Contact Intents [6.267558847860381]
We build text-based intent classification models for a customer support service on an E-commerce website.
We improve the performance significantly by evolving the model from multiclass classification to semi-supervised multi-task learning.
In the evaluation, the final model boosts the average ROC AUC by almost 20 points compared to the baseline fine-tuned multiclass classification ALBERT model.
arXiv Detail & Related papers (2021-06-10T16:13:05Z) - Distantly Supervised Transformers For E-Commerce Product QA [5.460297795256275]
We propose a practical instant question answering (QA) system on product pages of e-commerce services.
For each user query, relevant community question answer (CQA) pairs are retrieved.
Our proposed transformer-based model learns a robust relevance function by jointly learning unified syntactic and semantic representations.
arXiv Detail & Related papers (2021-04-07T06:37:16Z) - Federated Semi-Supervised Learning with Inter-Client Consistency &
Disjoint Learning [78.88007892742438]
We study two essential scenarios of Federated Semi-Supervised Learning (FSSL) based on the location of the labeled data.
We propose a novel method to tackle the problems, which we refer to as Federated Matching (FedMatch).
arXiv Detail & Related papers (2020-06-22T09:43:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.