Intent Detection at Scale: Tuning a Generic Model using Relevant Intents
- URL: http://arxiv.org/abs/2309.08647v1
- Date: Fri, 15 Sep 2023 13:15:20 GMT
- Title: Intent Detection at Scale: Tuning a Generic Model using Relevant Intents
- Authors: Nichal Narotamo, David Aparicio, Tiago Mesquita, Mariana Almeida
- Abstract summary: This work proposes a system to scale intent predictions to various clients effectively, by combining a single generic model with a per-client list of relevant intents.
Our approach minimizes training and maintenance costs while providing a personalized experience for clients, allowing for seamless adaptation to changes in their relevant intents.
The final system exhibits significantly superior performance compared to industry-specific models, showcasing its flexibility and ability to cater to diverse client needs.
- Score: 0.5461938536945723
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurately predicting the intent of customer support requests is vital for
efficient support systems, enabling agents to quickly understand messages and
prioritize responses accordingly. While different approaches exist for intent
detection, maintaining separate client-specific or industry-specific models can
be costly and impractical as the client base expands.
This work proposes a system to scale intent predictions to various clients
effectively, by combining a single generic model with a per-client list of
relevant intents. Our approach minimizes training and maintenance costs while
providing a personalized experience for clients, allowing for seamless
adaptation to changes in their relevant intents. Furthermore, we propose a
strategy for using the clients' relevant intents as model features that proves
to be resilient to changes in the relevant intents of clients -- a common
occurrence in production environments.
The final system exhibits significantly superior performance compared to
industry-specific models, showcasing its flexibility and ability to cater to
diverse client needs.
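The core idea of the abstract -- one generic classifier whose output is restricted to each client's list of relevant intents -- can be sketched as follows. This is a minimal illustration, not the paper's implementation: the intent names, the keyword-based stand-in for the generic model, and the masking logic are all assumptions for demonstration.

```python
# Hypothetical sketch: a single generic intent classifier whose predictions
# are masked to a per-client list of relevant intents. The intent taxonomy
# and keyword scoring are illustrative placeholders, not from the paper.

GENERIC_INTENTS = ["refund", "shipping", "cancel_order", "tech_support", "billing"]

KEYWORDS = {
    "refund": ["refund", "money back"],
    "shipping": ["ship", "deliver", "tracking"],
    "cancel_order": ["cancel"],
    "tech_support": ["error", "crash", "bug"],
    "billing": ["invoice", "charge", "billing"],
}

def generic_model_scores(message):
    """Stand-in for the generic model: keyword hits over the full taxonomy."""
    text = message.lower()
    return {
        intent: float(any(kw in text for kw in kws))
        for intent, kws in KEYWORDS.items()
    }

def predict_intent(message, relevant_intents):
    """Score with the shared generic model, then keep only the client's
    relevant intents. Returns None if no relevant intent matches."""
    scores = generic_model_scores(message)
    masked = {i: s for i, s in scores.items() if i in relevant_intents}
    best = max(masked, key=masked.get, default=None)
    return best if best is not None and masked[best] > 0 else None

# A retail client that only handles a subset of the taxonomy:
print(predict_intent("Where is my delivery? Tracking shows nothing.",
                     ["refund", "shipping", "cancel_order"]))
```

Because the per-client list is just an input, a client adding or removing relevant intents requires no retraining -- which is the maintenance-cost argument the abstract makes.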
Related papers
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
arXiv Detail & Related papers (2024-05-22T12:27:24Z) - PraFFL: A Preference-Aware Scheme in Fair Federated Learning [5.9403570178003395]
We propose a Preference-aware scheme in Fair Federated Learning paradigm (called PraFFL) to generate preference-wise models in real time.
We theoretically prove that PraFFL can offer the optimal model tailored to an arbitrary preference of each client, and show its linear convergence.
arXiv Detail & Related papers (2024-04-13T11:40:05Z) - Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond [87.1712108247199]
Our goal is to establish a Unified paradigm for Multi-modal Personalization systems (UniMP).
We develop a generic and personalization generative framework, that can handle a wide range of personalized needs.
Our methodology enhances the capabilities of foundational language models for personalized tasks.
arXiv Detail & Related papers (2024-03-15T20:21:31Z) - Customer Churn Prediction Model using Explainable Machine Learning [0.0]
The key objective of the paper is to develop a customer churn prediction model that helps identify the customers most likely to churn.
We evaluated and analyzed the performance of various tree-based machine learning approaches and algorithms.
To improve model explainability and transparency, the paper proposes a novel approach to calculate Shapley values for possible combinations of features.
arXiv Detail & Related papers (2023-03-02T04:45:57Z) - FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z) - PaDPaF: Partial Disentanglement with Partially-Federated GANs [5.195669033269619]
Federated learning has become a popular machine learning paradigm with many potential real-life applications.
This work proposes a novel architecture combining global client-agnostic and local client-specific generative models.
We show that our proposed model achieves privacy and personalization by implicitly disentangling the globally consistent representation.
arXiv Detail & Related papers (2022-12-07T18:28:54Z) - Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on their interaction history on the platform.
Most sequential recommenders however lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z) - Personalizing or Not: Dynamically Personalized Federated Learning with Incentives [37.42347737911428]
We propose personalized federated learning (FL) for learning personalized models without sharing private data.
We introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL.
This technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better.
arXiv Detail & Related papers (2022-08-12T09:51:20Z) - Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving user privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z) - Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z) - Client Adaptation improves Federated Learning with Simulated Non-IID Clients [1.0896567381206714]
We present a federated learning approach for learning a client adaptable, robust model when data is non-identically and non-independently distributed (non-IID) across clients.
We show that adding learned client-specific conditioning improves model performance, and the approach is shown to work on balanced and imbalanced data sets from both audio and image domains.
arXiv Detail & Related papers (2020-07-09T13:48:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.