Intent Recognition and Out-of-Scope Detection using LLMs in Multi-party Conversations
- URL: http://arxiv.org/abs/2507.22289v1
- Date: Tue, 29 Jul 2025 23:48:41 GMT
- Title: Intent Recognition and Out-of-Scope Detection using LLMs in Multi-party Conversations
- Authors: Galo Castillo-López, Gaël de Chalendar, Nasredine Semmar
- Abstract summary: We propose a hybrid approach that combines BERT and LLMs in zero- and few-shot settings to recognize intents and detect OOS utterances. We evaluate our method on multi-party conversation corpora and observe that sharing information from BERT outputs to LLMs improves system performance.
- Score: 0.6933787237427939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intent recognition is a fundamental component in task-oriented dialogue systems (TODS). Determining user intents and detecting whether an intent is Out-of-Scope (OOS) is crucial for TODS to provide reliable responses. However, traditional TODS require large amounts of annotated data. In this work we propose a hybrid approach that combines BERT and LLMs in zero- and few-shot settings to recognize intents and detect OOS utterances. Our approach leverages LLMs' generalization power and BERT's computational efficiency in such scenarios. We evaluate our method on multi-party conversation corpora and observe that sharing information from BERT outputs to LLMs improves system performance.
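As a reading aid, the hybrid pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the `bert-base-uncased` checkpoint, the intent label set, the prompt template, and the `llm_generate` hook are all assumed placeholders; only the overall pattern (a cheap BERT pass whose candidate intents and confidences are shared with an LLM that makes the final in-scope/OOS decision) follows the abstract.

```python
from transformers import pipeline

# Assumed setup: a BERT intent classifier served through the Hugging Face
# text-classification pipeline. "bert-base-uncased" is a placeholder; the
# paper's actual checkpoints and label set are not reproduced here.
bert_classifier = pipeline("text-classification", model="bert-base-uncased")

KNOWN_INTENTS = ["book_flight", "cancel_booking", "request_refund"]  # hypothetical label set


def llm_generate(prompt: str) -> str:
    """Stand-in for any instruction-tuned LLM call (API client or local model)."""
    raise NotImplementedError("plug in an LLM client here")


def classify_utterance(utterance: str, top_k: int = 3) -> str:
    # Step 1: a cheap BERT pass yields candidate intents with confidence scores.
    outputs = bert_classifier(utterance, top_k=top_k)
    # The pipeline nests results when given a batch; normalise to a flat list of dicts.
    candidates = outputs[0] if isinstance(outputs[0], list) else outputs

    # Step 2: share BERT's outputs with the LLM, which either confirms one of
    # the known intents or flags the utterance as Out-of-Scope (OOS).
    candidate_str = ", ".join(f"{c['label']} ({c['score']:.2f})" for c in candidates)
    prompt = (
        "Known intents: " + ", ".join(KNOWN_INTENTS) + "\n"
        f"BERT candidate intents with confidence: {candidate_str}\n"
        f"Utterance: {utterance}\n"
        "Answer with exactly one known intent, or OOS if none applies."
    )
    return llm_generate(prompt).strip()
```

In a few-shot setting the prompt would additionally carry a handful of labelled example utterances per intent; the sketch omits that part for brevity.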
Related papers
- Efficient Out-of-Scope Detection in Dialogue Systems via Uncertainty-Driven LLM Routing [6.579756339673344]
Out-of-scope (OOS) intent detection is a critical challenge in task-oriented dialogue systems (TODS). We propose a novel but simple modular framework that combines uncertainty modeling with fine-tuned large language models (LLMs) for efficient and accurate OOS detection.
arXiv Detail & Related papers (2025-07-02T09:51:41Z)
- LANID: LLM-assisted New Intent Discovery [18.15557766598695]
New Intent Discovery (NID) is a crucial task that aims to identify novel intents while maintaining the capability to recognize existing ones. Previous efforts to adapt TODS to new intents have struggled with inadequate semantic representation. We propose LANID, a framework that enhances the semantic representation of lightweight NID encoders with the guidance of Large Language Models.
arXiv Detail & Related papers (2025-03-31T05:34:32Z)
- Efficient Intent-Based Filtering for Multi-Party Conversations Using Knowledge Distillation from LLMs [0.3249879651054463]
Large language models (LLMs) have showcased remarkable capabilities in conversational AI. These models are resource-intensive, demanding substantial memory and computational power. We propose a cost-effective solution that filters conversational snippets of interest for LLM processing, tailored to the target downstream application.
arXiv Detail & Related papers (2025-03-21T17:34:37Z)
- Detecting Knowledge Boundary of Vision Large Language Models by Sampling-Based Inference [78.08901120841833]
We propose a method to detect the knowledge boundary of Visual Large Language Models (VLLMs). We show that our method successfully depicts a VLLM's knowledge boundary, based on which we are able to reduce indiscriminate retrieval while maintaining or improving performance.
arXiv Detail & Related papers (2025-02-25T09:32:08Z)
- Latent Factor Models Meets Instructions: Goal-conditioned Latent Factor Discovery without Task Supervision [50.45597801390757]
Instruct-LF is a goal-oriented latent factor discovery system. It integrates instruction-following ability with statistical models to handle noisy datasets.
arXiv Detail & Related papers (2025-02-21T02:03:08Z)
- Beyond Binary: Towards Fine-Grained LLM-Generated Text Detection via Role Recognition and Involvement Measurement [51.601916604301685]
Large language models (LLMs) generate content that can undermine trust in online discourse. Current methods often focus on binary classification, failing to address the complexities of real-world scenarios like human-LLM collaboration. To move beyond binary classification and address these challenges, we propose a new paradigm for detecting LLM-generated content.
arXiv Detail & Related papers (2024-10-18T08:14:10Z)
- Intent Detection in the Age of LLMs [3.755082744150185]
Intent detection is a critical component of task-oriented dialogue systems (TODS).
Traditional approaches relied on computationally efficient supervised sentence transformer encoder models.
The emergence of generative large language models (LLMs) with intrinsic world knowledge presents new opportunities to address these challenges.
arXiv Detail & Related papers (2024-10-02T15:01:55Z)
- Knowledge Graph-Enhanced Large Language Models via Path Selection [58.228392005755026]
Large Language Models (LLMs) have shown unprecedented performance in various real-world applications.
LLMs are known to generate factually inaccurate outputs, a.k.a. the hallucination problem.
We propose a principled framework, KELP, with three stages to handle the above problems.
arXiv Detail & Related papers (2024-06-19T21:45:20Z)
- LEARN: Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application [54.984348122105516]
We propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge.
arXiv Detail & Related papers (2024-05-07T04:00:30Z)
- RAR: Retrieving And Ranking Augmented MLLMs for Visual Recognition [78.97487780589574]
Multimodal Large Language Models (MLLMs) excel at classifying fine-grained categories.
This paper introduces a Retrieving And Ranking augmented method for MLLMs.
Our proposed approach not only addresses the inherent limitations in fine-grained recognition but also preserves the model's comprehensive knowledge base.
arXiv Detail & Related papers (2024-03-20T17:59:55Z)
- Knowledge-Retrieval Task-Oriented Dialog Systems with Semi-Supervision [22.249113574918034]
Most existing task-oriented dialog (TOD) systems track dialog states in terms of slots and values and use them to query a database to get relevant knowledge to generate responses.
In real-life applications, user utterances are noisier, and thus it is more difficult to accurately track dialog states and correctly secure relevant knowledge.
Inspired by such progress, we propose a retrieval-based method to enhance knowledge selection in TOD systems, which outperforms the traditional database query method for real-life dialogs.
arXiv Detail & Related papers (2023-05-22T16:29:20Z)
- Synergistic Interplay between Search and Large Language Models for Information Retrieval [141.18083677333848]
InteR allows RMs to expand knowledge in queries using LLM-generated knowledge collections.
InteR achieves overall superior zero-shot retrieval performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-12T11:58:15Z)