DiscoverLLM: From Executing Intents to Discovering Them
- URL: http://arxiv.org/abs/2602.03429v1
- Date: Tue, 03 Feb 2026 11:51:46 GMT
- Title: DiscoverLLM: From Executing Intents to Discovering Them
- Authors: Tae Soo Kim, Yoonjoo Lee, Jaesang Yu, John Joon Young Chung, Juho Kim
- Abstract summary: We introduce DiscoverLLM, a framework that trains Large Language Models to help users form and discover intents. Resulting models learn to collaborate with users by adaptively diverging (i.e., exploring options) when intents are unclear. In a user study with 75 human participants, DiscoverLLM improved conversation satisfaction and efficiency compared to baselines.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To handle ambiguous and open-ended requests, Large Language Models (LLMs) are increasingly trained to interact with users to surface intents they have not yet expressed (e.g., ask clarification questions). However, users are often ambiguous because they have not yet formed their intents: they must observe and explore outcomes to discover what they want. Simply asking "what kind of tone do you want?" fails when users themselves do not know. We introduce DiscoverLLM, a novel and generalizable framework that trains LLMs to help users form and discover their intents. Central to our approach is a novel user simulator that models cognitive state with a hierarchy of intents that progressively concretize as the model surfaces relevant options -- where the degree of concretization serves as a reward signal that models can be trained to optimize. Resulting models learn to collaborate with users by adaptively diverging (i.e., explore options) when intents are unclear, and converging (i.e., refine and implement) when intents concretize. Across proposed interactive benchmarks in creative writing, technical writing, and SVG drawing, DiscoverLLM achieves over 10% higher task performance while reducing conversation length by up to 40%. In a user study with 75 human participants, DiscoverLLM improved conversation satisfaction and efficiency compared to baselines.
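The abstract's central mechanism can be illustrated with a small sketch: a user simulator holds a hierarchy of intents, each node of which "concretizes" once the model surfaces a relevant option, and the reward for a model turn is the resulting gain in concretization. This is a minimal, hypothetical reconstruction from the abstract alone; the names (`IntentNode`, `turn_reward`) and the `matches` relevance judgment are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the concretization-based reward described in the
# abstract: intents form a hierarchy, a node becomes concrete when the
# model surfaces a relevant option, and the per-turn reward is the
# increase in the fraction of concrete intents.
from dataclasses import dataclass, field


@dataclass
class IntentNode:
    description: str
    concrete: bool = False
    children: list = field(default_factory=list)


def concretized_fraction(root: IntentNode) -> float:
    """Fraction of nodes in the intent hierarchy that are concrete."""
    nodes, stack = [], [root]
    while stack:
        node = stack.pop()
        nodes.append(node)
        stack.extend(node.children)
    return sum(n.concrete for n in nodes) / len(nodes)


def turn_reward(root: IntentNode, surfaced_options, matches) -> float:
    """Reward for one model turn: gain in concretization after options
    are surfaced. `matches(option, node)` stands in for the simulator's
    judgment that an option is relevant to a latent intent."""
    before = concretized_fraction(root)
    stack = [root]
    while stack:
        node = stack.pop()
        if not node.concrete and any(matches(o, node) for o in surfaced_options):
            node.concrete = True
        stack.extend(node.children)
    return concretized_fraction(root) - before
```

Under this sketch, a turn that explores relevant options when intents are vague earns positive reward, while turns that surface nothing relevant earn zero, matching the diverge/converge behavior the abstract describes.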
Related papers
- Towards Realistic Personalization: Evaluating Long-Horizon Preference Following in Personalized User-LLM Interactions [50.70965714314064]
Large Language Models (LLMs) are increasingly serving as personal assistants, where users share complex and diverse preferences over extended interactions. This work proposes RealPref, a benchmark for evaluating realistic preference-following in personalized user-LLM interactions.
arXiv Detail & Related papers (2026-03-04T15:42:43Z) - CLEAR-KGQA: Clarification-Enhanced Ambiguity Resolution for Knowledge Graph Question Answering [13.624962763072899]
KGQA systems typically assume user queries are unambiguous, an assumption that rarely holds in real-world applications. We propose a novel framework that dynamically handles both entity ambiguity (e.g., distinguishing between entities with similar names) and intent ambiguity (e.g., clarifying different interpretations of user queries) through interactive clarification.
arXiv Detail & Related papers (2025-04-13T17:34:35Z) - Mind the Gap! Static and Interactive Evaluations of Large Audio Models [55.87220295533817]
Large Audio Models (LAMs) are designed to power voice-native experiences. This study introduces an interactive approach to evaluate LAMs and collects 7,500 LAM interactions from 484 participants.
arXiv Detail & Related papers (2025-02-21T20:29:02Z) - IntentGPT: Few-shot Intent Discovery with Large Language Models [9.245106106117317]
We develop a model capable of identifying new intents as they emerge.
IntentGPT is a training-free method that effectively prompts Large Language Models (LLMs) to discover new intents with minimal labeled data.
Our experiments show that IntentGPT outperforms previous methods that require extensive domain-specific data and fine-tuning.
arXiv Detail & Related papers (2024-11-16T02:16:59Z) - Unified Dual-Intent Translation for Joint Modeling of Search and Recommendation [44.59113848489519]
We propose a novel model named Unified Dual-Intent Translation for joint modeling of Search and Recommendation (UDITSR).
To accurately simulate users' demand intents in recommendation, we utilize real queries from search data as supervision information to guide its generation.
Extensive experiments demonstrate that UDITSR outperforms SOTA baselines both in search and recommendation tasks.
arXiv Detail & Related papers (2024-07-01T02:36:03Z) - RecExplainer: Aligning Large Language Models for Explaining Recommendation Models [50.74181089742969]
Large language models (LLMs) have demonstrated remarkable intelligence in understanding, reasoning, and instruction following.
This paper presents the initial exploration of using LLMs as surrogate models to explain black-box recommender models.
To facilitate an effective alignment, we introduce three methods: behavior alignment, intention alignment, and hybrid alignment.
arXiv Detail & Related papers (2023-11-18T03:05:43Z) - Knowledge-Augmented Large Language Models for Personalized Contextual Query Suggestion [16.563311988191636]
We construct an entity-centric knowledge store for each user based on their search and browsing activities on the web.
This knowledge store is light-weight, since it only produces user-specific aggregate projections of interests and knowledge onto public knowledge graphs.
arXiv Detail & Related papers (2023-11-10T01:18:47Z) - Eliciting Human Preferences with Language Models [56.68637202313052]
Language models (LMs) can be directed to perform target tasks by using labeled examples or natural language prompts.
We propose to use *LMs themselves* to guide the task specification process.
We study GATE in three domains: email validation, content recommendation, and moral reasoning.
arXiv Detail & Related papers (2023-10-17T21:11:21Z) - Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on his/her interaction history on the platform.
Most sequential recommenders however lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z) - DIGMN: Dynamic Intent Guided Meta Network for Differentiated User Engagement Forecasting in Online Professional Social Platforms [32.70471436337077]
A major reason for the differences in user engagement patterns is that users have different intents.
We propose a Dynamic Intent Guided Meta Network (DIGMN) which can explicitly model user intent varying with time.
Our method outperforms state-of-the-art baselines significantly.
arXiv Detail & Related papers (2022-10-22T09:57:27Z) - Intent Contrastive Learning for Sequential Recommendation [86.54439927038968]
We introduce a latent variable to represent users' intents and learn the distribution function of the latent variable via clustering.
We propose to leverage the learned intents in SR models via contrastive SSL, which maximizes the agreement between a view of a sequence and its corresponding intent.
Experiments conducted on four real-world datasets demonstrate the superiority of the proposed learning paradigm.
arXiv Detail & Related papers (2022-02-05T09:24:13Z)
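The intent contrastive learning idea summarized above can be sketched concretely: sequence embeddings are clustered to obtain intent prototypes, and an InfoNCE-style loss pulls each sequence toward its assigned prototype while pushing it away from the others. This is an illustrative NumPy sketch of that general recipe, not the paper's implementation; the function names and the temperature value are assumptions.

```python
# Illustrative sketch of intent contrastive learning for sequential
# recommendation: cluster sequence embeddings into intent prototypes,
# then apply an InfoNCE-style loss between each sequence and its intent.
import numpy as np


def assign_intents(seq_emb: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Assign each sequence embedding to its nearest intent prototype
    (the cluster-assignment step)."""
    dists = ((seq_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)


def intent_contrastive_loss(seq_emb, prototypes, assignments, temp=0.1):
    """InfoNCE over intent prototypes: maximize agreement between a
    sequence view and its corresponding intent, contrasted against the
    other prototypes."""
    # Cosine similarities between sequences and all prototypes.
    s = seq_emb / np.linalg.norm(seq_emb, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = s @ p.T / temp
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of each sequence's assigned intent.
    return -log_probs[np.arange(len(s)), assignments].mean()
```

In a full pipeline the prototypes would come from periodic k-means over the encoder's sequence representations, and this loss would be added to the next-item prediction objective; the sketch isolates only the contrastive term.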
This list is automatically generated from the titles and abstracts of the papers on this site.