Just-In-Time Objectives: A General Approach for Specialized AI Interactions
- URL: http://arxiv.org/abs/2510.14591v1
- Date: Thu, 16 Oct 2025 11:53:17 GMT
- Title: Just-In-Time Objectives: A General Approach for Specialized AI Interactions
- Authors: Michelle S. Lam, Omar Shaikh, Hallie Xu, Alice Guo, Diyi Yang, Jeffrey Heer, James A. Landay, Michael S. Bernstein
- Abstract summary: Large language models promise a broad set of functions, but when not given a specific objective, they default to milquetoast results such as drafting emails littered with cliches. We demonstrate that inferring the user's in-the-moment objective, then rapidly optimizing for that singular objective, enables LLMs to produce tools, interfaces, and responses that are more responsive and desired.
- Score: 55.20968270133027
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models promise a broad set of functions, but when not given a specific objective, they default to milquetoast results such as drafting emails littered with cliches. We demonstrate that inferring the user's in-the-moment objective, then rapidly optimizing for that singular objective, enables LLMs to produce tools, interfaces, and responses that are more responsive and desired. We contribute an architecture for automatically inducing just-in-time objectives by passively observing user behavior, then steering downstream AI systems through generation and evaluation against this objective. Inducing just-in-time objectives (e.g., "Clarify the abstract's research contribution") enables automatic generation of tools, e.g., those that critique a draft based on relevant HCI methodologies, anticipate related researchers' reactions, or surface ambiguous terminology. In a series of experiments (N=14, N=205) on participants' own tasks, JIT objectives enable LLM outputs that achieve 66-86% win rates over typical LLMs, and in-person use sessions (N=17) confirm that JIT objectives produce specialized tools unique to each participant.
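The abstract describes a two-stage loop: passively observe user behavior to induce a just-in-time objective, then steer generation by producing candidates and evaluating them against that objective. The sketch below is a minimal illustration of that loop, assuming a hypothetical `llm(prompt)` helper that wraps any chat-completion client; the prompts, function names, and best-of-n selection are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a just-in-time (JIT) objective loop, following the abstract above.
# Assumes a hypothetical llm(prompt) -> str helper; prompts, function names, and the
# best-of-n selection are illustrative assumptions, not the paper's implementation.

def llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (an API or local model client)."""
    raise NotImplementedError("Plug in an LLM client here.")

def induce_objective(observed_behavior: str) -> str:
    """Infer the user's in-the-moment objective from passively observed activity."""
    return llm(
        "Here is a record of the user's recent edits and actions:\n"
        f"{observed_behavior}\n"
        "State, in one sentence, the specific objective the user is pursuing right now."
    )

def generate_candidates(task: str, objective: str, n: int = 4) -> list[str]:
    """Generate several responses conditioned on the induced objective."""
    return [
        llm(f"Objective: {objective}\nTask: {task}\n"
            "Produce one response optimized for this objective.")
        for _ in range(n)
    ]

def score_against_objective(candidate: str, objective: str) -> float:
    """Ask the model to rate how well a candidate satisfies the objective (0-10)."""
    raw = llm(
        f"Objective: {objective}\nCandidate response:\n{candidate}\n"
        "On a scale of 0 to 10, how well does the candidate satisfy the objective? "
        "Reply with a single number."
    )
    try:
        return float(raw.strip())
    except ValueError:
        return 0.0

def jit_respond(observed_behavior: str, task: str) -> str:
    """Full loop: induce a JIT objective, generate candidates, keep the best one."""
    objective = induce_objective(observed_behavior)  # e.g. "Clarify the abstract's research contribution"
    candidates = generate_candidates(task, objective)
    return max(candidates, key=lambda c: score_against_objective(c, objective))
```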
Related papers
- Towards Agentic Intelligence for Materials Science [73.4576385477731]
This survey advances a unique pipeline-centric view that spans from corpus curation and pretraining to goal-conditioned agents interfacing with simulation and experimental platforms. To bridge communities and establish a shared frame of reference, we first present an integrated lens that aligns terminology, evaluation, and workflow stages across AI and materials science.
arXiv Detail & Related papers (2026-01-29T23:48:43Z) - LLM Agents Beyond Utility: An Open-Ended Perspective [50.809163251551894]
We augment a pretrained LLM agent with the ability to generate its own tasks, accumulate knowledge, and interact extensively with its environment. It can reliably follow complex multi-step instructions, store and reuse information across runs, and propose and solve its own tasks. It remains sensitive to prompt design, prone to repetitive task generation, and unable to form self-representations.
arXiv Detail & Related papers (2025-10-16T10:46:54Z) - Evaluating the Goal-Directedness of Large Language Models [17.08087240111954]
We evaluate goal-directedness on tasks that require information gathering, cognitive effort, and plan execution. Our evaluations of LLMs from Google DeepMind, OpenAI, and Anthropic show that goal-directedness is relatively consistent across tasks.
arXiv Detail & Related papers (2025-04-16T08:07:08Z) - Adaptive Tool Use in Large Language Models with Meta-Cognition Trigger [49.81945268343162]
We propose MeCo, an adaptive decision-making strategy for external tool use. MeCo quantifies metacognitive scores by capturing high-level cognitive signals in the representation space. MeCo is fine-tuning-free and incurs minimal cost.
arXiv Detail & Related papers (2025-02-18T15:45:01Z) - Deception in LLMs: Self-Preservation and Autonomous Goals in Large Language Models [0.0]
Recent advances in Large Language Models have incorporated planning and reasoning capabilities. This has reduced errors in mathematical and logical tasks while improving accuracy. Our study examines DeepSeek R1, a model trained to output reasoning tokens similar to OpenAI's o1.
arXiv Detail & Related papers (2025-01-27T21:26:37Z) - Goal-Conditioned Supervised Learning for Multi-Objective Recommendation [8.593384839118658]
Multi-objective learning endeavors to concurrently optimize multiple objectives using a single model. This paper introduces a Multi-Objective Goal-Conditioned Supervised Learning framework for automatically learning to achieve multiple objectives from offline sequential data.
arXiv Detail & Related papers (2024-12-12T03:47:40Z) - LLM With Tools: A Survey [0.0]
This paper delves into the methodology, challenges, and developments in teaching LLMs to use external tools.
We introduce a standardized paradigm for tool integration guided by a series of functions that map user instructions to actionable plans.
Our exploration reveals the various challenges encountered, such as tool invocation timing, selection accuracy, and the need for robust reasoning processes.
arXiv Detail & Related papers (2024-09-24T14:08:11Z) - Small LLMs Are Weak Tool Learners: A Multi-LLM Agent [73.54562551341454]
Large Language Model (LLM) agents significantly extend the capabilities of standalone LLMs.
We propose a novel approach that decomposes the aforementioned capabilities into a planner, caller, and summarizer.
This modular framework facilitates individual updates and the potential use of smaller LLMs for building each capability.
arXiv Detail & Related papers (2024-01-14T16:17:07Z) - Discrete Factorial Representations as an Abstraction for Goal-Conditioned Reinforcement Learning [99.38163119531745]
We show that applying a discretizing bottleneck can improve performance in goal-conditioned RL setups.
We experimentally demonstrate improved expected return on out-of-distribution goals, while still allowing goals with expressive structure to be specified.
arXiv Detail & Related papers (2022-11-01T03:31:43Z) - CACTUS: Detecting and Resolving Conflicts in Objective Functions [16.784454432715712]
In multi-objective optimization, conflicts among objectives and constraints are a major area of concern.
In this paper, we extend this line of work by prototyping a technique to visualize multi-objective objective functions.
We show that our technique helps users interactively specify meaningful objective functions by resolving potential conflicts for a classification task.
arXiv Detail & Related papers (2021-03-13T22:38:47Z)
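Among the related papers above, "Small LLMs Are Weak Tool Learners: A Multi-LLM Agent" describes splitting tool use across a planner, a caller, and a summarizer so that each role can be handled by a smaller model. The sketch below shows one way such a decomposition could be wired up; the role prompts, single-pass control flow, and function names are assumptions for illustration, not that paper's implementation.

```python
# One possible planner / caller / summarizer decomposition, as a sketch.
# Each role is any text-in / text-out model client and could be a different,
# smaller model; prompts and control flow here are illustrative assumptions.
from typing import Callable

LLMFn = Callable[[str], str]

def multi_llm_agent(task: str,
                    tools: dict[str, Callable[[str], str]],
                    planner: LLMFn, caller: LLMFn, summarizer: LLMFn) -> str:
    # Planner: decide which tool (if any) the task needs.
    plan = planner(
        f"Task: {task}\nAvailable tools: {', '.join(tools)}\n"
        "Write a short plan and name exactly one tool to use, or say 'none'."
    )
    tool_name = next((name for name in tools if name in plan), None)

    # Caller: format the concrete tool invocation and execute it.
    if tool_name is not None:
        tool_input = caller(f"Plan: {plan}\nWrite the exact input string for tool '{tool_name}'.")
        observation = tools[tool_name](tool_input)
    else:
        observation = "No tool was used."

    # Summarizer: turn the plan and tool result into a final answer.
    return summarizer(f"Task: {task}\nPlan: {plan}\nTool result: {observation}\nAnswer the task.")
```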
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.