When Should an AI Act? A Human-Centered Model of Scene, Context, and Behavior for Agentic AI Design
- URL: http://arxiv.org/abs/2602.22814v1
- Date: Thu, 26 Feb 2026 09:56:37 GMT
- Title: When Should an AI Act? A Human-Centered Model of Scene, Context, and Behavior for Agentic AI Design
- Authors: Soyoung Jung, Daehoo Yoon, Sung Gyu Koh, Young Hwan Kim, Yehan Ahn, Sung Park
- Abstract summary: Agentic AI increasingly intervenes proactively by inferring users' situations from contextual data. We propose a conceptual model that reframes behavior as an interpretive outcome integrating Scene, Context, and Human Behavior Factors. We derive five agent design principles that guide intervention depth, timing, intensity, and restraint.
- Score: 0.44743648495907423
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Agentic AI increasingly intervenes proactively by inferring users' situations from contextual data yet often fails for lack of principled judgment about when, why, and whether to act. We address this gap by proposing a conceptual model that reframes behavior as an interpretive outcome integrating Scene (observable situation), Context (user-constructed meaning), and Human Behavior Factors (determinants shaping behavioral likelihood). Grounded in multidisciplinary perspectives across the humanities, social sciences, HCI, and engineering, the model separates what is observable from what is meaningful to the user and explains how the same scene can yield different behavioral meanings and outcomes. To translate this lens into design action, we derive five agent design principles (behavioral alignment, contextual sensitivity, temporal appropriateness, motivational calibration, and agency preservation) that guide intervention depth, timing, intensity, and restraint. Together, the model and principles provide a foundation for designing agentic AI systems that act with contextual sensitivity and judgment in interactions.
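The Scene/Context/Behavior decomposition and the five design principles can be made concrete with a small sketch. The following Python is a hypothetical illustration, not code from the paper: all names (`Scene`, `Context`, `BehaviorFactors`, `should_intervene`) and the scoring rule are assumptions chosen to show how an agent might separate what is observable from what is meaningful to the user before deciding whether to act.

```python
from dataclasses import dataclass


@dataclass
class Scene:
    """Observable situation: raw contextual signals the agent can sense."""
    location: str
    activity: str


@dataclass
class Context:
    """User-constructed meaning inferred from the scene."""
    inferred_goal: str
    receptivity: float  # 0..1, how open the user is to intervention now


@dataclass
class BehaviorFactors:
    """Determinants shaping behavioral likelihood."""
    motivation: float     # 0..1, drive toward the inferred goal
    autonomy_need: float  # 0..1, preference for acting without help


def should_intervene(scene: Scene, ctx: Context, hbf: BehaviorFactors,
                     threshold: float = 0.5) -> bool:
    """Toy gate loosely echoing the five principles: contextual
    sensitivity and motivational calibration raise the score, while
    agency preservation (the autonomy term) restrains intervention."""
    score = ctx.receptivity * hbf.motivation * (1.0 - hbf.autonomy_need)
    return score > threshold
```

Under this sketch, the same scene yields different decisions depending on the behavioral factors: a highly receptive, motivated user with low autonomy need crosses the threshold, while a user who strongly prefers to act alone does not, even in an identical scene.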
Related papers
- Agentic Reasoning for Large Language Models [122.81018455095999]
Reasoning is a fundamental cognitive process underlying inference, problem-solving, and decision-making. Large language models (LLMs) demonstrate strong reasoning capabilities in closed-world settings, but struggle in open-ended and dynamic environments. Agentic reasoning marks a paradigm shift by reframing LLMs as autonomous agents that plan, act, and learn through continual interaction.
arXiv Detail & Related papers (2026-01-18T18:58:23Z) - OntoPret: An Ontology for the Interpretation of Human Behavior [0.024466725954625887]
A research gap exists between techno-centric robotic frameworks, which often lack nuanced models of human behavior, and collaborative interpretation. This paper addresses this gap by presenting OntoPret, an ontology for the interpretation of human behavior.
arXiv Detail & Related papers (2025-10-27T17:28:51Z) - DeceptionBench: A Comprehensive Benchmark for AI Deception Behaviors in Real-world Scenarios [57.327907850766785]
Characterization of deception across realistic real-world scenarios remains underexplored. We establish DeceptionBench, the first benchmark that systematically evaluates how deceptive tendencies manifest across different domains. On the intrinsic dimension, we explore whether models exhibit self-interested egoistic tendencies or sycophantic behaviors that prioritize user appeasement. We incorporate sustained multi-turn interaction loops to construct a more realistic simulation of real-world feedback dynamics.
arXiv Detail & Related papers (2025-10-17T10:14:26Z) - Understanding and evaluating computer vision models through the lens of counterfactuals [2.2819712364325047]
This thesis develops frameworks that use counterfactuals to explain, audit, and mitigate bias in vision classifiers and generative models. By systematically altering semantically meaningful attributes while holding others fixed, these methods uncover spurious correlations. These contributions show counterfactuals as a unifying lens for interpretability, fairness, and causality in both discriminative and generative models.
arXiv Detail & Related papers (2025-08-28T15:11:49Z) - The Agent Behavior: Model, Governance and Challenges in the AI Digital Age [13.689486430780518]
Advancements in AI have led to agents in networked environments increasingly mirroring human behavior. This paper proposes the "Network Behavior Lifecycle" model, which divides network behavior into 6 stages and systematically analyzes the behavioral differences between humans and agents at each stage. The paper further introduces the "Agent for Agent (A4A)" paradigm and the "Human-Agent Behavioral Disparity (HABD)" model, which examine the fundamental distinctions between human and agent behaviors across 5 dimensions.
arXiv Detail & Related papers (2025-08-20T04:24:55Z) - Intention-Guided Cognitive Reasoning for Egocentric Long-Term Action Anticipation [52.6091162517921]
INSIGHT is a two-stage framework for egocentric action anticipation. In the first stage, INSIGHT focuses on extracting semantically rich features from hand-object interaction regions. In the second stage, it introduces a reinforcement learning-based module that simulates explicit cognitive reasoning.
arXiv Detail & Related papers (2025-08-03T12:52:27Z) - AI Agent Behavioral Science [29.262537008412412]
AI Agent Behavioral Science focuses on the systematic observation of behavior, design of interventions to test hypotheses, and theory-guided interpretation of how AI agents act, adapt, and interact over time. We systematize a growing body of research across individual agent, multi-agent, and human-agent interaction settings, and demonstrate how this perspective informs responsible AI by treating fairness, safety, interpretability, accountability, and privacy as behavioral properties.
arXiv Detail & Related papers (2025-06-04T08:12:32Z) - Teleology-Driven Affective Computing: A Causal Framework for Sustained Well-Being [0.1636303041090359]
We propose a teleology-driven affective computing framework that unifies major emotion theories. We advocate for creating a "dataverse" of personal affective events. We introduce a meta-reinforcement learning paradigm to train agents in simulated environments.
arXiv Detail & Related papers (2025-02-24T14:07:53Z) - DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z) - CausalCity: Complex Simulations with Agency for Causal Discovery and Reasoning [68.74447489372037]
We present a high-fidelity simulation environment that is designed for developing algorithms for causal discovery and counterfactual reasoning.
A core component of our work is to introduce agency, such that it is simple to define and create complex scenarios.
We perform experiments with three state-of-the-art methods to create baselines and highlight the affordances of this environment.
arXiv Detail & Related papers (2021-06-25T00:21:41Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.