Agent Teaming Situation Awareness (ATSA): A Situation Awareness
Framework for Human-AI Teaming
- URL: http://arxiv.org/abs/2308.16785v2
- Date: Mon, 4 Sep 2023 12:23:36 GMT
- Title: Agent Teaming Situation Awareness (ATSA): A Situation Awareness
Framework for Human-AI Teaming
- Authors: Qi Gao, Wei Xu, Mowei Shen, Zaifeng Gao
- Abstract summary: We provide a review of leading SA theoretical models and a new framework for SA in the HAT context.
The Agent Teaming Situation Awareness (ATSA) framework unifies human and AI behavior and involves bidirectional, dynamic interaction.
- Score: 10.712812672157611
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid advancements in artificial intelligence (AI) have led to a growing
trend of human-AI teaming (HAT) in various fields. As machines continue to
evolve from mere automation to a state of autonomy, they are increasingly
exhibiting unexpected behaviors and human-like cognitive/intelligent
capabilities, including situation awareness (SA). This shift has the potential
to enhance the performance of mixed human-AI teams over all-human teams,
underscoring the need for a better understanding of the dynamic SA interactions
between humans and machines. To this end, we provide a review of leading SA
theoretical models and a new framework for SA in the HAT context based on the
key features and processes of HAT. The Agent Teaming Situation Awareness (ATSA)
framework unifies human and AI behavior and involves bidirectional, dynamic
interaction. The framework builds on individual and team SA models and
elaborates on the cognitive mechanisms for modeling HAT. Similar perceptual
cycles are adopted for the individual (both human and AI) and the whole team,
each tailored to the unique requirements of the HAT context. ATSA emphasizes
cohesive and effective HAT through its structures and components, including
teaming understanding, teaming control, and the world, as well as an adhesive
transactive part. We further propose several future research
directions to expand on the distinctive contributions of ATSA and address the
specific and pressing next steps.
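The framework's components named in the abstract can be sketched as a simple data model. This is purely illustrative: the class and attribute names below are assumptions for exposition, not structures defined by the paper.

```python
from dataclasses import dataclass, field


@dataclass
class PerceptualCycle:
    """One perception-understanding-action loop, as the abstract describes
    similar cycles adopted for individuals (human or AI) and the team."""
    perception: list = field(default_factory=list)     # cues observed from the world
    understanding: dict = field(default_factory=dict)  # current situation model
    actions: list = field(default_factory=list)        # control outputs to the world


@dataclass
class ATSASketch:
    """Illustrative container for the structures named in the abstract:
    teaming understanding, teaming control, the world, and an adhesive
    transactive part linking individual and team cycles."""
    world_state: dict
    teaming_understanding: PerceptualCycle
    teaming_control: PerceptualCycle
    member_cycles: list  # one PerceptualCycle per human or AI team member

    def transact(self):
        """Adhesive transactive part (assumed behavior): merge each member's
        understanding into the team's, then propagate the shared view back,
        modeling the bidirectional, dynamic interaction the abstract notes."""
        for cycle in self.member_cycles:
            self.teaming_understanding.understanding.update(cycle.understanding)
        for cycle in self.member_cycles:
            cycle.understanding.update(self.teaming_understanding.understanding)
```

How the transaction rule resolves conflicts, weights members, or handles timing is exactly what the full paper elaborates; this sketch only fixes the vocabulary of parts.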
Related papers
- Visioning Human-Agentic AI Teaming: Continuity, Tension, and Future Research [2.5609468301546485]
Agentic systems capable of open-ended action trajectories introduce structural uncertainty into human-AI teaming. Team Situation Awareness (Team SA) theory presumes that shared awareness, once achieved, will support coordinated action through iterative updating. Our argument unfolds in two stages: first, we extend Team SA to reconceptualize both human and AI awareness under open-ended agency. Second, we interrogate whether the dynamic processes traditionally assumed to stabilize teaming in relational interaction, cognitive learning, and coordination and control continue to function under adaptive autonomy.
arXiv Detail & Related papers (2026-03-05T02:40:42Z) - The Rise of AI Agent Communities: Large-Scale Analysis of Discourse and Interaction on Moltbook [62.2627874717318]
Moltbook is a Reddit-like social platform where AI agents create posts and interact with other agents through comments and replies. Using a public API snapshot collected about five days after launch, we address three research questions: what AI agents discuss, how they post, and how they interact. We show that agents' writing is predominantly neutral, with positivity appearing in community engagement and assistance-oriented content.
arXiv Detail & Related papers (2026-02-13T05:28:31Z) - AI as Teammate or Tool? A Review of Human-AI Interaction in Decision Support [0.514825619161626]
Current AI systems remain largely passive due to an overreliance on explainability-centric designs. Transitioning AI to an active teammate requires adaptive, context-aware interactions.
arXiv Detail & Related papers (2026-01-26T19:18:50Z) - AI Agent Behavioral Science [29.262537008412412]
AI Agent Behavioral Science focuses on the systematic observation of behavior, design of interventions to test hypotheses, and theory-guided interpretation of how AI agents act, adapt, and interact over time. We systematize a growing body of research across individual agent, multi-agent, and human-agent interaction settings, and demonstrate how this perspective informs responsible AI by treating fairness, safety, interpretability, accountability, and privacy as behavioral properties.
arXiv Detail & Related papers (2025-06-04T08:12:32Z) - Human-Centered Human-AI Collaboration (HCHAC) [9.14056952246194]
Human-AI Collaboration (HAC) represents a novel type of human-machine relationship facilitated by AI technologies. Human-centered AI (HCAI) emphasizes that humans play critical leadership roles in the collaboration. This chapter delves into the essence of HAC from the human-centered perspective, outlining its core concepts and distinguishing features.
arXiv Detail & Related papers (2025-05-28T15:27:52Z) - Unraveling Human-AI Teaming: A Review and Outlook [2.3396455015352258]
Artificial Intelligence (AI) is advancing at an unprecedented pace, with clear potential to enhance decision-making and productivity.
Yet, the collaborative decision-making process between humans and AI remains underdeveloped, often falling short of its transformative possibilities.
This paper explores the evolution of AI agents from passive tools to active collaborators in human-AI teams, emphasizing their ability to learn, adapt, and operate autonomously in complex environments.
arXiv Detail & Related papers (2025-04-08T07:37:25Z) - A Human Digital Twin Architecture for Knowledge-based Interactions and Context-Aware Conversations [0.9580312063277943]
Recent developments in Artificial Intelligence (AI) and Machine Learning (ML) are creating new opportunities for Human-Autonomy Teaming (HAT).
We present a real-time Human Digital Twin (HDT) architecture that integrates Large Language Models (LLMs) for knowledge reporting, answering, and recommendation, embodied in a visual interface.
The HDT acts as a visually and behaviorally realistic team member, integrated throughout the mission lifecycle, from training to deployment to after-action review.
arXiv Detail & Related papers (2025-04-04T03:56:26Z) - Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals.
The responsible use of AI increasingly shows the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z) - Artificial Theory of Mind and Self-Guided Social Organisation [1.8434042562191815]
One of the challenges artificial intelligence (AI) faces is how a collection of agents coordinate their behaviour to achieve goals that are not reachable by any single agent.
We make the case for collective intelligence in a general setting, drawing on recent work from single neuron complexity in neural networks.
We show how our social structures are influenced by our neuro-physiology, our psychology, and our language.
arXiv Detail & Related papers (2024-11-14T04:06:26Z) - CREW: Facilitating Human-AI Teaming Research [3.7324091969140776]
We introduce CREW, a platform to facilitate Human-AI teaming research and engage collaborations from multiple scientific disciplines.
It includes pre-built tasks for cognitive studies and Human-AI teaming with expandable potentials from our modular design.
CREW benchmarks real-time human-guided reinforcement learning agents using state-of-the-art algorithms and well-tuned baselines.
arXiv Detail & Related papers (2024-07-31T21:43:55Z) - Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
This article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z) - AI's Social Forcefield: Reshaping Distributed Cognition in Human-AI Teams [6.386909552513031]
We show that AI actively reshapes the social and cognitive fabric of collaboration. We show that AI participation reorganizes the distributed cognitive architecture of teams. We argue for rethinking AI in teams as a socially influential actor.
arXiv Detail & Related papers (2024-07-03T13:46:00Z) - Explainable Human-AI Interaction: A Planning Perspective [32.477369282996385]
AI systems need to be explainable to the humans in the loop.
We will discuss how the AI agent can use mental models to either conform to human expectations, or change those expectations through explanatory communication.
While the main focus of the book is on cooperative scenarios, we will point out how the same mental models can be used for obfuscation and deception.
arXiv Detail & Related papers (2024-05-19T22:22:21Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - Applying HCAI in developing effective human-AI teaming: A perspective
from human-AI joint cognitive systems [10.746728034149989]
Research and application have used human-AI teaming (HAT) as a new paradigm to develop AI systems.
We propose and elaborate on a conceptual framework of human-AI joint cognitive systems (HAIJCS) to represent and implement HAT.
arXiv Detail & Related papers (2023-07-08T06:26:38Z) - A Mental-Model Centric Landscape of Human-AI Symbiosis [31.14516396625931]
We introduce a significantly generalized version of the human-aware AI interaction scheme, called generalized human-aware interaction (GHAI).
We will see how this new framework allows us to capture the various works done in the space of human-AI interaction and identify the fundamental behavioral patterns supported by these works.
arXiv Detail & Related papers (2022-02-18T22:08:08Z) - On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human-Centered Artificial Intelligence.
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z) - On the Philosophical, Cognitive and Mathematical Foundations of
Symbiotic Autonomous Systems (SAS) [87.3520234553785]
Symbiotic Autonomous Systems (SAS) are advanced intelligent and cognitive systems exhibiting autonomous collective intelligence.
This work presents a theoretical framework of SAS underpinned by the latest advances in intelligence, cognition, computer, and system sciences.
arXiv Detail & Related papers (2021-02-11T05:44:25Z) - Watch-And-Help: A Challenge for Social Perception and Human-AI
Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning and learning based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.