Expansion of situations theory for exploring shared awareness in human-intelligent autonomous systems
- URL: http://arxiv.org/abs/2406.04956v1
- Date: Fri, 7 Jun 2024 14:21:01 GMT
- Title: Expansion of situations theory for exploring shared awareness in human-intelligent autonomous systems
- Authors: Scott A. Humr, Mustafa Canan, Mustafa Demir
- Abstract summary: Intelligent autonomous systems' lack of shared situation awareness adversely influences team effectiveness in complex task environments.
A complementary approach to shared situation awareness, called situations theory, helps explain the relationship between system-of-systems shared situation awareness and effectiveness.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intelligent autonomous systems are part of a system of systems that interacts with other agents to accomplish tasks in complex environments. However, integrating intelligent autonomous systems into a system of systems adds layers of complexity because of their limited cognitive processes, specifically the shared situation awareness that allows a team to respond to novel tasks. Intelligent autonomous systems' lack of shared situation awareness adversely influences team effectiveness in complex task environments, such as military command-and-control. A complementary approach to shared situation awareness, called situations theory, is beneficial for understanding the relationship between system-of-systems shared situation awareness and effectiveness. The current study offers a conceptual discussion of situations theory to investigate how system-of-systems shared situation awareness develops when humans team with intelligent autonomous system agents. To ground the discussion, the reviewed studies extend situations theory to the system-of-systems context, yielding three major conjectures that can inform the design and development of future systems of systems.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - Toward an Ontology for Third Generation Systems Thinking [0.0]
Systems thinking is a way of making sense about the world in terms of multilevel, nested, interacting systems, their environment, and the boundaries between the systems and the environment.
arXiv Detail & Related papers (2023-10-17T18:46:11Z) - Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z) - Systems Challenges for Trustworthy Embodied Systems [0.0]
A new generation of increasingly autonomous and self-learning systems, which we call embodied systems, is about to be developed.
It is crucial to coordinate the behavior of embodied systems in a beneficial manner, ensure their compatibility with our human-centered social values, and design verifiably safe and reliable human-machine interaction.
We argue that traditional systems engineering is reaching a climacteric in the shift from embedded to embodied systems, and in assuring the trustworthiness of dynamic federations of situationally aware, intent-driven, explorative, ever-evolving, largely non-predictable, and increasingly autonomous embodied systems.
arXiv Detail & Related papers (2022-01-10T15:52:17Z) - Multiscale Governance [0.0]
Humandemics will propagate because of the pathways that connect the different systems.
The emerging fragility or robustness of the system will depend on how this complex network of systems is governed.
arXiv Detail & Related papers (2021-04-06T19:23:44Z) - On the Philosophical, Cognitive and Mathematical Foundations of Symbiotic Autonomous Systems (SAS) [87.3520234553785]
Symbiotic Autonomous Systems (SAS) are advanced intelligent and cognitive systems exhibiting autonomous collective intelligence.
This work presents a theoretical framework of SAS underpinned by the latest advances in intelligence, cognition, computer, and system sciences.
arXiv Detail & Related papers (2021-02-11T05:44:25Z) - Conceptualization and Framework of Hybrid Intelligence Systems [0.0]
This article provides a precise definition of hybrid intelligence systems and explains its relation with other similar concepts.
We argue that all AI systems are hybrid intelligence systems, so human factors need to be examined at every stage of such systems' lifecycle.
arXiv Detail & Related papers (2020-12-11T06:42:06Z) - A game-theoretic analysis of networked system control for common-pool resource management using multi-agent reinforcement learning [54.55119659523629]
Multi-agent reinforcement learning has recently shown great promise as an approach to networked system control.
Common-pool resources include arable land, fresh water, wetlands, wildlife, fish stock, forests and the atmosphere.
arXiv Detail & Related papers (2020-10-15T14:12:26Z) - Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z) - A Structured Approach to Trustworthy Autonomous/Cognitive Systems [4.56877715768796]
There is no generally accepted approach to ensuring trustworthiness.
This paper presents a framework to fill exactly this gap.
It proposes a reference lifecycle as a structured approach that is based on current safety standards.
arXiv Detail & Related papers (2020-02-19T14:36:27Z)