Multi-AI Complex Systems in Humanitarian Response
- URL: http://arxiv.org/abs/2208.11282v2
- Date: Wed, 21 Sep 2022 21:20:37 GMT
- Title: Multi-AI Complex Systems in Humanitarian Response
- Authors: Joseph Aylett-Bullock, Miguel Luengo-Oroz
- Abstract summary: We describe how multi-AI systems can arise, even in relatively simple real-world humanitarian response scenarios, and lead to potentially emergent, erratic, and erroneous behavior.
This paper is designed to be a first exposition on this topic in the field of humanitarian response, raising awareness, exploring the possible landscape of this domain, and providing a starting point for future work within the wider community.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI is being increasingly used to aid response efforts to humanitarian
emergencies at multiple levels of decision-making. Such AI systems are
generally understood to be stand-alone tools for decision support, with ethical
assessments, guidelines and frameworks applied to them through this lens.
However, as the prevalence of AI increases in this domain, such systems will
begin to encounter each other through information flow networks created by
interacting decision-making entities, leading to multi-AI complex systems which
are often ill understood. In this paper we describe how these multi-AI systems
can arise, even in relatively simple real-world humanitarian response
scenarios, and lead to potentially emergent and erratic erroneous behavior. We
discuss how we can better work towards more trustworthy multi-AI systems by
exploring some of the associated challenges and opportunities, and how we can
design better mechanisms to understand and assess such systems. This paper is
designed to be a first exposition on this topic in the field of humanitarian
response, raising awareness, exploring the possible landscape of this domain,
and providing a starting point for future work within the wider community.
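The abstract's central mechanism can be made concrete: model each decision-making entity as a node in a directed graph, with an edge from A to B when the output of A's AI system feeds B's decisions. A cycle in that graph is one way a multi-AI complex system arises, since systems end up consuming each other's outputs. The sketch below is illustrative only (not from the paper); the scenario names and the graph representation are assumptions.

```python
# Minimal sketch: detect feedback loops in a hypothetical humanitarian
# information-flow network. An edge A -> B means the output of AI system A
# informs entity B. A cycle is a candidate site for emergent, hard-to-assess
# multi-AI behavior, since errors can propagate back to their source.

def find_cycle(flows):
    """Return one feedback loop (list of nodes in flow order), or None."""
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on stack / done
    color = {n: WHITE for n in flows}
    parent = {}

    def dfs(node):
        color[node] = GRAY
        for nxt in flows.get(node, []):
            if color.get(nxt, WHITE) == GRAY:   # back edge: loop found
                cycle, cur = [node], node
                while cur != nxt:               # walk parents back to nxt
                    cur = parent[cur]
                    cycle.append(cur)
                return cycle[::-1]              # forward flow order
            if color.get(nxt, WHITE) == WHITE:
                parent[nxt] = node
                found = dfs(nxt)
                if found:
                    return found
        color[node] = BLACK
        return None

    for n in list(flows):
        if color[n] == WHITE:
            found = dfs(n)
            if found:
                return found
    return None

# Hypothetical scenario: a damage-mapping model informs aid allocation,
# allocations shape field reports, and those reports retrain the mapper.
flows = {
    "damage_mapper": ["aid_allocator"],
    "aid_allocator": ["field_reports"],
    "field_reports": ["damage_mapper"],
}
print(find_cycle(flows))
# -> ['damage_mapper', 'aid_allocator', 'field_reports']
```

Even this three-node toy shows why such systems are "ill-understood": none of the individual tools was designed with the loop in mind, yet the loop exists at the system level.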
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- A Survey on Context-Aware Multi-Agent Systems: Techniques, Challenges and Future Directions [1.1458366773578277]
Autonomous agents are an emerging topic attracting growing research interest.
The challenge lies in enabling these agents to learn, reason, and navigate uncertainties in dynamic environments.
Context awareness emerges as a pivotal element in fortifying multi-agent systems.
arXiv Detail & Related papers (2024-02-03T00:27:22Z)
- Emergent Explainability: Adding a causal chain to neural network inference [0.0]
This position paper presents a theoretical framework for enhancing explainable artificial intelligence (xAI) through emergent communication (EmCom).
We explore the novel integration of EmCom into AI systems, offering a paradigm shift from conventional associative relationships between inputs and outputs to a more nuanced, causal interpretation.
The paper discusses the theoretical underpinnings of this approach, its potential broad applications, and its alignment with the growing need for responsible and transparent AI systems.
arXiv Detail & Related papers (2024-01-29T02:28:39Z)
- Neurosymbolic Value-Inspired AI (Why, What, and How) [8.946847190099206]
We propose a neurosymbolic computational framework called Value-Inspired AI (VAI).
VAI aims to represent and integrate various dimensions of human values.
We offer insights into the current progress made in this direction and outline potential future directions for the field.
arXiv Detail & Related papers (2023-12-15T16:33:57Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- On the Importance of Domain-specific Explanations in AI-based Cybersecurity Systems (Technical Report) [7.316266670238795]
A lack of understanding of such decisions can be a major drawback in critical domains such as cybersecurity.
In this paper we make three contributions: (i) proposal and discussion of desiderata for the explanation of outputs generated by AI-based cybersecurity systems; (ii) a comparative analysis of approaches in the literature on Explainable Artificial Intelligence (XAI) under the lens of both our desiderata and further dimensions that are typically used for examining XAI approaches; and (iii) a general architecture that can serve as a roadmap for guiding research efforts towards the development of explainable AI-based cybersecurity systems.
arXiv Detail & Related papers (2021-08-02T22:55:13Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.