CAIS-DMA: A Decision-Making Assistant for Collaborative AI Systems
- URL: http://arxiv.org/abs/2311.04562v1
- Date: Wed, 8 Nov 2023 09:49:46 GMT
- Title: CAIS-DMA: A Decision-Making Assistant for Collaborative AI Systems
- Authors: Diaeddin Rimawi, Antonio Lotta, Marco Todescato, Barbara Russo
- Abstract summary: A Collaborative Artificial Intelligence System (CAIS) is a cyber-physical system that learns actions in collaboration with humans to achieve a common goal.
When an event degrades the performance of CAIS (i.e., a disruptive event), this decision-making process may be hampered or even stopped.
This paper introduces a new methodology to automatically support the decision-making process in CAIS when the system experiences performance degradation after a disruptive event.
- Score: 1.4325175162807644
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A Collaborative Artificial Intelligence System (CAIS) is a cyber-physical
system that learns actions in collaboration with humans in a shared environment
to achieve a common goal. In particular, a CAIS is equipped with an AI model to
support the decision-making process of this collaboration. When an event
degrades the performance of CAIS (i.e., a disruptive event), this
decision-making process may be hampered or even stopped. Thus, it is of
paramount importance to monitor the learning of the AI model, and eventually
support its decision-making process in such circumstances. This paper
introduces a new methodology to automatically support the decision-making
process in CAIS when the system experiences performance degradation after a
disruptive event. To this aim, we develop a framework that consists of three
components: one manages or simulates CAIS's environment and disruptive events,
the second automates the decision-making process, and the third provides a
visual analysis of CAIS behavior. Overall, our framework automatically monitors
the decision-making process, intervenes whenever a performance degradation
occurs, and recommends the next action. We demonstrate our framework by
implementing an example with a real-world collaborative robot, where the
framework recommends the next action that balances between minimizing the
recovery time (i.e., resilience), and minimizing the energy adverse effects
(i.e., greenness).
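The recommendation step described above can be illustrated with a simple weighted scoring rule over candidate recovery actions. This is a hypothetical sketch, not the paper's actual method: the action names, attribute values, and min-max weighting scheme are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class RecoveryAction:
    name: str
    recovery_time_s: float   # lower = more resilient (faster recovery)
    energy_joules: float     # lower = greener (less energy consumed)

def recommend(actions, w_resilience=0.5, w_green=0.5):
    """Return the action with the best resilience/greenness trade-off.

    Both attributes are min-max normalized so they are comparable;
    a lower combined score is better.
    """
    times = [a.recovery_time_s for a in actions]
    energies = [a.energy_joules for a in actions]

    def norm(value, lo, hi):
        return 0.0 if hi == lo else (value - lo) / (hi - lo)

    def score(a):
        return (w_resilience * norm(a.recovery_time_s, min(times), max(times))
                + w_green * norm(a.energy_joules, min(energies), max(energies)))

    return min(actions, key=score)

# Hypothetical candidate actions after a disruptive event:
actions = [
    RecoveryAction("retry_fast", recovery_time_s=2.0, energy_joules=50.0),
    RecoveryAction("retry_slow", recovery_time_s=8.0, energy_joules=10.0),
    RecoveryAction("balanced",   recovery_time_s=4.0, energy_joules=20.0),
]
best = recommend(actions)
```

Shifting the weights toward `w_resilience` favors the fastest recovery; shifting toward `w_green` favors the lowest energy use.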
Related papers
- R-AIF: Solving Sparse-Reward Robotic Tasks from Pixels with Active Inference and World Models [50.19174067263255]
We introduce prior-preference learning techniques and self-revision schedules to help the agent excel in sparse-reward, continuous-action, goal-based robotic control POMDP environments.
We show that our agents offer improved performance over state-of-the-art models in terms of cumulative rewards, relative stability, and success rate.
arXiv Detail & Related papers (2024-09-21T18:32:44Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Synergising Human-like Responses and Machine Intelligence for Planning in Disaster Response [10.294618771570985]
We propose an attention-based cognitive architecture inspired by Dual Process Theory (DPT).
This framework integrates, in an online fashion, rapid (human-like) responses with the slow but optimized planning capabilities of machine intelligence.
arXiv Detail & Related papers (2024-04-15T15:47:08Z)
- The Foundations of Computational Management: A Systematic Approach to Task Automation for the Integration of Artificial Intelligence into Existing Workflows [55.2480439325792]
This article introduces Computational Management, a systematic approach to task automation.
The article offers three easy step-by-step procedures to begin the process of implementing AI within a workflow.
arXiv Detail & Related papers (2024-02-07T01:45:14Z)
- Modeling Resilience of Collaborative AI Systems [1.869472599236422]
A Collaborative Artificial Intelligence System (CAIS) performs actions in collaboration with humans to achieve a common goal.
CAISs can use a trained AI model to control human-system interaction, or they can use human interaction to dynamically learn from humans in an online fashion.
In online learning with human feedback, the AI model evolves by monitoring human interaction through the system sensors in the learning state.
Any disruptive event affecting these sensors may affect the AI model's ability to make accurate decisions and degrade the CAIS performance.
arXiv Detail & Related papers (2024-01-23T10:28:33Z)
- Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose Ardent, a method for a collaborative system that remains safe by having a human make the decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z)
- GResilience: Trading Off Between the Greenness and the Resilience of Collaborative AI Systems [1.869472599236422]
We propose an approach to automatically evaluate CAIS recovery actions for their ability to trade-off between resilience and greenness.
Our approach addresses the problem from two perspectives: as a one-agent decision problem through optimization, and as a two-agent decision problem through game theory.
arXiv Detail & Related papers (2023-11-08T10:01:39Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of the information needed to assess a component's portability to a system other than the one for which it was originally purposed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
- Human AI interaction loop training: New approach for interactive reinforcement learning [0.0]
Reinforcement Learning (RL) provides effective results in various machine-learning decision-making tasks, with an agent learning from a stand-alone reward function.
RL presents unique challenges with large environment-state and action spaces, as well as in the determination of rewards.
Imitation Learning (IL) offers a promising solution for those challenges using a teacher.
arXiv Detail & Related papers (2020-03-09T15:27:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.