Modeling Resilience of Collaborative AI Systems
- URL: http://arxiv.org/abs/2401.12632v1
- Date: Tue, 23 Jan 2024 10:28:33 GMT
- Title: Modeling Resilience of Collaborative AI Systems
- Authors: Diaeddin Rimawi, Antonio Liotta, Marco Todescato, Barbara Russo
- Abstract summary: A Collaborative Artificial Intelligence System (CAIS) performs actions in collaboration with humans to achieve a common goal.
CAISs can use a trained AI model to control human-system interaction, or they can use human interaction to dynamically learn from humans in an online fashion.
In online learning with human feedback, the AI model evolves by monitoring human interaction through the system sensors in the learning state.
Any disruptive event affecting these sensors may affect the AI model's ability to make accurate decisions and degrade the CAIS performance.
- Score: 1.869472599236422
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: A Collaborative Artificial Intelligence System (CAIS) performs actions in
collaboration with the human to achieve a common goal. CAISs can use a trained
AI model to control human-system interaction, or they can use human interaction
to dynamically learn from humans in an online fashion. In online learning with
human feedback, the AI model evolves by monitoring human interaction through
the system sensors in the learning state, and actuates the autonomous
components of the CAIS based on the learning in the operational state.
Therefore, any disruptive event affecting these sensors may affect the AI
model's ability to make accurate decisions and degrade the CAIS performance.
Consequently, it is of paramount importance for CAIS managers to be able to
automatically track the system performance to understand the resilience of the
CAIS upon such disruptive events. In this paper, we provide a new framework to
model CAIS performance when the system experiences a disruptive event. With our
framework, we introduce a model of performance evolution of CAIS. The model is
equipped with a set of measures that aim to support CAIS managers in the
decision process to achieve the required resilience of the system. We tested
our framework on a real-world case study of a robot collaborating online with
the human, when the system is experiencing a disruptive event. The case study
shows that our framework can be adopted in CAIS and integrated into the online
execution of the CAIS activities.
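The abstract does not spell out the framework's performance measures, but resilience modeling of this kind typically tracks a performance signal around the disruptive event. A minimal, hypothetical sketch (all names and the specific measures are illustrative, not taken from the paper) might compute degradation depth, recovery time, and a resilience ratio from such a series:

```python
# Hypothetical sketch of resilience measures over a performance time series
# sampled around a disruptive event. The paper's actual measures are not
# given in the abstract; these are common illustrative choices.

def resilience_measures(performance, t_disrupt, baseline=None):
    """performance: floats sampled at uniform intervals;
    t_disrupt: index at which the disruptive event occurs."""
    if baseline is None:
        baseline = performance[t_disrupt]  # pre-disruption performance level
    post = performance[t_disrupt:]
    trough = min(post)
    depth = baseline - trough  # worst-case degradation after the event
    # recovery time: first step at or after the trough back at baseline
    trough_i = post.index(trough)
    recovery = next((i for i, p in enumerate(post[trough_i:], start=trough_i)
                     if p >= baseline), None)
    # resilience ratio: observed area under the curve vs. the no-drop ideal
    ratio = sum(post) / (baseline * len(post)) if baseline else 0.0
    return {"degradation_depth": depth,
            "recovery_steps": recovery,
            "resilience_ratio": ratio}

# Example: performance drops after step 1 and recovers by step 6.
series = [1.0, 1.0, 0.6, 0.4, 0.7, 0.9, 1.0, 1.0]
print(resilience_measures(series, t_disrupt=1))
```

Such measures could be evaluated online as new sensor readings arrive, which matches the abstract's claim that the framework integrates into the online execution of CAIS activities.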
Related papers
- Characterizing and modeling harms from interactions with design patterns in AI interfaces [0.19116784879310028]
We argue that design features of interfaces with adaptive AI systems can have cascading impacts, driven by feedback loops.
We propose Design-Enhanced Control of AI systems (DECAI) to structure and facilitate impact assessments of AI interface designs.
arXiv Detail & Related papers (2024-04-17T13:30:45Z) - SELFI: Autonomous Self-Improvement with Reinforcement Learning for Social Navigation [54.97931304488993]
Self-improving robots that interact and improve with experience are key to the real-world deployment of robotic systems.
We propose an online learning method, SELFI, that leverages online robot experience to rapidly fine-tune pre-trained control policies.
We report improvements in terms of collision avoidance, as well as more socially compliant behavior, measured by a human user study.
arXiv Detail & Related papers (2024-03-01T21:27:03Z) - Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z) - CAIS-DMA: A Decision-Making Assistant for Collaborative AI Systems [1.4325175162807644]
A Collaborative Artificial Intelligence System (CAIS) is a cyber-physical system that learns actions in collaboration with humans to achieve a common goal.
When an event degrades the performance of CAIS (i.e., a disruptive event), this decision-making process may be hampered or even stopped.
This paper introduces a new methodology to automatically support the decision-making process in CAIS when the system experiences performance degradation after a disruptive event.
arXiv Detail & Related papers (2023-11-08T09:49:46Z) - Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z) - Toward Self-Learning End-to-End Dialog Systems [107.65369860922392]
We propose SL-Agent, a self-learning framework for building end-to-end dialog systems in changing environments.
SL-Agent consists of a dialog model and a pre-trained reward model to judge the quality of a system response.
Experiments show that SL-Agent can effectively adapt to new tasks using limited human corrections.
arXiv Detail & Related papers (2022-01-18T09:56:35Z) - An Augmented Reality Platform for Introducing Reinforcement Learning to K-12 Students with Robots [10.835598738100359]
We propose an Augmented Reality (AR) system that reveals the hidden state of the learning to the human users.
This paper describes our system's design and implementation and concludes with a discussion on two directions for future work.
arXiv Detail & Related papers (2021-10-10T03:51:39Z) - Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and demonstrated to be suitable tools to address such a task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z) - Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions [46.87576410532481]
We show that, despite their current huge success, deep learning based AI systems can be easily fooled by subtle adversarial noise.
Based on a case study of skeleton-based human interactions, we propose a novel adversarial attack on interactions.
Our study highlights potential risks in the interaction loop with AI and humans, which need to be carefully addressed when deploying AI systems in safety-critical applications.
arXiv Detail & Related papers (2021-01-17T16:23:20Z) - Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.