GResilience: Trading Off Between the Greenness and the Resilience of
Collaborative AI Systems
- URL: http://arxiv.org/abs/2311.04569v1
- Date: Wed, 8 Nov 2023 10:01:39 GMT
- Title: GResilience: Trading Off Between the Greenness and the Resilience of
Collaborative AI Systems
- Authors: Diaeddin Rimawi, Antonio Liotta, Marco Todescato, Barbara Russo
- Abstract summary: We propose an approach to automatically evaluate CAIS recovery actions for their ability to trade-off between resilience and greenness.
Our approach aims to attack the problem from two perspectives: as a one-agent decision problem through optimization, and as a two-agent decision problem through game theory.
- Score: 1.869472599236422
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: A Collaborative Artificial Intelligence System (CAIS) works with humans in a
shared environment to achieve a common goal. To recover from a disruptive event
that degrades its performance, and thereby ensure its resilience, a CAIS may
need to perform a set of recovery actions, carried out by the system, by the
humans, or by both collaboratively. As in any other system, recovery actions may
have adverse energy effects because of the additional energy they require.
Therefore, it is
of paramount importance to understand which of the above actions can better
trade-off between resilience and greenness. In this in-progress work, we
propose an approach to automatically evaluate CAIS recovery actions for their
ability to trade-off between the resilience and greenness of the system. We
have also designed an experiment protocol and its application to a real CAIS
demonstrator. Our approach aims to attack the problem from two perspectives: as
a one-agent decision problem through optimization, which takes the decision
based on the score of resilience and greenness, and as a two-agent decision
problem through game theory, which takes the decision based on the payoff
computed for resilience and greenness as two players of a cooperative game.
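The two perspectives above can be sketched in a few lines. Everything in this snippet is illustrative: the action names, scores, weights, and disagreement points are hypothetical assumptions, and the Nash bargaining product is used here as one common way to formalize a two-player cooperative trade-off; the paper does not fix these specifics.

```python
# Hypothetical sketch of the two decision perspectives.
# Action names, scores, weights, and disagreement points are illustrative.

actions = {
    "system":        {"resilience": 0.90, "greenness": 0.40},
    "human":         {"resilience": 0.60, "greenness": 0.90},
    "collaborative": {"resilience": 0.85, "greenness": 0.70},
}

def one_agent_choice(actions, w_res=0.5, w_green=0.5):
    """One-agent view: pick the action maximizing a weighted score
    of resilience and greenness."""
    return max(actions, key=lambda a: w_res * actions[a]["resilience"]
                                      + w_green * actions[a]["greenness"])

def two_agent_choice(actions, d_res=0.0, d_green=0.0):
    """Two-agent view: resilience and greenness as the two players of a
    cooperative game; pick the action maximizing the Nash bargaining
    product of the payoffs above the players' disagreement points."""
    return max(actions, key=lambda a: (actions[a]["resilience"] - d_res)
                                      * (actions[a]["greenness"] - d_green))

print(one_agent_choice(actions))   # -> collaborative (0.775 vs 0.75, 0.65)
print(two_agent_choice(actions))   # -> collaborative (0.595 vs 0.54, 0.36)
```

With equal weights both views agree here, but they can diverge: the weighted sum trades the two objectives linearly, while the bargaining product penalizes actions that leave either player near its disagreement point.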
Related papers
- Cooperative Resilience in Artificial Intelligence Multiagent Systems [2.0608564715600273]
This paper proposes a clear definition of 'cooperative resilience' and a methodology for its quantitative measurement.
The results highlight the crucial role of resilience metrics in analyzing how the collective system prepares for, resists, recovers from, sustains well-being, and transforms in the face of disruptions.
arXiv Detail & Related papers (2024-09-20T03:28:48Z) - Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human make the decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z) - Green Resilience of Cyber-Physical Systems [0.0]
A Cyber-Physical System (CPS) joins hardware and software components to perform real-time services.
A recovery technique is needed to achieve resilience in such systems.
This proposal suggests a game-theoretic solution to achieve resilience and greenness in CPS.
arXiv Detail & Related papers (2023-11-09T08:29:55Z) - CAIS-DMA: A Decision-Making Assistant for Collaborative AI Systems [1.4325175162807644]
Collaborative Artificial Intelligence System (CAIS) is a cyber-physical system that learns actions in collaboration with humans to achieve a common goal.
When an event degrades the performance of CAIS (i.e., a disruptive event), this decision-making process may be hampered or even stopped.
This paper introduces a new methodology to automatically support the decision-making process in CAIS when the system experiences performance degradation after a disruptive event.
arXiv Detail & Related papers (2023-11-08T09:49:46Z) - Parametrically Retargetable Decision-Makers Tend To Seek Power [91.93765604105025]
In fully observable environments, most reward functions have an optimal policy which seeks power by keeping options open and staying alive.
We consider a range of models of AI decision-making, from optimal, to random, to choices informed by learning and interacting with an environment.
We show that a range of qualitatively dissimilar decision-making procedures incentivize agents to seek power.
arXiv Detail & Related papers (2022-06-27T17:39:23Z) - Safe adaptation in multiagent competition [48.02377041620857]
In multiagent competitive scenarios, ego-agents may have to adapt to new opponents with previously unseen behaviors.
As the ego-agent updates its own behavior to exploit the opponent, its own behavior could become more exploitable.
We develop a safe adaptation approach in which the ego-agent is trained against a regularized opponent model.
arXiv Detail & Related papers (2022-03-14T23:53:59Z) - Beyond Robustness: A Taxonomy of Approaches towards Resilient
Multi-Robot Systems [41.71459547415086]
We analyze how resilience is achieved in networks of agents and multi-robot systems.
We argue that resilience must become a central engineering design consideration.
arXiv Detail & Related papers (2021-09-25T11:25:02Z) - End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z) - Intrinsic Motivation for Encouraging Synergistic Behavior [55.10275467562764]
We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks.
Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own.
arXiv Detail & Related papers (2020-02-12T19:34:51Z) - Cooperative Inverse Reinforcement Learning [64.60722062217417]
We propose a formal definition of the value alignment problem as cooperative inverse reinforcement learning (CIRL).
A CIRL problem is a cooperative, partial-information game with two agents, human and robot; both are rewarded according to the human's reward function, but the robot does not initially know what this is.
In contrast to classical IRL, where the human is assumed to act optimally in isolation, optimal CIRL solutions produce behaviors such as active teaching, active learning, and communicative actions.
arXiv Detail & Related papers (2016-06-09T22:39:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.