On Quantified Observability Analysis in Multiagent Systems
- URL: http://arxiv.org/abs/2310.02614v1
- Date: Wed, 4 Oct 2023 06:58:26 GMT
- Title: On Quantified Observability Analysis in Multiagent Systems
- Authors: Chunyan Mu and Jun Pang
- Abstract summary: This paper presents a novel approach to quantitatively analysing observability properties in multiagent systems (MASs).
The concept of opacity is applied to formally express the characterisation of observability in MASs modelled as partially observable multiagent systems.
We propose a temporal logic oPATL to reason about agents' observability with quantitative goals, which capture the probability of information transparency of system behaviours to an observer.
- Score: 5.620321106679634
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In multiagent systems (MASs), agents' observations of system behaviours may
improve overall team performance, but may also leak sensitive information to an
observer. A quantified observability analysis can thus assist decision-making by
MAS operators seeking to balance performance effectiveness against information
exposure through observation in practice. This paper presents a novel approach to
quantitatively analysing observability properties in MASs. The concept of opacity
is applied to formally characterise observability in MASs modelled as partially
observable multiagent systems. We propose a temporal logic oPATL to reason about
agents' observability with quantitative goals, which capture the probability of
information transparency of system behaviours to an observer, and develop
verification techniques for quantitatively analysing such properties. We implement
the approach as an extension of the PRISM model checker, and illustrate its
applicability via several examples.
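To make the analysed quantity concrete, here is a minimal, self-contained sketch of computing how likely an observer's view is to reveal a secret behaviour in a tiny partially observable model. The transition system, observation function, and secret below are invented for illustration; this is not the paper's oPATL logic or its PRISM extension, which compute such quantities symbolically rather than by enumeration.

```python
"""
Toy sketch (NOT the paper's oPATL machinery or its PRISM extension) of
quantifying transparency/opacity of a secret behaviour to an observer
in a small probabilistic transition system.  All states, probabilities
and the observation function are made up for illustration.
"""
from collections import defaultdict

# Toy probabilistic transition system: state -> [(next_state, probability), ...]
TRANSITIONS = {
    "s0": [("s1", 0.5), ("s2", 0.5)],
    "s1": [("s3", 1.0)],
    "s2": [("s3", 0.7), ("s4", 0.3)],
    "s3": [("s3", 1.0)],
    "s4": [("s4", 1.0)],
}
SECRET = {"s4"}  # runs visiting s4 carry the secret
OBS = {"s0": "a", "s1": "b", "s2": "b", "s3": "c", "s4": "d"}  # observer's view of each state

def runs(state, depth):
    """Enumerate (run, probability) pairs up to a fixed horizon."""
    if depth == 0:
        yield [state], 1.0
        return
    for nxt, p in TRANSITIONS[state]:
        for tail, q in runs(nxt, depth - 1):
            yield [state] + tail, p * q

def transparency_probability(horizon=3):
    """Probability that the observation sequence alone reveals the secret,
    i.e. every run compatible with what the observer sees is a secret run."""
    by_obs = defaultdict(list)  # observation sequence -> [(is_secret, prob)]
    for run, p in runs("s0", horizon):
        obs = tuple(OBS[s] for s in run)
        by_obs[obs].append((any(s in SECRET for s in run), p))
    revealed = 0.0
    for entries in by_obs.values():
        if all(is_secret for is_secret, _ in entries):  # no non-secret cover run
            revealed += sum(p for _, p in entries)
    return revealed

if __name__ == "__main__":
    p = transparency_probability()
    print(f"P(observer learns the secret) = {p:.3f}; opacity = {1 - p:.3f}")
```

In the paper, such probabilities are expressed as oPATL formulas and verified by the PRISM extension rather than by the brute-force enumeration shown here.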
Related papers
- Observer-Aware Probabilistic Planning Under Partial Observability [3.8506666685467343]
Building on observer-aware Markov decision processes (OAMDPs), we propose a framework to handle partial observability problems.
This extension of OAMDPs to partial observability not only handles more realistic problems but also permits considering dynamic hidden variables of interest.
arXiv Detail & Related papers (2025-02-14T21:41:04Z) - AgentOps: Enabling Observability of LLM Agents [12.49728300301026]
Large language model (LLM) agents raise significant AI-safety concerns due to their autonomous and non-deterministic behavior.
We present a comprehensive taxonomy of AgentOps, identifying the artifacts and associated data that should be traced throughout the entire lifecycle of agents to achieve effective observability.
Our taxonomy serves as a reference template for developers to design and implement AgentOps infrastructure that supports monitoring, logging, and analytics.
arXiv Detail & Related papers (2024-11-08T02:31:03Z) - Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method, which combines concepts from Optimal Transport and Shapley Values, as Explanatory Performance Estimation; a rough illustration of the Shapley ingredient follows.
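As a purely illustrative sketch of the Shapley-value ingredient (hypothetical data and model, and omitting the Optimal Transport component the method combines it with), the snippet below attributes an error-rate increase under feature shift to individual shifted features:

```python
"""
Toy sketch (hypothetical data and model, not the paper's Explanatory
Performance Estimation method) of Shapley-value attribution of a
performance change to individual shifted features.
"""
from itertools import combinations
from math import factorial
import numpy as np

rng = np.random.default_rng(0)
n, FEATURES = 5000, ["x0", "x1", "x2"]

# Source data the model was validated on, and shifted target data.
X_src = rng.normal(0.0, 1.0, size=(n, 3))
X_tgt = X_src.copy()
X_tgt[:, 0] += 1.5          # mean shift in x0
X_tgt[:, 2] *= 2.0          # variance shift in x2
y = (X_src @ np.array([1.0, -2.0, 0.5])) > 0  # labels tied to source features

def model(X):
    """A fixed 'trained' classifier standing in for any black-box model."""
    return (X @ np.array([1.0, -2.0, 0.5])) > 0

def loss(subset):
    """Error rate when the features in `subset` take their shifted (target)
    values and the remaining features keep their source values."""
    X = X_src.copy()
    for j in subset:
        X[:, j] = X_tgt[:, j]
    return np.mean(model(X) != y)

def shapley(num_features):
    """Exact Shapley value of each feature for the performance degradation."""
    phi = np.zeros(num_features)
    players = range(num_features)
    for j in players:
        for size in range(num_features):
            for S in combinations([k for k in players if k != j], size):
                w = factorial(len(S)) * factorial(num_features - len(S) - 1) / factorial(num_features)
                phi[j] += w * (loss(S + (j,)) - loss(S))
    return phi

phi = shapley(3)
for name, v in zip(FEATURES, phi):
    print(f"{name}: contributes {v:+.3f} to the error-rate increase")
print("total shift effect:", loss((0, 1, 2)) - loss(()))  # equals sum of phi
```

By the Shapley efficiency property, the per-feature contributions sum exactly to the total error-rate change between source and fully shifted data, which is what makes this decomposition useful for monitoring.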
arXiv Detail & Related papers (2024-08-24T18:28:19Z) - AgentBoard: An Analytical Evaluation Board of Multi-turn LLM Agents [74.16170899755281]
We introduce AgentBoard, a comprehensive benchmark and accompanying open-source evaluation framework tailored to the analytical evaluation of LLM agents.
AgentBoard offers a fine-grained progress rate metric that captures incremental advancements as well as a comprehensive evaluation toolkit.
This not only sheds light on the capabilities and limitations of LLM agents but also propels the interpretability of their performance to the forefront.
arXiv Detail & Related papers (2024-01-24T01:51:00Z) - Understanding Self-attention Mechanism via Dynamical System Perspective [58.024376086269015]
Self-attention mechanism (SAM) is widely used in various fields of artificial intelligence.
We show that the intrinsic stiffness phenomenon (SP) arising in the high-precision solution of ordinary differential equations (ODEs) also widely exists in high-performance neural networks (NNs).
We show that SAM is also a stiffness-aware step-size adaptor that can enhance the model's representational ability to measure intrinsic SP.
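For readers unfamiliar with the stiffness phenomenon this summary invokes, the toy script below (unrelated to the paper's neural-network analysis) reproduces it on a classic stiff linear ODE, showing why fixed explicit steps fail and why stiffness-aware step-size adaptation matters:

```python
"""
Minimal illustration of the stiffness phenomenon (SP): for the stiff ODE
y'(t) = -50 * (y - cos(t)), fixed-step explicit Euler blows up unless the
step size is below 2/50 = 0.04, which motivates stiffness-aware
step-size adaptation.
"""
import math

LAMBDA = 50.0  # the larger this is, the stiffer the problem

def f(t, y):
    return -LAMBDA * (y - math.cos(t))

def forward_euler(h, t_end=5.0, y0=0.0):
    """Fixed-step explicit Euler; unstable here when h > 2 / LAMBDA."""
    t, y = 0.0, y0
    while t < t_end:
        y += h * f(t, y)
        t += h
    return y

for h in (0.1, 0.05, 0.02, 0.001):
    print(f"h = {h:>6}: y(5) ~ {forward_euler(h): .4g}")
# The two large step sizes explode to astronomically large values, while the
# small ones settle near the slowly varying solution y(t) ~ cos(t): the
# hallmark of stiffness that adaptive (stiffness-aware) step sizes avoid.
```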
arXiv Detail & Related papers (2023-08-19T08:17:41Z) - Communicating Uncertainty in Machine Learning Explanations: A
Visualization Analytics Approach for Predictive Process Monitoring [0.0]
This study explores how model uncertainty can be effectively communicated in global and local post-hoc explanation approaches.
By combining these two research directions, decision-makers can not only justify the plausibility of explanation-driven actionable insights but also validate their reliability.
arXiv Detail & Related papers (2023-04-12T09:44:32Z) - Explainability in Process Outcome Prediction: Guidelines to Obtain
Interpretable and Faithful Models [77.34726150561087]
We define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction.
This paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z) - Multilingual Multi-Aspect Explainability Analyses on Machine Reading Comprehension Models [76.48370548802464]
This paper conducts a series of analytical experiments to examine the relation between multi-head self-attention and final MRC system performance.
We discover that passage-to-question and passage understanding attentions are the most important ones in the question answering process.
Through comprehensive visualizations and case studies, we also observe several general findings on the attention maps, which can be helpful to understand how these models solve the questions.
arXiv Detail & Related papers (2021-08-26T04:23:57Z) - Uncertainty as a Form of Transparency: Measuring, Communicating, and
Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)