Role of collective information in networks of quantum operating agents
- URL: http://arxiv.org/abs/2201.11008v2
- Date: Mon, 25 Apr 2022 15:03:08 GMT
- Title: Role of collective information in networks of quantum operating agents
- Authors: V.I. Yukalov, E.P. Yukalova, and D. Sornette
- Abstract summary: A network of agents is considered whose decision processes are described by quantum decision theory.
As a result of the interplay between three contributions (utility, attractiveness, and available information), the process of choosing among several alternatives is multimodal.
The information field common to all agents tends to smooth out sharp variations in the temporal behaviour of the probabilities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A network of agents is considered whose decision processes are described by
the quantum decision theory previously advanced by the authors. Decision making
is done by evaluating the utility of alternatives, their attractiveness, and
the available information, whose combination forms the probability of choosing
a given alternative. As a result of the interplay between these three
contributions, the process of choosing among several alternatives is
multimodal. The agents interact by exchanging information, which can take two
forms: (i) information that an agent can directly receive from another agent
and (ii) information collectively created by the members of the society. The
information field common to all agents tends to smooth out sharp variations in
the temporal behaviour of the probabilities and can even remove them. For
agents with short-term memory, the probabilities often tend to their limiting
values through strong oscillations and, for a range of parameters, these
oscillations last forever, representing an everlasting hesitation of the
decision makers. Switching on the information field reduces the amplitude of the
oscillations and can even halt the everlasting oscillations, forcing the
probabilities to converge to fixed limits. The dynamic disjunction effect is
described.
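The qualitative behaviour described above can be illustrated with a toy iteration. This is a hypothetical sketch, not the authors' actual equations: `f` stands in for the utility contribution, `b` for the attraction feedback of a short-term-memory agent, and `eps` couples each agent to a collective information field modelled as a running average of past probabilities.

```python
import numpy as np

def simulate(eps, steps=200, p0=0.9):
    """Toy two-alternative choice dynamics (illustrative only).

    p_{t+1} = (1 - eps) * (f + b * (0.5 - p_t)) + eps * info_t

    f    : 'utility' contribution (fixed fraction) -- assumed value
    b    : 'attraction' feedback of a short-term-memory agent
    eps  : coupling to the collective information field
    info : running average of past probabilities, standing in for
           the information collectively created by the society
    """
    f, b = 0.4, 1.0
    p, info = p0, p0
    traj = [p]
    for t in range(steps):
        p = (1 - eps) * (f + b * (0.5 - p)) + eps * info
        p = min(max(p, 0.0), 1.0)              # keep p a valid probability
        info = (info * (t + 1) + p) / (t + 2)  # update the shared field
        traj.append(p)
    return np.array(traj)

no_field = simulate(eps=0.0)    # oscillations persist: period-2 hesitation
with_field = simulate(eps=0.3)  # information field damps them

# amplitude of late-time oscillations
print(np.ptp(no_field[-20:]))    # stays finite: everlasting oscillation
print(np.ptp(with_field[-20:]))  # near zero: probabilities converge
```

With `eps = 0` the map has slope -1 and hesitates between two values forever; any positive coupling to the shared field shrinks the effective slope below one in magnitude, so the oscillations die out, mirroring the smoothing role the abstract attributes to the information field.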
Related papers
- On Bits and Bandits: Quantifying the Regret-Information Trade-off [62.64904903955711]
In interactive decision-making tasks, information can be acquired by direct interactions, through receiving indirect feedback, and from external knowledgeable sources.
We show that information from external sources, measured in bits, can be traded off for regret, measured in reward.
We introduce the first Bayesian regret lower bounds that depend on the information an agent accumulates.
arXiv Detail & Related papers (2024-05-26T14:18:38Z)
- Causal Influence in Federated Edge Inference [34.487472866247586]
In this paper, we consider a setting where heterogeneous agents with connectivity are performing inference using unlabeled streaming data.
In order to overcome the uncertainty, agents cooperate with each other by exchanging their local inferences with and through a fusion center.
Various scenarios reflecting different agent participation patterns and fusion center policies are investigated.
arXiv Detail & Related papers (2024-05-02T13:06:50Z)
- ReST meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent [50.508669199496474]
We develop a ReAct-style LLM agent with the ability to reason and act upon external knowledge.
We refine the agent through a ReST-like method that iteratively trains on previous trajectories.
Starting from a prompted large model and after just two iterations of the algorithm, we can produce a fine-tuned small model.
arXiv Detail & Related papers (2023-12-15T18:20:15Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- On the Complexity of Multi-Agent Decision Making: From Learning in Games to Partial Monitoring [105.13668993076801]
A central problem in the theory of multi-agent reinforcement learning (MARL) is to understand what structural conditions and algorithmic principles lead to sample-efficient learning guarantees.
We study this question in a general framework for interactive decision making with multiple agents.
We show that characterizing the statistical complexity for multi-agent decision making is equivalent to characterizing the statistical complexity of single-agent decision making.
arXiv Detail & Related papers (2023-05-01T06:46:22Z)
- Bounding probabilities of causation through the causal marginal problem [12.542533707005092]
Probabilities of causation play a fundamental role in decision making in law, health care and public policy.
In many clinical trials and public policy evaluation cases, there exist independent datasets that examine the effect of a different treatment each on the same outcome variable.
Here, we outline how to significantly tighten existing bounds on the probabilities of causation, by imposing counterfactual consistency between SCMs constructed from such independent datasets.
arXiv Detail & Related papers (2023-04-04T12:16:38Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Robust Event-Driven Interactions in Cooperative Multi-Agent Learning [0.0]
We present an approach to reduce the communication required between agents in a Multi-Agent learning system by exploiting the inherent robustness of the underlying Markov Decision Process.
We compute so-called robustness surrogate functions (off-line), that give agents a conservative indication of how far their state measurements can deviate before they need to update other agents in the system.
This results in fully distributed decision functions, enabling agents to decide when it is necessary to update others.
arXiv Detail & Related papers (2022-04-07T11:00:39Z)
- SMEMO: Social Memory for Trajectory Forecasting [34.542209630734234]
We present a neural network based on an end-to-end trainable working memory, which acts as an external storage.
We show that our method is capable of learning explainable cause-effect relationships between motions of different agents, obtaining state-of-the-art results on trajectory forecasting datasets.
arXiv Detail & Related papers (2022-03-23T14:40:20Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.