Wigner and friends, a map is not the territory! Contextuality in multi-agent paradoxes
- URL: http://arxiv.org/abs/2305.07792v4
- Date: Wed, 17 Apr 2024 20:13:58 GMT
- Title: Wigner and friends, a map is not the territory! Contextuality in multi-agent paradoxes
- Authors: Sidiney B. Montanhano
- Abstract summary: Multi-agent scenarios can show contradictory results when a non-classical formalism must deal with the knowledge between agents.
Even if knowledge is treated in a relational way with the concept of trust, contradictory results can still be found in multi-agent scenarios.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-agent scenarios, like Wigner's friend and Frauchiger-Renner scenarios, can show contradictory results when a non-classical formalism must deal with the knowledge between agents. Such paradoxes are described with multi-modal logic as violations of the structure in classical logic. Even if knowledge is treated in a relational way with the concept of trust, contradictory results can still be found in multi-agent scenarios. Contextuality deals with global inconsistencies in empirical models defined on measurement scenarios even when there is local consistency. In the present work, we take a step further and treat the scenarios in fully relational language by using knowledge operators, thus showing that trust is equivalent to the Truth Axiom in these cases. We construct a translation of measurement scenarios into multi-agent scenarios using the topological semantics of multi-modal logic, demonstrating that logical contextuality can be understood as the violation of soundness when mutual knowledge is assumed. To address the contradictions, we consider assuming distributed knowledge instead, which eliminates such violations but at the cost of lambda-dependence. We conclude by translating the main examples of multi-agent scenarios to their empirical model representation, where contextuality is identified as the cause of their contradictory results.
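The abstract's notion of logical contextuality — local consistency without a global section — can be illustrated with the standard possibilistic Hardy model from the sheaf-theoretic literature. The sketch below is illustrative only (the supports follow the usual Hardy table, not a table taken from this paper): a locally possible outcome admits no global value assignment consistent with every context.

```python
from itertools import product

# Two agents measure a/a' and b/b' with outcomes 0/1. For each context,
# the support lists the jointly possible outcome pairs (possibilistic
# Hardy-style empirical model; values are the standard illustrative table).
support = {
    ("a", "b"):   {(0, 0), (0, 1), (1, 0), (1, 1)},
    ("a", "b'"):  {(0, 1), (1, 0), (1, 1)},   # (0,0) impossible
    ("a'", "b"):  {(0, 1), (1, 0), (1, 1)},   # (0,0) impossible
    ("a'", "b'"): {(0, 0), (0, 1), (1, 0)},   # (1,1) impossible
}

def extends_globally(context, outcome):
    """Can a local section be extended to a global value assignment
    lying in every context's support?"""
    for g in product([0, 1], repeat=4):
        assign = dict(zip(["a", "a'", "b", "b'"], g))
        if (assign[context[0]], assign[context[1]]) != outcome:
            continue
        if all((assign[x], assign[y]) in support[(x, y)] for (x, y) in support):
            return True
    return False

# (a=0, b=0) is locally possible but has no global extension:
print(extends_globally(("a", "b"), (0, 0)))  # → False (logical contextuality)
print(extends_globally(("a", "b"), (1, 1)))  # → True  (this section does extend)
```

In the paper's translation, such a non-extendable section corresponds to a violation of soundness once mutual knowledge among the agents is assumed.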
Related papers
- QUITE: Quantifying Uncertainty in Natural Language Text in Bayesian Reasoning Scenarios [15.193544498311603]
We present QUITE, a dataset of real-world Bayesian reasoning scenarios with categorical random variables and complex relationships.
We conduct an extensive set of experiments, finding that logic-based models outperform out-of-the-box large language models on all reasoning types.
Our results provide evidence that neuro-symbolic models are a promising direction for improving complex reasoning.
arXiv Detail & Related papers (2024-10-14T12:44:59Z) - LLMs' Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements [59.71218039095155]
Task of reading comprehension (RC) provides a primary means to assess language models' natural language understanding (NLU) capabilities.
If the context aligns with the models' internal knowledge, it is hard to discern whether the models' answers stem from context comprehension or from internal information.
To address this issue, we suggest using RC on imaginary data based on fictitious facts and entities.
arXiv Detail & Related papers (2024-04-09T13:08:56Z) - Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from the inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z) - Generative Interpretation [0.0]
We introduce generative interpretation, a new approach to estimating contractual meaning using large language models.
We show that AI models can help factfinders ascertain ordinary meaning in context, quantify ambiguity, and fill gaps in parties' agreements.
arXiv Detail & Related papers (2023-08-14T02:59:27Z) - A Semantic Approach to Decidability in Epistemic Planning (Extended Version) [72.77805489645604]
We use a novel semantic approach to achieve decidability.
Specifically, we augment the logic of knowledge S5$_n$ with an interaction axiom called (knowledge) commutativity.
We prove that our framework admits a finitary non-fixpoint characterization of common knowledge, which is of independent interest.
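The commutativity axiom mentioned above requires that nested knowledge operators can be exchanged: $K_a K_b \varphi$ and $K_b K_a \varphi$ hold at the same worlds. This can be checked mechanically on a small S5$_n$ Kripke model; the model and names below are a toy illustration, not taken from the paper.

```python
from itertools import product

# Toy S5_2 Kripke model: four worlds, each agent's accessibility relation
# is an equivalence relation, given here as a partition of the worlds.
worlds = [0, 1, 2, 3]
partitions = {
    "a": [{0, 1}, {2, 3}],  # agent a cannot distinguish within rows
    "b": [{0, 2}, {1, 3}],  # agent b cannot distinguish within columns
}

def reach(agent, ws):
    """Worlds the agent considers possible from any world in ws."""
    out = set()
    for cell in partitions[agent]:
        if cell & ws:
            out |= cell
    return out

def knows(agent, prop):
    """K_agent prop: worlds where prop holds at every accessible world."""
    return {w for w in worlds if reach(agent, {w}) <= prop}

# Commutativity: K_a K_b p and K_b K_a p coincide for every proposition p.
for bits in product([False, True], repeat=4):
    p = {w for w, t in zip(worlds, bits) if t}
    assert knows("a", knows("b", p)) == knows("b", knows("a", p))
print("commutativity holds for every proposition on this model")
```

On this grid-shaped model the two relations compose to the same (universal) relation, which is why the exhaustive check over all sixteen propositions succeeds.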
arXiv Detail & Related papers (2023-07-28T11:26:26Z) - Invariant Causal Set Covering Machines [64.86459157191346]
Rule-based models, such as decision trees, appeal to practitioners due to their interpretable nature.
However, the learning algorithms that produce such models are often vulnerable to spurious associations and thus, they are not guaranteed to extract causally-relevant insights.
We propose Invariant Causal Set Covering Machines, an extension of the classical Set Covering Machine algorithm for conjunctions/disjunctions of binary-valued rules that provably avoids spurious associations.
arXiv Detail & Related papers (2023-06-07T20:52:01Z) - A general framework for consistent logical reasoning in Wigner's friend scenarios: subjective perspectives of agents within a single quantum circuit [0.0]
We show that every logical Wigner's friend scenario can be mapped to a single temporally ordered quantum circuit.
Our results establish that universal applicability of quantum theory does not pose any threat to multi-agent logical reasoning.
arXiv Detail & Related papers (2022-09-19T18:13:42Z) - Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z) - Logical Credal Networks [87.25387518070411]
This paper introduces Logical Credal Networks, an expressive probabilistic logic that generalizes many prior models that combine logic and probability.
We investigate its performance on maximum a posteriori inference tasks, including solving Mastermind games with uncertainty and detecting credit card fraud.
arXiv Detail & Related papers (2021-09-25T00:00:47Z) - Multi-Agent Systems based on Contextual Defeasible Logic considering Focus [0.0]
We extend previous work on distributed reasoning using Contextual Defeasible Logic (CDL)
This work presents a multi-agent model based on CDL that allows agents to reason with their local knowledge bases and mapping rules.
We present a use case scenario, some formalisations of the model proposed, and an initial implementation based on the BDI (Belief-Desire-Intention) agent model.
arXiv Detail & Related papers (2020-10-01T01:50:08Z) - Failures of Contingent Thinking [2.055949720959582]
We show that a wide range of behavior observed in experimental settings manifests as failures to perceive implications.
We show that an agent's account of implication identifies a subjective state-space that underlies her behavior.
arXiv Detail & Related papers (2020-07-15T14:21:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.