Uncommon Belief in Rationality
- URL: http://arxiv.org/abs/2412.09407v1
- Date: Thu, 12 Dec 2024 16:12:40 GMT
- Title: Uncommon Belief in Rationality
- Authors: Qi Shi, Pavel Naumov
- Abstract summary: This paper proposes a graph-based language for capturing higher-order beliefs that agents might have about the rationality of the other agents.
The two main contributions are a solution concept that captures the reasoning process based on a given belief structure and an efficient algorithm for compressing any belief structure into a unique minimal form.
- Score: 23.98373492872004
- License:
- Abstract: Common knowledge/belief in rationality is the traditional standard assumption in analysing interaction among agents. This paper proposes a graph-based language for capturing significantly more complicated structures of higher-order beliefs that agents might have about the rationality of the other agents. The two main contributions are a solution concept that captures the reasoning process based on a given belief structure and an efficient algorithm for compressing any belief structure into a unique minimal form.
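As a toy illustration of the graph-based idea (a sketch only, not the paper's formal solution concept; the function and variable names are hypothetical), one can encode "agent i believes agent j is rational" as a directed edge; the agents whose rationality i believes at some order are then exactly those reachable from i:

```python
from collections import deque

def believed_rational(beliefs, agent):
    """Agents whose rationality `agent` believes at some order:
    first-order beliefs are the direct out-edges of `agent`; higher-order
    beliefs are reached through believed-rational intermediaries.
    `beliefs[i]` lists the agents that i believes to be rational."""
    seen, queue = set(), deque(beliefs.get(agent, []))
    while queue:
        j = queue.popleft()
        if j not in seen:
            seen.add(j)
            queue.extend(beliefs.get(j, []))
    return seen

# Example: a believes b is rational, b believes c is rational.
beliefs = {"a": ["b"], "b": ["c"], "c": []}
print(believed_rational(beliefs, "a"))  # {'b', 'c'}
```

A cycle such as `{"a": ["b"], "b": ["a"]}` makes `a` reachable from itself, corresponding to a's second-order belief "b believes a is rational"; the paper's compression algorithm, by contrast, works on the belief structure itself, which this reachability sketch does not attempt to capture.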
Related papers
- Tell Me Why: Incentivizing Explanations [3.2754470919268543]
There is no known mechanism that provides incentives to elicit explanations for beliefs from agents.
Standard Bayesian models make assumptions that preempt the need for explanations.
This work argues that rationales (explanations of an agent's private information) lead to more efficient aggregation.
arXiv Detail & Related papers (2025-02-19T03:47:34Z)
- Are language models rational? The case of coherence norms and belief revision [63.78798769882708]
We consider logical coherence norms as well as coherence norms tied to the strength of belief in language models.
We argue that rational norms tied to coherence do apply to some language models, but not to others.
arXiv Detail & Related papers (2024-06-05T16:36:21Z)
- Conceptual and Unbiased Reasoning in Language Models [98.90677711523645]
We propose a novel conceptualization framework that forces models to perform conceptual reasoning on abstract questions.
We show that existing large language models fall short on conceptual reasoning, dropping 9% to 28% on various benchmarks.
We then discuss how models can improve since high-level abstract reasoning is key to unbiased and generalizable decision-making.
arXiv Detail & Related papers (2024-03-30T00:53:53Z)
- Provable Compositional Generalization for Object-Centric Learning [55.658215686626484]
Learning representations that generalize to novel compositions of known concepts is crucial for bridging the gap between human and machine perception.
We show that autoencoders that satisfy structural assumptions on the decoder and enforce encoder-decoder consistency will learn object-centric representations that provably generalize compositionally.
arXiv Detail & Related papers (2023-10-09T01:18:07Z)
- Towards Trustworthy Explanation: On Causal Rationalization [9.48539398357156]
We propose a new model of rationalization based on two causal desiderata, non-spuriousness and efficiency.
The superior performance of the proposed causal rationalization is demonstrated on real-world review and medical datasets.
arXiv Detail & Related papers (2023-06-25T03:34:06Z)
- HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale Supervision [118.0818807474809]
This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision.
Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and sentences within a document.
arXiv Detail & Related papers (2023-05-23T16:53:49Z)
- Rationale-Augmented Ensembles in Language Models [53.45015291520658]
We reconsider rationale-augmented prompting for few-shot in-context learning.
We identify rationale sampling in the output space as the key component to robustly improve performance.
We demonstrate that rationale-augmented ensembles achieve more accurate and interpretable results than existing prompting approaches.
arXiv Detail & Related papers (2022-07-02T06:20:57Z)
- Combination of interval-valued belief structures based on belief entropy [5.221097007424518]
The paper investigates the issues of combination and normalization of interval-valued belief structures within the framework of Dempster-Shafer theory of evidence.
A new optimality approach based on an uncertainty measure is developed, in which the problem of combining interval-valued belief structures reduces to combining basic probability assignments.
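Dempster's classical rule for combining basic probability assignments, which the interval-valued problem reduces to, can be sketched as follows (a standard textbook construction, not this paper's interval-valued method; the function name is hypothetical):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (BPAs) with Dempster's
    rule.  Each BPA maps frozenset focal elements to masses summing to 1.
    Mass landing on the empty intersection is treated as conflict and
    renormalized away."""
    combined, conflict = {}, 0.0
    for (b, x), (c, y) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            combined[a] = combined.get(a, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: BPAs cannot be combined")
    norm = 1.0 - conflict
    return {a: v / norm for a, v in combined.items()}

# Two BPAs over the frame {a, b}:
m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
m2 = {frozenset("b"): 0.3, frozenset("ab"): 0.7}
print(dempster_combine(m1, m2))
```

Here the conflict mass is 0.6 * 0.3 = 0.18 (the {a} and {b} focal elements are disjoint), so the surviving masses are divided by 0.82.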
arXiv Detail & Related papers (2020-11-27T10:09:52Z)
- On the use of evidence theory in belief base revision [0.0]
We propose the notion of credible belief base revision, which leads to the definition of two new formula-based revision operators.
These operators are built from consistent subbases that are maximal with respect to credibility rather than set inclusion or cardinality.
arXiv Detail & Related papers (2020-09-24T12:45:32Z)
- Invariant Rationalization [84.1861516092232]
A typical rationalization criterion, i.e. maximum mutual information (MMI), finds the rationale that maximizes the prediction performance based only on the rationale.
We introduce a game-theoretic invariant rationalization criterion where the rationales are constrained to enable the same predictor to be optimal across different environments.
We show both theoretically and empirically that the proposed rationales can rule out spurious correlations, generalize better to different test scenarios, and align better with human judgments.
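The MMI criterion can be caricatured in a few lines (an illustrative sketch with hypothetical helper names, not the authors' implementation): among candidate single-feature rationales, pick the one whose empirical mutual information with the label is largest.

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Empirical mutual information I(X; Y) in bits from (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mmi_rationale(features, labels):
    """One-feature caricature of MMI rationalization: return the index of
    the feature column with maximum mutual information with the label."""
    scores = [mutual_information(list(zip(col, labels)))
              for col in zip(*features)]
    return max(range(len(scores)), key=scores.__getitem__)

# Feature 0 determines the label; feature 1 is noise, so MMI picks 0.
rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 1, 1]
print(mmi_rationale(rows, labels))  # 0
```

The invariant-rationalization criterion described above replaces this single-environment MI objective with a constraint that the same predictor be optimal across environments, which is what rules out spurious correlations.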
arXiv Detail & Related papers (2020-03-22T00:50:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.