Cognitive Bias and Belief Revision
- URL: http://arxiv.org/abs/2307.05069v1
- Date: Tue, 11 Jul 2023 07:13:52 GMT
- Title: Cognitive Bias and Belief Revision
- Authors: Panagiotis Papadamos (Technical University of Denmark), Nina
Gierasimczuk (Technical University of Denmark)
- Abstract summary: We formalise three types of cognitive bias within the framework of belief revision.
These are confirmation bias, framing bias, and anchoring bias.
We investigate the reliability of biased belief revision methods in truth tracking.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we formalise three types of cognitive bias within the framework
of belief revision: confirmation bias, framing bias, and anchoring bias. We
interpret them generally, as restrictions on the process of iterated revision,
and we apply them to three well-known belief revision methods: conditioning,
lexicographic revision, and minimal revision. We investigate the reliability of
biased belief revision methods in truth tracking. We also run computer
simulations to assess the performance of biased belief revision in random
scenarios.
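The three revision policies named in the abstract can be illustrated on a simple plausibility order. The sketch below uses the standard textbook definitions of conditioning, lexicographic revision, and minimal revision; it is not code from the paper, and the world names and proposition are illustrative.

```python
# A "state" is a list of worlds ordered from most to least plausible;
# a proposition is the set of worlds where it holds. Each policy
# revises the order by a new proposition.

def conditioning(order, prop):
    """Drop all worlds where prop fails (irreversible)."""
    return [w for w in order if w in prop]

def lexicographic(order, prop):
    """Promote every prop-world above every non-prop-world,
    preserving relative order within each group."""
    return [w for w in order if w in prop] + [w for w in order if w not in prop]

def minimal(order, prop):
    """Move only the most plausible prop-world to the top,
    leaving the rest of the order untouched."""
    best = next(w for w in order if w in prop)  # assumes prop is satisfiable
    return [best] + [w for w in order if w != best]

worlds = ["w1", "w2", "w3", "w4"]   # w1 currently most plausible
p = {"w3", "w4"}                    # new evidence: proposition true at w3, w4
print(lexicographic(worlds, p))     # ['w3', 'w4', 'w1', 'w2']
print(minimal(worlds, p))           # ['w3', 'w1', 'w2', 'w4']
print(conditioning(worlds, p))      # ['w3', 'w4']
```

A bias such as confirmation bias can then be modelled as a restriction on which propositions the agent is willing to revise by, e.g. ignoring inputs inconsistent with the current most plausible world.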
Related papers
- Towards debiasing code review support [1.188383832081829]
This paper explores harmful cases caused by cognitive biases during code review.
In particular, we design prototypes covering confirmation bias and decision fatigue.
We show that some techniques could be implemented in existing code review tools.
arXiv Detail & Related papers (2024-07-01T15:58:14Z)
- Bias in Language Models: Beyond Trick Tests and Toward RUTEd Evaluation [55.66090768926881]
We study the correspondence between decontextualized "trick tests" and evaluations that are more grounded in Realistic Use and Tangible Effects.
We compare three decontextualized evaluations adapted from the current literature to three analogous RUTEd evaluations applied to long-form content generation.
We found no correspondence between trick tests and RUTEd evaluations.
arXiv Detail & Related papers (2024-02-20T01:49:15Z)
- Semantic Properties of cosine based bias scores for word embeddings [48.0753688775574]
We propose requirements for bias scores to be considered meaningful for quantifying biases.
We analyze cosine based scores from the literature with regard to these requirements.
We underline these findings with experiments to show that the bias scores' limitations have an impact in the application case.
arXiv Detail & Related papers (2024-01-27T20:31:10Z)
- GenAI Mirage: The Impostor Bias and the Deepfake Detection Challenge in the Era of Artificial Illusions [6.184770966699034]
This paper examines the impact of cognitive biases on decision-making in forensics and digital forensics.
It assesses existing methods to mitigate biases and improve decision-making.
It introduces the novel "Impostor Bias", which arises as a systematic tendency to question the authenticity of multimedia content.
arXiv Detail & Related papers (2023-12-24T10:01:40Z)
- Zero-shot Faithful Factual Error Correction [53.121642212060536]
Faithfully correcting factual errors is critical for maintaining the integrity of textual knowledge bases and preventing hallucinations in sequence-to-sequence models.
We present a zero-shot framework that formulates questions about input claims, looks for correct answers in the given evidence, and assesses the faithfulness of each correction based on its consistency with the evidence.
arXiv Detail & Related papers (2023-05-13T18:55:20Z)
- The SAME score: Improved cosine based bias score for word embeddings [49.75878234192369]
We introduce SAME, a novel bias score for semantic bias in embeddings.
We show that SAME is capable of measuring semantic bias and identifying potential causes of social bias in downstream tasks.
arXiv Detail & Related papers (2022-03-28T09:28:13Z)
- Counterfactual Evaluation for Explainable AI [21.055319253405603]
We propose a new methodology to evaluate the faithfulness of explanations from the counterfactual reasoning perspective.
We introduce two algorithms to find the proper counterfactuals in both discrete and continuous scenarios and then use the acquired counterfactuals to measure faithfulness.
arXiv Detail & Related papers (2021-09-05T01:38:49Z)
- Uncovering Latent Biases in Text: Method and Application to Peer Review [38.726731935235584]
We introduce a novel framework to quantify bias in text caused by the visibility of subgroup membership indicators.
We apply our framework to quantify biases in the text of peer reviews from a reputed machine learning conference.
arXiv Detail & Related papers (2020-10-29T01:24:19Z)
- OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings [47.721931801603105]
We propose OSCaR, a bias-mitigating method that focuses on disentangling biased associations between concepts instead of removing concepts wholesale.
Our experiments on gender biases show that OSCaR is a well-balanced approach that ensures that semantic information is retained in the embeddings and bias is also effectively mitigated.
arXiv Detail & Related papers (2020-06-30T18:18:13Z)
- Controlling Overestimation Bias with Truncated Mixture of Continuous Distributional Quantile Critics [65.51757376525798]
Overestimation bias is one of the major impediments to accurate off-policy learning.
This paper investigates a novel way to alleviate the overestimation bias in a continuous control setting.
Our method, Truncated Quantile Critics (TQC), blends three ideas: distributional representation of a critic, truncation of the critics' predictions, and ensembling of multiple critics.
arXiv Detail & Related papers (2020-05-08T19:52:26Z)
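The truncation idea in the TQC entry above can be sketched numerically: pool the quantile "atoms" predicted by an ensemble of critics, then discard the largest atoms before averaging, which pushes the value estimate down and counteracts overestimation. Network details are omitted; the atom values, function name, and the per-critic drop count are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def truncated_target(atoms_per_critic, drop_per_critic):
    """atoms_per_critic: list of 1-D arrays of quantile atoms, one per critic.
    drop_per_critic: number of top atoms to discard per critic."""
    pooled = np.sort(np.concatenate(atoms_per_critic))
    total_drop = drop_per_critic * len(atoms_per_critic)
    kept = pooled[:len(pooled) - total_drop]  # discard the largest atoms
    return kept.mean()

critic_a = np.array([1.0, 2.0, 3.0, 10.0])  # one over-optimistic atom
critic_b = np.array([1.5, 2.5, 3.5, 9.0])
print(truncated_target([critic_a, critic_b], drop_per_critic=1))  # 2.25
```

Dropping one atom per critic here removes the two outliers (9.0 and 10.0), so the target mean falls from 4.0625 to 2.25.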
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.