Personalized Detection of Cognitive Biases in Actions of Users from
Their Logs: Anchoring and Recency Biases
- URL: http://arxiv.org/abs/2206.15129v2
- Date: Fri, 1 Jul 2022 09:34:28 GMT
- Title: Personalized Detection of Cognitive Biases in Actions of Users from
Their Logs: Anchoring and Recency Biases
- Authors: Atanu R Sinha, Navita Goyal, Sunny Dhamnani, Tanay Asija, Raja K
Dubey, M V Kaarthik Raja, Georgios Theocharous
- Abstract summary: We focus on two cognitive biases - anchoring and recency.
The recognition of cognitive bias in computer science is largely in the domain of information retrieval.
We offer a principled approach along with Machine Learning to detect these two cognitive biases from Web logs of users' actions.
- Score: 9.445205340175555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cognitive biases are mental shortcuts humans use when dealing with information
and the environment, and they result in biased actions and behaviors (or, simply,
actions) without the individuals' awareness. Biases take many forms, with cognitive
biases occupying a central role that affects fairness, accountability,
transparency, ethics, law, medicine, and discrimination. Detection of biases is
considered a necessary step toward their mitigation. Herein, we focus on two
cognitive biases - anchoring and recency. The recognition of cognitive bias in
computer science is largely in the domain of information retrieval, and bias is
identified at an aggregate level with the help of annotated data. Proposing a
different direction for bias detection, we offer a principled approach along
with Machine Learning to detect these two cognitive biases from Web logs of
users' actions. Our individual-user-level detection makes it truly
personalized and does not rely on annotated data. Instead, we start with two
basic principles established in cognitive psychology, use modified training of
an attention network, and interpret attention weights in a novel way according
to those principles, to infer and distinguish between these two biases. The
personalized approach allows detection for specific users who are susceptible
to these biases when performing their tasks, and can help build awareness among
them so as to undertake bias mitigation.
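To make the approach above concrete, the sketch below illustrates one plausible reading of it: a small attention network is trained over a user's logged action sequence, and the learned attention weights are then inspected, with heavy weight on the earliest actions read as a sign of anchoring and heavy weight on the most recent actions as a sign of recency. This is a minimal, hypothetical sketch, not the authors' implementation; the model, the next-action training objective, and the thresholds in `detect_bias` are all illustrative assumptions.

```python
# Minimal, hypothetical sketch (not the paper's code) of attention-based detection of
# anchoring vs. recency bias from a user's action log. All names and thresholds here
# (ActionAttention, detect_bias, k, threshold) are illustrative assumptions.
import torch
import torch.nn as nn


class ActionAttention(nn.Module):
    """Attention pooling over a sequence of integer-coded user actions."""

    def __init__(self, num_actions: int, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(num_actions, dim)
        self.score = nn.Linear(dim, 1)           # per-position attention score
        self.out = nn.Linear(dim, num_actions)   # predicts the user's next action

    def forward(self, actions: torch.Tensor):
        h = self.embed(actions)                                    # (batch, seq, dim)
        alpha = torch.softmax(self.score(h).squeeze(-1), dim=-1)   # attention weights
        context = (alpha.unsqueeze(-1) * h).sum(dim=1)             # weighted pooling
        return self.out(context), alpha


def detect_bias(alpha: torch.Tensor, k: int = 3, threshold: float = 0.5) -> str:
    """Read one user's attention weights: mass on early vs. recent actions."""
    early_mass = alpha[:k].sum().item()     # attention on the first k actions
    recent_mass = alpha[-k:].sum().item()   # attention on the last k actions
    if early_mass > threshold:
        return "anchoring-susceptible"
    if recent_mass > threshold:
        return "recency-susceptible"
    return "no clear signal"


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ActionAttention(num_actions=50)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Toy log: one user's sequence of 10 actions and the action that followed it.
    seq = torch.randint(0, 50, (1, 10))
    next_action = torch.randint(0, 50, (1,))

    for _ in range(200):                      # stand-in for the paper's modified training
        logits, alpha = model(seq)
        loss = loss_fn(logits, next_action)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    _, alpha = model(seq)
    print(detect_bias(alpha.squeeze(0).detach()))
```

In the paper itself, the two cognitive-psychology principles govern both the modified training and the interpretation of the attention weights; the heuristic thresholding above only conveys the flavor of per-user, annotation-free detection.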
Related papers
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- Mitigating Biases in Collective Decision-Making: Enhancing Performance in the Face of Fake News [4.413331329339185]
We study the influence these biases can have in the pervasive problem of fake news by evaluating human participants' capacity to identify false headlines.
By focusing on headlines involving sensitive characteristics, we gather a comprehensive dataset to explore how human responses are shaped by their biases.
We show that demographic factors, headline categories, and the manner in which information is presented significantly influence errors in human judgment.
arXiv Detail & Related papers (2024-03-11T12:08:08Z)
- Cognitive Bias in Decision-Making with LLMs [19.87475562475802]
Large language models (LLMs) offer significant potential as tools to support an expanding range of decision-making tasks.
LLMs have been shown to inherit societal biases against protected groups, as well as be subject to bias functionally resembling cognitive bias.
Our work introduces BiasBuster, a framework designed to uncover, evaluate, and mitigate cognitive bias in LLMs.
arXiv Detail & Related papers (2024-02-25T02:35:56Z)
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against the algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Towards Automatic Bias Detection in Knowledge Graphs [5.402498799294428]
We describe a framework for identifying biases in knowledge graph embeddings, based on numerical bias metrics.
We illustrate the framework with three different bias measures on the task of profession prediction.
The relations flagged as biased can then be handed to decision makers for judgement upon subsequent debiasing.
arXiv Detail & Related papers (2021-09-19T03:58:25Z)
- Towards Learning an Unbiased Classifier from Biased Data via Conditional Adversarial Debiasing [17.113618920885187]
We present a novel adversarial debiasing method, which addresses a feature that is spuriously connected to the labels of training images.
We argue by a mathematical proof that our approach is superior to existing techniques for the abovementioned bias.
Our experiments show that our approach performs better than state-of-the-art techniques on a well-known benchmark dataset with real-world images of cats and dogs.
arXiv Detail & Related papers (2021-03-10T16:50:42Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Person Perception Biases Exposed: Revisiting the First Impressions Dataset [26.412669618149106]
This work revisits the ChaLearn First Impressions database, annotated for personality perception using pairwise comparisons via crowdsourcing.
We reveal existing person perception biases associated with perceived attributes like gender, ethnicity, age, and face attractiveness.
arXiv Detail & Related papers (2020-11-30T15:41:27Z)
- Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating "synthetic transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
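The failure-based debiasing scheme summarized in the last entry above trains a pair of networks together. A rough, hypothetical sketch of that idea follows, assuming the first ("biased") network is pushed toward easy, shortcut-friendly samples by a generalized cross-entropy-style loss, and its per-sample failures are then used to re-weight the loss of the second ("debiased") network; the exact losses and weighting rule are assumptions for illustration, not a faithful reproduction of that paper.

```python
# Hypothetical sketch of failure-based debiasing with a pair of networks.
# The loss shapes and the per-sample weighting rule are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def train_step(biased_net, debiased_net, opt_b, opt_d, x, y, q=0.7):
    # Biased network: a generalized cross-entropy-style loss, (1 - p_y**q) / q,
    # that concentrates on samples it already finds easy, amplifying shortcuts.
    logits_b = biased_net(x)
    p_y = F.softmax(logits_b, dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
    loss_b = ((1.0 - p_y.clamp(min=1e-6) ** q) / q).mean()

    # Debiased network: plain cross-entropy, re-weighted per sample by how much
    # the biased network fails relative to the debiased one.
    logits_d = debiased_net(x)
    ce_b = F.cross_entropy(logits_b.detach(), y, reduction="none")
    ce_d = F.cross_entropy(logits_d, y, reduction="none")
    weight = ce_b / (ce_b + ce_d.detach() + 1e-8)   # large when only the biased net fails
    loss_d = (weight * ce_d).mean()

    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_b.item(), loss_d.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    make_net = lambda: nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    f_b, f_d = make_net(), make_net()
    opt_b = torch.optim.SGD(f_b.parameters(), lr=0.1)
    opt_d = torch.optim.SGD(f_d.parameters(), lr=0.1)
    x, y = torch.randn(64, 8), torch.randint(0, 2, (64,))
    for _ in range(20):
        train_step(f_b, f_d, opt_b, opt_d, x, y)
```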
This list is automatically generated from the titles and abstracts of the papers on this site.