Exploring Conversational Agents as an Effective Tool for Measuring
Cognitive Biases in Decision-Making
- URL: http://arxiv.org/abs/2401.06686v1
- Date: Mon, 8 Jan 2024 10:23:52 GMT
- Title: Exploring Conversational Agents as an Effective Tool for Measuring
Cognitive Biases in Decision-Making
- Authors: Stephen Pilli
- Abstract summary: The research aims to explore conversational agents as an effective tool to measure various cognitive biases in different domains.
Our initial experiments to measure framing and loss-aversion biases indicate that conversational agents can be used effectively to measure these biases.
- Score: 0.65268245109828
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Heuristics and cognitive biases are an integral part of human
decision-making. Automatically detecting a particular cognitive bias could
enable intelligent tools to provide better decision-support. Detecting the
presence of a cognitive bias currently requires a hand-crafted experiment and
human interpretation. Our research aims to explore conversational agents as an
effective tool to measure various cognitive biases in different domains. Our
proposed conversational agent incorporates a bias measurement mechanism that is
informed by the existing experimental designs and various experimental tasks
identified in the literature. Our initial experiments to measure framing and
loss-aversion biases indicate that conversational agents can be used
effectively to measure these biases.
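As a rough illustration of the kind of measurement mechanism the abstract describes, the sketch below scores a framing effect from choices collected under two equivalent framings of the same decision problem. The prompt wording, the Response record, and the framing_bias_score function are illustrative assumptions made here, not the paper's actual implementation.

```python
# A minimal, illustrative sketch (not the paper's implementation) of how a
# framing-effect measurement could be scored once a conversational agent has
# collected choices for two equivalent framings of the same decision problem.
# The prompts follow the classic "lives saved vs. lives lost" task.

from dataclasses import dataclass
from typing import List

GAIN_FRAME = ("Program A saves 200 of 600 people for sure; Program B saves "
              "all 600 with probability 1/3 and nobody with probability 2/3.")
LOSS_FRAME = ("Program A lets 400 of 600 people die for sure; Program B lets "
              "nobody die with probability 1/3 and all 600 die with probability 2/3.")

@dataclass
class Response:
    frame: str        # "gain" or "loss", i.e. which prompt the user saw
    chose_sure: bool  # True if the user picked the risk-free option (Program A)

def framing_bias_score(responses: List[Response]) -> float:
    """Difference in risk-free choice rates between the two frames.

    A positive score means users were more risk-averse under the gain frame
    than under the loss frame, which is the classic framing effect.
    """
    gain = [r.chose_sure for r in responses if r.frame == "gain"]
    loss = [r.chose_sure for r in responses if r.frame == "loss"]
    if not gain or not loss:
        raise ValueError("need responses for both frames")
    return sum(gain) / len(gain) - sum(loss) / len(loss)

# Hypothetical responses gathered through the agent's dialogue:
responses = [
    Response("gain", True), Response("gain", True), Response("gain", False),
    Response("loss", False), Response("loss", False), Response("loss", True),
]
print(f"framing bias score: {framing_bias_score(responses):+.2f}")  # -> +0.33
```

A loss-aversion measurement could follow the same pattern, for example by comparing acceptance rates of mixed gambles as the ratio of potential loss to potential gain varies.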
Related papers
- Designing LLM-Agents with Personalities: A Psychometric Approach [0.47498241053872914]
This research introduces a novel methodology for assigning quantifiable, controllable and psychometrically validated personalities to Agents.
It seeks to overcome the constraints of human subject studies, proposing Agents as an accessible tool for social science inquiry.
arXiv Detail & Related papers (2024-10-25T01:05:04Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Integrating Psychometrics and Computing Perspectives on Bias and Fairness in Affective Computing: A Case Study of Automated Video Interviews [7.8034219994196174]
This paper provides an exposition of bias and fairness as applied to a typical machine learning pipeline for affective computing.
Various methods and metrics for measuring fairness and bias are discussed along with pertinent implications within the United States legal context.
arXiv Detail & Related papers (2023-05-04T08:05:05Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue, computational method, interaction environment, and sensing approach are speaking activity, support vector machines, meetings composed of 3-4 persons, and microphones and cameras, respectively.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- Personalized Detection of Cognitive Biases in Actions of Users from Their Logs: Anchoring and Recency Biases [9.445205340175555]
We focus on two cognitive biases - anchoring and recency.
Within computer science, the recognition of cognitive biases has largely been confined to information retrieval.
We offer a principled, machine-learning-based approach to detect these two cognitive biases from Web logs of users' actions.
arXiv Detail & Related papers (2022-06-30T08:51:15Z)
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z)
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- Assessing Human Interaction in Virtual Reality With Continually Learning Prediction Agents Based on Reinforcement Learning Algorithms: A Pilot Study [6.076137037890219]
We investigate how the interaction between a human and a continually learning prediction agent develops as the agent develops competency.
We develop a virtual reality environment and a time-based prediction task wherein learned predictions from a reinforcement learning (RL) algorithm augment human predictions.
Our findings suggest that human trust of the system may be influenced by early interactions with the agent, and that trust in turn affects strategic behaviour.
arXiv Detail & Related papers (2021-12-14T22:46:44Z)
- Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating synthetic "transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
- Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals [53.484562601127195]
We point out the inability to infer behavioral conclusions from probing results.
We offer an alternative method that focuses on how the information is being used, rather than on what information is encoded.
arXiv Detail & Related papers (2020-06-01T15:00:11Z)
- Studying the Effects of Cognitive Biases in Evaluation of Conversational Agents [10.248512149493443]
We conduct a study with 77 crowdsourced workers to understand the role of cognitive biases, specifically anchoring bias, when humans are asked to evaluate the output of conversational agents.
We find that increased consistency in ratings across two experimental conditions may be a result of anchoring bias.
arXiv Detail & Related papers (2020-02-18T23:52:39Z)