Mitigating Cognitive Biases in Multi-Criteria Crowd Assessment
- URL: http://arxiv.org/abs/2407.18938v1
- Date: Wed, 10 Jul 2024 16:00:23 GMT
- Title: Mitigating Cognitive Biases in Multi-Criteria Crowd Assessment
- Authors: Shun Ito, Hisashi Kashima
- Abstract summary: We focus on cognitive biases associated with multi-criteria assessment in crowdsourcing.
Crowdworkers who rate targets on multiple criteria simultaneously may provide biased responses due to the prominence of some criteria or their global impressions of the evaluation targets.
We propose two specific model structures for Bayesian opinion aggregation models that consider inter-criteria relations.
- Score: 22.540544209683592
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Crowdsourcing is an easy, cheap, and fast way to perform large-scale quality assessment; however, human judgments are often influenced by cognitive biases, which lowers their credibility. In this study, we focus on cognitive biases associated with multi-criteria assessment in crowdsourcing; crowdworkers who rate targets on multiple criteria simultaneously may provide biased responses due to the prominence of some criteria or their global impressions of the evaluation targets. To identify and mitigate such biases, we first create evaluation datasets using crowdsourcing and investigate the effect of inter-criteria cognitive biases on crowdworker responses. Then, we propose two specific model structures for Bayesian opinion aggregation models that consider inter-criteria relations. Our experiments show that incorporating the proposed structures into the aggregation model effectively reduces the cognitive biases and yields more accurate aggregation results.
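The abstract does not spell out the two proposed model structures, so the following is only an illustrative sketch of the underlying idea, not the authors' method: each worker's per-criterion ratings are assumed to be contaminated by a "halo" proportional to their global impression of the item, and that halo is estimated and removed before aggregation. The halo coefficient, the calibration items with known scores, and the closed-form correction below are all assumptions made to keep the toy example simple and identifiable.

```python
# Toy sketch of mitigating a "global impression" (halo) bias in multi-criteria
# crowd ratings. This is NOT the paper's Bayesian model; it is an illustrative
# stand-in using a few calibration items with known ground truth.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_workers, n_criteria, n_gold = 40, 25, 4, 8

# True per-criterion scores; the first n_gold items are calibration items whose
# true scores are assumed known (a common crowdsourcing quality-control device).
truth = rng.normal(0.0, 1.0, size=(n_items, n_criteria))
g_true = truth.mean(axis=1)                      # each item's "global impression"

# Each worker leaks a fraction (halo) of the global impression into every criterion.
halo = rng.uniform(0.2, 0.8, size=n_workers)
noise = rng.normal(0.0, 0.3, size=(n_workers, n_items, n_criteria))
ratings = truth[None, :, :] + halo[:, None, None] * g_true[None, :, None] + noise

# Naive aggregation: per-criterion mean over workers (still halo-contaminated).
naive = ratings.mean(axis=0)

# Step 1: estimate each worker's halo coefficient from the calibration items by
# regressing their rating residuals on the known global impressions.
g_gold = g_true[:n_gold]
est_halo = np.empty(n_workers)
for w in range(n_workers):
    resid = ratings[w, :n_gold, :] - truth[:n_gold, :]       # (n_gold, n_criteria)
    est_halo[w] = (resid * g_gold[:, None]).sum() / (n_criteria * (g_gold ** 2).sum())

# Step 2: estimate each item's global impression from the raw ratings (undoing the
# average halo inflation), subtract each worker's halo component, then aggregate.
g_hat = ratings.mean(axis=(0, 2)) / (1.0 + est_halo.mean())
corrected = ratings - est_halo[:, None, None] * g_hat[None, :, None]
debiased = corrected.mean(axis=0)


def rmse(a, b):
    return np.sqrt(((a - b) ** 2).mean())


mask = np.arange(n_items) >= n_gold              # evaluate only non-calibration items
print("naive RMSE:   ", rmse(naive[mask], truth[mask]))
print("debiased RMSE:", rmse(debiased[mask], truth[mask]))
```

On this simulated data, the halo-corrected aggregate tracks the ground truth noticeably better than the naive per-criterion mean. The paper instead encodes such inter-criteria relations directly in the structure of a Bayesian opinion aggregation model and infers them jointly from the crowd responses, without relying on calibration items.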
Related papers
- Covert Bias: The Severity of Social Views' Unalignment in Language Models Towards Implicit and Explicit Opinion [0.40964539027092917]
We evaluate the severity of bias toward a view by using a biased model in edge cases of excessive bias scenarios.
Our findings reveal a discrepancy in LLM performance in identifying implicit and explicit opinions, with a general tendency of bias toward explicit opinions of opposing stances.
The direct, incautious responses of the unaligned models suggest a need for further refinement of decisiveness.
arXiv Detail & Related papers (2024-08-15T15:23:00Z) - (De)Noise: Moderating the Inconsistency Between Human Decision-Makers [15.291993233528526]
We study whether algorithmic decision aids can be used to moderate the degree of inconsistency in human decision-making in the context of real estate appraisal.
We find that both (i) asking respondents to review their estimates in a series of algorithmically chosen pairwise comparisons and (ii) providing respondents with traditional machine advice are effective strategies for influencing human responses.
arXiv Detail & Related papers (2024-07-15T20:24:36Z) - ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models [53.00812898384698]
We argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking.
We highlight how cognitive biases can conflate fluent information with truthfulness, and how cognitive uncertainty affects the reliability of rating scores such as Likert scales.
We propose the ConSiDERS-The-Human evaluation framework consisting of 6 pillars -- Consistency, Scoring Criteria, Differentiating, User Experience, Responsible, and Scalability.
arXiv Detail & Related papers (2024-05-28T22:45:28Z) - Causality and Independence Enhancement for Biased Node Classification [56.38828085943763]
We propose a novel Causality and Independence Enhancement (CIE) framework, applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
Our approach, CIE, not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
arXiv Detail & Related papers (2023-10-14T13:56:24Z) - In Search of Insights, Not Magic Bullets: Towards Demystification of the Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z) - Deep Causal Reasoning for Recommendations [47.83224399498504]
A new trend in recommender system research is to negate the influence of confounders from a causal perspective.
We model the recommendation as a multi-cause multi-outcome (MCMO) inference problem.
We show that MCMO modeling may lead to high variance due to scarce observations associated with the high-dimensional causal space.
arXiv Detail & Related papers (2022-01-06T15:00:01Z) - General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model, analogous to gradient descent in functional space.
GGD can learn a more robust base model in both settings: task-specific biased models with prior knowledge, and a self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z) - Unbiased Pairwise Learning to Rank in Recommender Systems [4.058828240864671]
Unbiased learning to rank algorithms are appealing candidates and have already been applied in many applications with single categorical labels.
We propose a novel unbiased LTR algorithm to tackle the challenges, which innovatively models position bias in the pairwise fashion.
Experiment results on public benchmark datasets and internal live traffic show the superior results of the proposed method for both categorical and continuous labels.
arXiv Detail & Related papers (2021-11-25T06:04:59Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - Studying the Effects of Cognitive Biases in Evaluation of Conversational Agents [10.248512149493443]
We conduct a study with 77 crowdsourced workers to understand the role of cognitive biases, specifically anchoring bias, when humans are asked to evaluate the output of conversational agents.
We find that increased consistency in ratings across two experimental conditions may be a result of anchoring bias.
arXiv Detail & Related papers (2020-02-18T23:52:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.