See Me and Believe Me: Causality and Intersectionality in Testimonial Injustice in Healthcare
- URL: http://arxiv.org/abs/2410.01227v1
- Date: Wed, 2 Oct 2024 04:10:55 GMT
- Title: See Me and Believe Me: Causality and Intersectionality in Testimonial Injustice in Healthcare
- Authors: Kenya S. Andrews, Mesrob I. Ohannessian, Elena Zheleva
- Abstract summary: We use causal discovery to study the degree to which certain demographic features could lead to testimonial injustice.
One contributing feature can make a person more prone to experiencing another contributor of testimonial injustice.
This work is a first foray into using causal discovery to understand the nuanced experiences of patients in medical settings.
- Score: 10.443681644184966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In medical settings, it is critical that all who are in need of care are correctly heard and understood. When this is not the case due to prejudices a listener has, the speaker is experiencing testimonial injustice, which, building upon recent work, we quantify by the presence of several categories of unjust vocabulary in medical notes. In this paper, we use FCI, a causal discovery method, to study the degree to which certain demographic features (e.g., age, gender, and race) could lead to marginalization by way of contributing to testimonial injustice. To achieve this, we review physicians' notes for each patient, identify occurrences of unjust vocabulary along with the demographic features present, and use causal discovery to build a Structural Causal Model (SCM) relating those demographic features to testimonial injustice. We analyze and discuss the resulting SCMs to show how these factors interact and how they influence the experience of injustice. Despite the potential presence of some confounding variables, we observe how one contributing feature can make a person more prone to experiencing another contributor of testimonial injustice. There is no single root of injustice, and thus intersectionality cannot be ignored. These results call for considering more than singular or equalized attributes of who a person is when analyzing and improving their experiences of bias and injustice. This work is thus a first foray into using causal discovery to understand the nuanced experiences of patients in medical settings, and its insights could be used to guide design principles throughout healthcare, to build trust and promote better patient care.
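To make the pipeline concrete, below is a minimal Python sketch of the approach the abstract describes: flag unjust vocabulary in physician notes, pair the resulting indicator with demographic features, and run FCI (here via the open-source causal-learn package) to recover a causal graph. The lexicon, feature names, toy data, and choice of library are illustrative assumptions, not the authors' actual vocabulary categories, dataset, or tooling.

```python
# Sketch only: a placeholder lexicon and synthetic data stand in for the
# paper's medical notes, demographic features, and unjust-vocabulary categories.
import numpy as np
from causallearn.search.ConstraintBased.FCI import fci

# Hypothetical lexicon of "unjust" terms (evidential/judgmental language).
UNJUST_TERMS = {"claims", "insists", "apparently", "non-compliant"}

def injustice_indicator(note: str) -> int:
    """Return 1 if a physician note contains any term from the (placeholder) lexicon."""
    return int(bool(set(note.lower().split()) & UNJUST_TERMS))

print(injustice_indicator("patient claims pain is severe"))  # -> 1

# Synthetic cohort: binary demographic features and an injustice indicator
# whose rate depends on them (a toy generating mechanism, not real data).
rng = np.random.default_rng(0)
n = 500
age_over_65 = rng.integers(0, 2, n)
female = rng.integers(0, 2, n)
black = rng.integers(0, 2, n)
injustice = (rng.random(n) < 0.10 + 0.20 * black + 0.10 * female).astype(int)
data = np.column_stack([age_over_65, female, black, injustice])

# FCI with a chi-squared independence test (the variables are binary).
graph, edges = fci(data, independence_test_method="chisq", alpha=0.05)
print(graph)
```

FCI outputs a partial ancestral graph, which tolerates latent confounders; edges into the injustice node suggest which demographic features, alone or by way of one another, contribute to the experience of testimonial injustice.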
Related papers
- Detecting clinician implicit biases in diagnoses using proximal causal inference [17.541477183671912]
We propose a causal inference approach to detect the effect of clinician implicit biases on patient outcomes in large-scale medical data.
We test our method on real-world data from the UK Biobank.
arXiv Detail & Related papers (2025-01-27T05:48:15Z)
- Belief in the Machine: Investigating Epistemological Blind Spots of Language Models [51.63547465454027]
Language models (LMs) are essential for reliable decision-making in fields like healthcare, law, and journalism.
This study systematically evaluates the capabilities of modern LMs, including GPT-4, Claude-3, and Llama-3, using a new dataset, KaBLE.
Our results reveal key limitations. First, while LMs achieve 86% accuracy on factual scenarios, their performance drops significantly with false scenarios.
Second, LMs struggle with recognizing and affirming personal beliefs, especially when those beliefs contradict factual data.
arXiv Detail & Related papers (2024-10-28T16:38:20Z)
- The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs) [0.0]
With the introduction of ChatGPT, Large Language Models (LLMs) have received enormous attention in healthcare.
Despite their potential benefits, researchers have underscored various ethical implications.
This work aims to map the ethical landscape surrounding the current stage of deployment of LLMs in medicine and healthcare.
arXiv Detail & Related papers (2024-03-21T15:20:07Z)
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- MoCa: Measuring Human-Language Model Alignment on Causal and Moral Judgment Tasks [49.60689355674541]
A rich literature in cognitive science has studied people's causal and moral intuitions.
This work has revealed a number of factors that systematically influence people's judgments.
We test whether large language models (LLMs) make causal and moral judgments about text-based scenarios that align with human participants.
arXiv Detail & Related papers (2023-10-30T15:57:32Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Sensitivity, Performance, Robustness: Deconstructing the Effect of Sociodemographic Prompting [64.80538055623842]
Sociodemographic prompting is a technique that steers the output of prompt-based models towards answers that humans with specific sociodemographic profiles would give.
We show that sociodemographic information affects model predictions and can be beneficial for improving zero-shot learning in subjective NLP tasks.
arXiv Detail & Related papers (2023-09-13T15:42:06Z)
- Intersectionality and Testimonial Injustice in Medical Records [10.06051533333397]
We use real-world medical data to determine whether medical records exhibit words that could lead to testimonial injustice.
We analyze how the intersectionality of demographic features (e.g., gender and race) makes a difference in uncovering testimonial injustice.
arXiv Detail & Related papers (2023-06-20T17:22:50Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
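As a concrete illustration of the kind of fairness metrics such a review covers, here is a small, self-contained Python example computing a demographic parity difference and an equal-opportunity (true-positive-rate) gap; the predictions and group labels are made-up toy data, not results from any of these papers.

```python
# Toy fairness-metric computation on synthetic predictions for two groups.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def selection_rate(pred, mask):
    """P(prediction = 1) within a group."""
    return pred[mask].mean()

def tpr(true, pred, mask):
    """P(prediction = 1 | label = 1) within a group."""
    positives = mask & (true == 1)
    return pred[positives].mean()

a, b = group == "A", group == "B"
dp_gap  = abs(selection_rate(y_pred, a) - selection_rate(y_pred, b))
tpr_gap = abs(tpr(y_true, y_pred, a) - tpr(y_true, y_pred, b))
print(f"demographic parity difference: {dp_gap:.2f}")
print(f"equal opportunity (TPR) gap:   {tpr_gap:.2f}")
```

A demographic parity difference of zero means both groups receive positive predictions at the same rate; the TPR gap compares how often truly positive cases are correctly identified in each group.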
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Towards Fairness in Classifying Medical Conversations into SOAP Sections [2.1485350418225244]
We identify and understand disparities in a model that classifies doctor-patient conversations into sections of a medical SOAP note.
A deeper analysis of the language in these conversations suggests these differences are related to and often attributable to the type of medical appointment.
Our findings stress the importance of understanding the disparities that may exist in the data itself and how that affects a model's ability to equally distribute benefits.
arXiv Detail & Related papers (2020-12-02T14:55:22Z)
- Hurtful Words: Quantifying Biases in Clinical Contextual Word Embeddings [16.136832979324467]
We pretrain deep embedding models (BERT) on medical notes from the MIMIC-III hospital dataset.
We identify dangerous latent relationships that are captured by the contextual word embeddings.
We evaluate performance gaps across different definitions of fairness on over 50 downstream clinical prediction tasks.
arXiv Detail & Related papers (2020-03-11T23:21:14Z)
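In the spirit of that setup, the following sketch probes a publicly available clinical BERT checkpoint (downloaded from the Hugging Face hub) with paired templates that differ only in a demographic term, to surface the kind of latent associations the paper quantifies. The model choice and templates are assumptions for illustration, not the paper's MIMIC-III-pretrained models or exact probes.

```python
# Probe a clinical masked language model with demographic-paired templates
# and compare its top fill-in predictions across groups.
from transformers import pipeline

# Bio_ClinicalBERT is a public clinical BERT; the paper pretrained its own
# models on MIMIC-III notes.
fill = pipeline("fill-mask", model="emilyalsentzer/Bio_ClinicalBERT")

templates = [
    "the white patient was [MASK] about her pain",
    "the black patient was [MASK] about her pain",
]

for template in templates:
    print(template)
    for pred in fill(template, top_k=5):
        print(f"  {pred['token_str']:<15} {pred['score']:.3f}")
```

Systematic differences in the predicted words (or their probabilities) between the paired templates are the kind of latent relationship the paper's fairness evaluation is designed to detect.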
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.