Intersectionality and Testimonial Injustice in Medical Records
- URL: http://arxiv.org/abs/2306.13675v1
- Date: Tue, 20 Jun 2023 17:22:50 GMT
- Title: Intersectionality and Testimonial Injustice in Medical Records
- Authors: Kenya S. Andrews and Bhuvani Shah and Lu Cheng
- Abstract summary: We use real-world medical data to determine whether medical records exhibit words that could lead to testimonial injustice.
We analyze how the intersectionality of demographic features (e.g. gender and race) makes a difference in uncovering testimonial injustice.
- Score: 10.06051533333397
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting testimonial injustice is an essential element of addressing
inequities and promoting inclusive healthcare practices, many of which are
life-critical. However, using a single demographic factor to detect testimonial
injustice does not fully encompass the nuanced identities that contribute to a
patient's experience. Further, some injustices may only be evident when
examining the nuances that arise through the lens of intersectionality.
Ignoring such injustices can result in poor quality of care or life-endangering
events. Thus, considering intersectionality could result in more accurate
classifications and just decisions. To illustrate this, we use real-world
medical data to determine whether medical records exhibit words that could lead
to testimonial injustice, employ fairness metrics (e.g. demographic parity,
differential intersectional fairness, and subgroup fairness) to assess the
severity with which subgroups experience testimonial injustice, and analyze
how the intersectionality of demographic features (e.g. gender and race) makes
a difference in uncovering testimonial injustice. Our analysis shows that
intersectionality exposes disparities in how subgroups are treated that a
single demographic attribute cannot, and that how someone is treated differs
based on the intersection of their demographic attributes. Neither finding has
previously been studied in clinical records or demonstrated empirically.
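
To make the measurement concrete, here is a minimal sketch, in Python, of computing a demographic-parity gap over intersectional subgroups (gender x race). It illustrates the general technique only and is not the authors' code; the records, attribute values, and flagging rule are hypothetical placeholders.

```python
from itertools import product

# Hypothetical records: demographic attributes plus a flag marking whether
# the note's text contains terms associated with testimonial injustice
# (how records get flagged is out of scope for this sketch).
records = [
    {"gender": "F", "race": "Black", "flagged": True},
    {"gender": "F", "race": "White", "flagged": False},
    {"gender": "M", "race": "Black", "flagged": False},
    {"gender": "M", "race": "White", "flagged": True},
    # ... more records ...
]

def subgroup_rates(records, attrs):
    """Flagged-record rate for every intersectional subgroup over `attrs`."""
    values = [sorted({r[a] for r in records}) for a in attrs]
    rates = {}
    for combo in product(*values):
        group = [r for r in records
                 if all(r[a] == v for a, v in zip(attrs, combo))]
        if group:
            rates[combo] = sum(r["flagged"] for r in group) / len(group)
    return rates

# Demographic parity asks these rates to be (near) equal; the gap is the
# spread between the least- and most-flagged intersectional subgroup.
rates = subgroup_rates(records, ["gender", "race"])
gap = max(rates.values()) - min(rates.values())
print(rates, gap)
```

Computing the same gap per single attribute and comparing it with the gap over the full intersection is what lets disparities hidden inside coarse groups surface.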
Related papers
- A Tutorial On Intersectionality in Fair Rankings [1.4883782513177093]
Biases can lead to discriminatory outcomes in a data-driven world.
Efforts towards responsible data science and responsible artificial intelligence aim to mitigate these biases.
arXiv Detail & Related papers (2025-02-07T21:14:21Z)
- Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs [17.424396781457975]
We argue that in a range of important settings, group difference awareness matters.
We present a benchmark suite composed of eight different scenarios for a total of 16k questions.
We show results across ten models that demonstrate difference awareness is a distinct dimension of fairness.
arXiv Detail & Related papers (2025-02-04T01:56:28Z)
- See Me and Believe Me: Causality and Intersectionality in Testimonial Injustice in Healthcare [10.443681644184966]
We use causal discovery to study the degree to which certain demographic features could lead to testimonial injustice.
One contributing feature can make a person more prone to experiencing another contributor to testimonial injustice.
This work is a first foray at using causal discovery to understand the nuanced experiences of patients in medical settings.
arXiv Detail & Related papers (2024-10-02T04:10:55Z)
- Towards Fair Patient-Trial Matching via Patient-Criterion Level Fairness Constraint [50.35075018041199]
This work proposes a fair patient-trial matching framework by generating a patient-criterion level fairness constraint.
The experimental results on real-world patient-trial and patient-criterion matching tasks demonstrate that the proposed framework can successfully mitigate biased predictions.
arXiv Detail & Related papers (2023-03-24T03:59:19Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- On Disentangled and Locally Fair Representations [95.6635227371479]
We study the problem of performing classification in a manner that is fair for sensitive groups, such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute.
arXiv Detail & Related papers (2022-05-05T14:26:50Z)
- Clinical trial site matching with improved diversity using fair policy learning [56.01170456417214]
We learn a model that maps a clinical trial description to a ranked list of potential trial sites.
Unlike existing fairness frameworks, the group membership of each trial site is non-binary.
We propose fairness criteria based on demographic parity to address such a multi-group membership scenario.
arXiv Detail & Related papers (2022-04-13T16:35:28Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Equality before the Law: Legal Judgment Consistency Analysis for Fairness [55.91612739713396]
In this paper, we propose an evaluation metric for judgment inconsistency, the Legal Inconsistency Coefficient (LInCo).
We simulate judges from different groups with legal judgment prediction (LJP) models and measure judicial inconsistency as the disagreement among the judgments produced by LJP models trained on different groups.
We employ LInCo to explore the inconsistency in real cases and come to the following observations: (1) Both regional and gender inconsistency exist in the legal system, but gender inconsistency is much less than regional inconsistency.
arXiv Detail & Related papers (2021-03-25T14:28:00Z)
- Characterizing Intersectional Group Fairness with Worst-Case Comparisons [0.0]
We discuss why fairness metrics need to be looked at under the lens of intersectionality.
We suggest a simple worst-case comparison method to expand the definitions of existing group fairness metrics.
We conclude with the social, legal and political framework to handle intersectional fairness in the modern context.
arXiv Detail & Related papers (2021-01-05T17:44:33Z)
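
The worst-case comparison in the last entry above admits a compact illustration. The following is a minimal sketch, assuming per-subgroup positive rates are already computed; the numbers and the disparate-impact-style min/max ratio are placeholders, not results or code from the paper.

```python
from itertools import combinations

# Hypothetical positive rates per intersectional subgroup (gender, race).
positive_rate = {
    ("F", "Black"): 0.42,
    ("F", "White"): 0.31,
    ("M", "Black"): 0.35,
    ("M", "White"): 0.30,
}

def worst_case_ratio(rates):
    """Expand a group fairness metric via its worst subgroup pair.

    Returns the smallest min/max ratio over all pairs of subgroup rates:
    1.0 means every intersectional subgroup is treated identically, and
    values near 0 flag a severe gap between some pair of subgroups.
    """
    return min(min(a, b) / max(a, b)
               for a, b in combinations(rates.values(), 2))

print(worst_case_ratio(positive_rate))  # 0.30 / 0.42 ~= 0.714
```

Reporting the worst pair rather than an average keeps the metric from hiding a badly treated intersectional subgroup behind better-off ones.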
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.