Exploring Gender Disparities in Time to Diagnosis
- URL: http://arxiv.org/abs/2011.06100v2
- Date: Sun, 15 Nov 2020 02:05:21 GMT
- Title: Exploring Gender Disparities in Time to Diagnosis
- Authors: Tony Y. Sun, Oliver J. Bear Don't Walk IV, Jennifer L. Chen, Harry Reyes Nieva, Noémie Elhadad
- Abstract summary: We focus on time to diagnosis (TTD) by conducting two large-scale, complementary analyses among men and women.
We first find that women are consistently more likely to experience a longer TTD than men, even when presenting with the same conditions.
We explore how TTD disparities affect diagnostic performance between genders, both across time and persistently over time.
- Score: 2.222417699836475
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sex- and gender-based healthcare disparities contribute to differences in
health outcomes. We focus on time to diagnosis (TTD) by conducting two
large-scale, complementary analyses among men and women across 29 phenotypes
and 195K patients. We first find that women are consistently more likely to
experience a longer TTD than men, even when presenting with the same
conditions. We further explore how TTD disparities affect diagnostic
performance between genders, both across time and persistently over time, by evaluating
gender-agnostic disease classifiers across increasing diagnostic information.
In both fairness analyses, the diagnostic process favors men over women,
contradicting the previous observation that women may demonstrate relevant
symptoms earlier than men. These analyses suggest that TTD is an important yet
complex aspect when studying gender disparities, and warrants further
investigation.
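The core TTD comparison can be sketched in a few lines; note the patient records, dates, and values below are hypothetical placeholders for illustration only, not data from the study:

```python
# Hypothetical sketch of a time-to-diagnosis (TTD) comparison between genders.
# All records and dates are illustrative, not from the study's 195K-patient cohort.
from datetime import date
from statistics import median

# Each record: (gender, date of first relevant presentation, date of diagnosis)
records = [
    ("F", date(2019, 1, 10), date(2019, 4, 2)),
    ("F", date(2019, 2, 1),  date(2019, 7, 15)),
    ("M", date(2019, 1, 5),  date(2019, 2, 20)),
    ("M", date(2019, 3, 3),  date(2019, 5, 1)),
]

def ttd_days(presented, diagnosed):
    """TTD = days from first relevant presentation to diagnosis."""
    return (diagnosed - presented).days

# Group TTD values by gender and compare medians.
by_gender = {}
for gender, presented, diagnosed in records:
    by_gender.setdefault(gender, []).append(ttd_days(presented, diagnosed))

medians = {g: median(v) for g, v in by_gender.items()}
print(medians)
```

A real analysis would condition on phenotype (so that TTD is compared within the same condition) and use a statistical test on the full distributions rather than comparing medians alone.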
Related papers
- Exploring Gender Differences in Chronic Pain Discussions on Reddit [0.0]
This study utilized Natural Language Processing (NLP) to analyze and gain deeper insights into individuals' pain experiences. We classified posts into male and female corpora using the Hidden Attribute Model-Convolutional Neural Network (HAM-CNN). Our analysis revealed linguistic differences between genders, with female posts tending to be more emotionally focused.
arXiv Detail & Related papers (2025-07-11T01:11:06Z) - Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs) [82.57490175399693]
We study gender bias in 22 popular image-to-text vision-language assistants (VLAs)
Our results show that VLAs replicate human biases likely present in the data, such as real-world occupational imbalances.
To mitigate gender bias in these models, we find that finetuning-based debiasing methods achieve the best tradeoff between reducing bias and retaining performance on downstream tasks.
arXiv Detail & Related papers (2024-10-25T05:59:44Z) - GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models [73.23743278545321]
Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but have also been observed to magnify societal biases.
GenderCARE is a comprehensive framework that encompasses innovative Criteria, bias Assessment, Reduction techniques, and Evaluation metrics.
arXiv Detail & Related papers (2024-08-22T15:35:46Z) - Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words), a benchmark for evaluating gender-inclusive translation.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z) - The Male CEO and the Female Assistant: Evaluation and Mitigation of Gender Biases in Text-To-Image Generation of Dual Subjects [58.27353205269664]
We propose the Paired Stereotype Test (PST) framework, which queries T2I models to depict two individuals assigned with male-stereotyped and female-stereotyped social identities.
Using PST, we evaluate two aspects of gender biases -- the well-known bias in gendered occupation and a novel aspect: bias in organizational power.
arXiv Detail & Related papers (2024-02-16T21:32:27Z) - Sex-based Disparities in Brain Aging: A Focus on Parkinson's Disease [2.1506382989223782]
Despite previous research, a significant gap remains in understanding the role of sex in brain aging among PD patients.
The T1-weighted MRI-driven brain-predicted age difference was computed in a group of 373 PD patients from the PPMI database.
In the propensity score-matched male PD group, brain-PAD was found to be associated with a decline in general cognition, more severe sleep behavior disorder, reduced visuospatial function, and caudate atrophy.
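Brain-PAD here denotes the brain-predicted age difference: the gap between an MRI-driven model's predicted brain age and the patient's chronological age. A minimal sketch, with purely illustrative values (the study's actual predictor is a T1-weighted MRI model):

```python
# Hypothetical sketch: brain-PAD = model-predicted brain age minus chronological age.
# A positive value indicates an "older-appearing" brain. Ages are illustrative.
def brain_pad(predicted_age, chronological_age):
    return predicted_age - chronological_age

print(brain_pad(68.4, 63.0))  # positive, i.e. older-appearing brain
```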
arXiv Detail & Related papers (2023-09-18T18:35:54Z) - Are Sex-based Physiological Differences the Cause of Gender Bias for
Chest X-ray Diagnosis? [2.1601966913620325]
We investigate the causes of gender bias in machine learning-based chest X-ray diagnosis.
In particular, we explore the hypothesis that breast tissue leads to underexposure of the lungs.
We propose a new sampling method which addresses the highly skewed distribution of recordings per patient in two widely used public datasets.
arXiv Detail & Related papers (2023-08-09T10:19:51Z) - Assessing gender fairness in EEG-based machine learning detection of
Parkinson's disease: A multi-center study [0.125828876338076]
We perform a systematic analysis of the detection ability for gender sub-groups in a multi-center setting of a previously developed ML algorithm.
We find significant differences in the PD detection ability for males and females at testing time.
We also find significantly higher activity in a set of parietal and frontal EEG channels and frequency sub-bands for PD and non-PD males, which might explain the differences in PD detection ability between the gender sub-groups.
arXiv Detail & Related papers (2023-03-11T10:57:23Z) - Evaluate underdiagnosis and overdiagnosis bias of deep learning model on
primary open-angle glaucoma diagnosis in under-served patient populations [64.91773761529183]
Primary open-angle glaucoma (POAG) is the leading cause of blindness in the United States.
Deep learning has been widely used to detect POAG using fundus images.
Human bias in clinical diagnosis may be reflected and amplified in the widely-used deep learning models.
arXiv Detail & Related papers (2023-01-26T18:53:09Z) - Prediction of Gender from Longitudinal MRI data via Deep Learning on
Adolescent Data Reveals Unique Patterns Associated with Brain Structure and
Change over a Two-year Period [1.733758804432323]
We examine structural MRI data to predict gender and identify gender-related changes in brain structure.
Results demonstrate that gender prediction accuracy is exceptionally high (>97%) with training epochs >200.
These findings suggest that brain changes during adolescence could be studied by relating them to different behavioral and environmental factors.
arXiv Detail & Related papers (2022-09-15T19:57:16Z) - Assessing Group-level Gender Bias in Professional Evaluations: The Case
of Medical Student End-of-Shift Feedback [14.065979111248497]
Female physicians tend to be underrepresented in senior positions, earn less than their male counterparts, and receive fewer promotions.
Prior work has mainly searched for specific words using fixed dictionaries such as LIWC and has focused on recommendation letters.
We use a dataset of written and quantitative assessments of medical student performance on individual shifts of work, collected across multiple institutions, to investigate the extent to which gender bias exists in a day-to-day context for medical students.
arXiv Detail & Related papers (2022-06-01T05:01:36Z) - Towards Understanding Gender-Seniority Compound Bias in Natural Language
Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.