Learning Explainable Interventions to Mitigate HIV Transmission in Sex
Workers Across Five States in India
- URL: http://arxiv.org/abs/2012.01930v1
- Date: Mon, 30 Nov 2020 08:35:16 GMT
- Title: Learning Explainable Interventions to Mitigate HIV Transmission in Sex
Workers Across Five States in India
- Authors: Raghav Awasthi, Prachi Patel, Vineet Joshi, Shama Karkal, Tavpritesh
Sethi
- Abstract summary: This work combines structure learning, discriminative modeling, and grassroots-level expertise in designing interventions across five different Indian states.
A bootstrapped, ensemble-averaged Bayesian Network structure was learned to quantify the factors that could maximize condom usage.
A discriminative model was then constructed using XGBoost and random forest to predict condom-use behavior.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Female sex workers (FSWs) are one of the most vulnerable and stigmatized
groups in society. As a result, they often suffer from a lack of quality access
to care. Grassroots organizations engaged in improving health services are often
faced with the challenge of improving the effectiveness of interventions amid
complex influences. This work combines structure learning, discriminative
modeling, and grassroots-level expertise in designing interventions across five
different Indian states to discover the influence of non-obvious factors for
improving safe-sex practices in FSWs. A bootstrapped, ensemble-averaged
Bayesian Network structure was learned to quantify the factors that, as
revealed by the model, could maximize condom usage. A discriminative model was
then constructed using XGBoost and random forest to predict condom-use
behavior. The best model achieved 83% sensitivity, 99% specificity, and 99% area
under the precision-recall curve for the prediction. Both generative and
discriminative modeling approaches revealed that financial literacy training
was the primary influence and predictor of condom use in FSWs. These insights
have led to a currently ongoing field trial for assessing the real-world
utility of this approach. Our work highlights the potential of explainable
models for transparent discovery and prioritization of anti-HIV interventions
in female sex workers in a resource-limited setting.
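The bootstrapped, ensemble-averaged structure learning described above can be illustrated with a minimal, hypothetical sketch: resample the data with replacement, learn a network structure on each resample, and keep only the edges that recur across most bootstrap networks. The real paper uses a Bayesian-network structure learner; here a pairwise mutual-information threshold stands in for it, and all variable names (`fin_lit`, `condom_use`, `noise`) and thresholds are illustrative assumptions, not taken from the paper.

```python
import math
import random
from collections import Counter
from itertools import combinations

def mutual_info(xs, ys):
    # Plug-in estimate of mutual information (in nats) between two
    # discrete variables, from their empirical joint distribution.
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (a, b), c in joint.items():
        pxy = c / n
        mi += pxy * math.log(pxy / ((px[a] / n) * (py[b] / n)))
    return mi

def learn_edges(data, threshold=0.05):
    # Toy stand-in for a structure learner: keep an (undirected) edge
    # wherever pairwise mutual information exceeds a threshold.
    cols = list(data[0].keys())
    edges = set()
    for u, v in combinations(cols, 2):
        xs = [row[u] for row in data]
        ys = [row[v] for row in data]
        if mutual_info(xs, ys) > threshold:
            edges.add((u, v))
    return edges

def bootstrapped_structure(data, n_boot=50, stability=0.8, seed=0):
    # Ensemble averaging: count how often each edge is recovered across
    # bootstrap resamples, then retain only edges present in at least
    # `stability` fraction of the learned networks.
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_boot):
        resample = rng.choices(data, k=len(data))
        counts.update(learn_edges(resample))
    return {e for e, c in counts.items() if c / n_boot >= stability}
```

On synthetic data where condom use tracks a financial-literacy indicator, the stable edge survives the bootstrap filter while spurious edges to an independent noise variable do not; this edge-stability filtering is what makes the averaged structure more robust than any single learned network.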
Related papers
- Fairness in Machine Learning-based Hand Load Estimation: A Case Study on Load Carriage Tasks [1.1674893622721483]
We developed and evaluated a fair predictive model for hand load estimation that leverages a Variational Autoencoder (VAE) with feature disentanglement.
Our proposed fair algorithm outperformed conventional machine learning methods in both fairness and predictive accuracy, achieving a lower mean absolute error (MAE) difference across male and female sets.
These findings emphasize the importance of fairness-aware machine learning algorithms to prevent potential disadvantages in workplace health and safety for certain worker populations.
arXiv Detail & Related papers (2025-04-08T01:55:40Z) - Reducing Large Language Model Safety Risks in Women's Health using Semantic Entropy [29.14930590607661]
Large language models (LLMs) generate false or misleading outputs, known as hallucinations.
Traditional methods for quantifying uncertainty, such as perplexity, fail to capture meaning-level inconsistencies that lead to misinformation.
We evaluate semantic entropy (SE), a novel uncertainty metric, to detect hallucinations in AI-generated medical content.
arXiv Detail & Related papers (2025-03-01T00:57:52Z) - African Gender Classification Using Clothing Identification Via Deep Learning [0.0]
We use the AFRIFASHION1600 dataset, a curated collection of 1,600 images of African traditional clothing labeled into two gender classes: male and female.
A deep learning model, based on a modified VGG16 architecture and trained using transfer learning, was developed for classification.
The model achieved an accuracy of 87% on the test set, demonstrating strong predictive capability despite dataset imbalances favoring female samples.
arXiv Detail & Related papers (2025-02-26T20:59:59Z) - The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models [58.130894823145205]
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z) - Dataset Distribution Impacts Model Fairness: Single vs. Multi-Task Learning [2.9530211066840417]
We evaluate the performance of skin lesion classification using ResNet-based CNNs.
We present a linear programming method for generating datasets with varying patient sex and class labels.
arXiv Detail & Related papers (2024-07-24T15:23:26Z) - Using Pre-training and Interaction Modeling for ancestry-specific disease prediction in UK Biobank [69.90493129893112]
Recent genome-wide association studies (GWAS) have uncovered the genetic basis of complex traits, but show an under-representation of non-European descent individuals.
Here, we assess whether we can improve disease prediction across diverse ancestries using multiomic data.
arXiv Detail & Related papers (2024-04-26T16:39:50Z) - Analyzing Male Domestic Violence through Exploratory Data Analysis and Explainable Machine Learning Insights [0.5825410941577593]
Existing literature predominantly emphasizes female victimization in domestic violence scenarios, leading to an absence of research on male victims.
Our study represents a pioneering exploration of the underexplored realm of male domestic violence (MDV) within the Bangladeshi context.
Our findings challenge the prevailing notion that domestic abuse primarily affects women, thus emphasizing the need for tailored interventions and support systems for male victims.
arXiv Detail & Related papers (2024-03-22T19:53:21Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims that is not observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - Sensitivity, Performance, Robustness: Deconstructing the Effect of
Sociodemographic Prompting [64.80538055623842]
Sociodemographic prompting is a technique that steers the output of prompt-based models towards answers that humans with specific sociodemographic profiles would give.
We show that sociodemographic information affects model predictions and can be beneficial for improving zero-shot learning in subjective NLP tasks.
arXiv Detail & Related papers (2023-09-13T15:42:06Z) - Forecasting Patient Flows with Pandemic Induced Concept Drift using
Explainable Machine Learning [0.0]
This study investigates how a suite of novel quasi-real-time variables can improve the forecasting models of patient flows.
The prevailing COVID-19 Alert Level feature together with Google search terms and pedestrian traffic were effective at producing generalisable forecasts.
arXiv Detail & Related papers (2022-11-01T20:42:26Z) - What Do You See in this Patient? Behavioral Testing of Clinical NLP
Models [69.09570726777817]
We introduce an extendable testing framework that evaluates the behavior of clinical outcome models regarding changes of the input.
We show that model behavior varies drastically even when fine-tuned on the same data and that allegedly best-performing models have not always learned the most medically plausible patterns.
arXiv Detail & Related papers (2021-11-30T15:52:04Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A
Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - Improving healthcare access management by predicting patient no-show
behaviour [0.0]
This work develops a Decision Support System (DSS) to support the implementation of strategies to encourage attendance.
We assess the effectiveness of different machine learning approaches to improve the accuracy of regression models.
In addition to quantifying relationships reported in previous studies, we find that income and neighbourhood crime statistics affect no-show probabilities.
arXiv Detail & Related papers (2020-12-10T14:57:25Z) - Using Deep Learning and Explainable Artificial Intelligence in Patients'
Choices of Hospital Levels [10.985001960872264]
This study used nationwide insurance data, accumulated possible features discussed in existing literature, and used a deep neural network to predict patients' choices of hospital levels.
The results showed that the model predicted with a high area under the receiver operating characteristic curve (AUC) (0.90), accuracy (0.90), sensitivity (0.94), and specificity (0.97) despite highly imbalanced labels.
arXiv Detail & Related papers (2020-06-24T02:15:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.