NLP for Maternal Healthcare: Perspectives and Guiding Principles in the
Age of LLMs
- URL: http://arxiv.org/abs/2312.11803v2
- Date: Tue, 23 Jan 2024 19:37:20 GMT
- Title: NLP for Maternal Healthcare: Perspectives and Guiding Principles in the
Age of LLMs
- Authors: Maria Antoniak, Aakanksha Naik, Carla S. Alvarado, Lucy Lu Wang, Irene
Y. Chen
- Abstract summary: We propose a set of guiding principles for the use of NLP in maternal healthcare.
We surveyed healthcare workers and birthing people about their values, needs, and perceptions of NLP tools.
For each principle, we describe its underlying rationale and provide practical advice.
- Score: 13.090847961966679
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Ethical frameworks for the use of natural language processing (NLP) are
urgently needed to shape how large language models (LLMs) and similar tools are
used for healthcare applications. Healthcare faces existing challenges
including the balance of power in clinician-patient relationships, systemic
health disparities, historical injustices, and economic constraints. Drawing
directly from the voices of those most affected, and focusing on a case study
of a specific healthcare setting, we propose a set of guiding principles for
the use of NLP in maternal healthcare. We led an interactive session centered
on an LLM-based chatbot demonstration during a full-day workshop with 39
participants, and additionally surveyed 30 healthcare workers and 30 birthing
people about their values, needs, and perceptions of NLP tools in the context
of maternal health. We conducted quantitative and qualitative analyses of the
survey results and interactive discussions to consolidate our findings into a
set of guiding principles. We propose nine principles for ethical use of NLP
for maternal healthcare, grouped into three themes: (i) recognizing contextual
significance, (ii) holistic measurements, and (iii) who/what is valued. For each
principle, we describe its underlying rationale and provide practical advice.
This set of principles can provide a methodological pattern for other
researchers and serve as a resource to practitioners working on maternal health
and other healthcare fields to emphasize the importance of technical nuance,
historical context, and inclusive design when developing NLP technologies for
clinical use.
Related papers
- Medical Reasoning in the Era of LLMs: A Systematic Review of Enhancement Techniques and Applications [59.721265428780946]
Large Language Models (LLMs) in medicine have enabled impressive capabilities, yet a critical gap remains in their ability to perform systematic, transparent, and verifiable reasoning.
This paper provides the first systematic review of this emerging field.
We propose a taxonomy of reasoning enhancement techniques, categorized into training-time strategies and test-time mechanisms.
arXiv Detail & Related papers (2025-08-01T14:41:31Z) - Med-CoDE: Medical Critique based Disagreement Evaluation Framework [72.42301910238861]
The reliability and accuracy of large language models (LLMs) in medical contexts remain critical concerns.
Current evaluation methods often lack robustness and fail to provide a comprehensive assessment of LLM performance.
We propose Med-CoDE, an evaluation framework specifically designed for medical LLMs, to address these challenges.
arXiv Detail & Related papers (2025-04-21T16:51:11Z) - Quantifying the Reasoning Abilities of LLMs on Real-world Clinical Cases [48.87360916431396]
We introduce MedR-Bench, a benchmarking dataset of 1,453 structured patient cases, annotated with reasoning references.
We propose a framework encompassing three critical stages: examination recommendation, diagnostic decision-making, and treatment planning, simulating the entire patient care journey.
Using this benchmark, we evaluate five state-of-the-art reasoning LLMs, including DeepSeek-R1, OpenAI-o3-mini, and Gemini-2.0-Flash Thinking.
arXiv Detail & Related papers (2025-03-06T18:35:39Z) - MedEthicEval: Evaluating Large Language Models Based on Chinese Medical Ethics [30.129774371246086]
This paper introduces MedEthicEval, a novel benchmark designed to evaluate large language models (LLMs) in the domain of medical ethics.
Our framework encompasses two key components: knowledge, assessing the models' grasp of medical ethics principles, and application, focusing on their ability to apply these principles across diverse scenarios.
arXiv Detail & Related papers (2025-03-04T08:01:34Z) - Natural Language-Assisted Multi-modal Medication Recommendation [97.07805345563348]
We introduce the Natural Language-Assisted Multi-modal Medication Recommendation (NLA-MMR) framework.
The NLA-MMR is a multi-modal alignment framework designed to learn knowledge from the patient view and medication view jointly.
In this vein, we employ pretrained language models (PLMs) to extract in-domain knowledge regarding patients and medications.
arXiv Detail & Related papers (2025-01-13T09:51:50Z) - Participatory Assessment of Large Language Model Applications in an Academic Medical Center [1.244412242301951]
Large Language Models (LLMs) have shown promising performance in healthcare-related applications.
Their deployment in the medical domain poses unique challenges of an ethical, regulatory, and technical nature.
arXiv Detail & Related papers (2024-12-09T21:45:35Z) - Demystifying Large Language Models for Medicine: A Primer [50.83806796466396]
Large language models (LLMs) represent a transformative class of AI tools capable of revolutionizing various aspects of healthcare.
This tutorial aims to equip healthcare professionals with the tools necessary to effectively integrate LLMs into clinical practice.
arXiv Detail & Related papers (2024-10-24T15:41:56Z) - Dr-LLaVA: Visual Instruction Tuning with Symbolic Clinical Grounding [53.629132242389716]
Vision-Language Models (VLMs) can support clinicians by analyzing medical images and engaging in natural language interactions.
VLMs often exhibit "hallucinogenic" behavior, generating textual outputs not grounded in contextual multimodal information.
We propose a new alignment algorithm that uses symbolic representations of clinical reasoning to ground VLMs in medical knowledge.
arXiv Detail & Related papers (2024-05-29T23:19:28Z) - Large language models in healthcare and medical domain: A review [4.456243157307507]
Large language models (LLMs) provide proficient responses to free-text queries.
This review explores the potential of LLMs to amplify the efficiency and effectiveness of diverse healthcare applications.
arXiv Detail & Related papers (2023-12-12T20:54:51Z) - Foundation Metrics for Evaluating Effectiveness of Healthcare
Conversations Powered by Generative AI [38.497288024393065]
Generative Artificial Intelligence is set to revolutionize healthcare delivery by transforming traditional patient care into a more personalized, efficient, and proactive process.
This paper explores state-of-the-art evaluation metrics that are specifically applicable to the assessment of interactive conversational models in healthcare.
arXiv Detail & Related papers (2023-09-21T19:36:48Z) - Are Large Language Models Ready for Healthcare? A Comparative Study on
Clinical Language Understanding [12.128991867050487]
Large language models (LLMs) have made significant progress in various domains, including healthcare.
In this study, we evaluate state-of-the-art LLMs within the realm of clinical language understanding tasks.
arXiv Detail & Related papers (2023-04-09T16:31:47Z) - Large Language Models for Healthcare Data Augmentation: An Example on
Patient-Trial Matching [49.78442796596806]
We propose an innovative privacy-aware data augmentation approach for patient-trial matching (LLM-PTM).
Our experiments demonstrate a 7.32% average improvement in performance using the proposed LLM-PTM method, and the generalizability to new data is improved by 12.12%.
arXiv Detail & Related papers (2023-03-24T03:14:00Z) - Align, Reason and Learn: Enhancing Medical Vision-and-Language
Pre-training with Knowledge [68.90835997085557]
We propose a systematic and effective approach to enhance medical vision-and-language pre-training with structured medical knowledge from three perspectives.
First, we align the representations of the vision encoder and the language encoder through knowledge.
Second, we inject knowledge into the multi-modal fusion model to enable the model to perform reasoning, using knowledge as a supplement to the input image and text.
Third, we guide the model to put emphasis on the most critical information in images and texts by designing knowledge-induced pretext tasks.
arXiv Detail & Related papers (2022-09-15T08:00:01Z) - Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z) - Benchmarking Automated Clinical Language Simplification: Dataset,
Algorithm, and Evaluation [48.87254340298189]
We construct a new dataset named MedLane to support the development and evaluation of automated clinical language simplification approaches.
We propose a new model called DECLARE that follows the human annotation procedure and achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-12-04T06:09:02Z) - A Systematic Review of Natural Language Processing for Knowledge
Management in Healthcare [0.6193838300896449]
The objective of this paper is to identify the potential of NLP, in particular how NLP is used to support the knowledge management process in the healthcare domain.
This paper provides a comprehensive survey of the state-of-the-art NLP research with a particular focus on how knowledge is created, captured, shared, and applied in the healthcare domain.
arXiv Detail & Related papers (2020-07-17T17:50:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.