Pluralistic Alignment for Healthcare: A Role-Driven Framework
- URL: http://arxiv.org/abs/2509.10685v2
- Date: Thu, 18 Sep 2025 16:57:40 GMT
- Title: Pluralistic Alignment for Healthcare: A Role-Driven Framework
- Authors: Jiayou Zhong, Anudeex Shetty, Chao Jia, Xuanrui Lin, Usman Naseem,
- Abstract summary: We propose a first lightweight, generalizable, pluralistic alignment approach, EthosAgents, to simulate diverse perspectives and values. We empirically show that it advances the pluralistic alignment for all three modes across seven varying-sized open and closed models. Our findings reveal that health-related pluralism demands adaptable and normatively aware approaches, offering insights into how these models can better respect diversity in other high-stakes domains.
- Score: 14.636276754192219
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As large language models are increasingly deployed in sensitive domains such as healthcare, ensuring their outputs reflect the diverse values and perspectives held across populations is critical. However, existing alignment approaches, including pluralistic paradigms like Modular Pluralism, often fall short in the health domain, where personal, cultural, and situational factors shape pluralism. Motivated by the aforementioned healthcare challenges, we propose a first lightweight, generalizable, pluralistic alignment approach, EthosAgents, designed to simulate diverse perspectives and values. We empirically show that it advances the pluralistic alignment for all three modes across seven varying-sized open and closed models. Our findings reveal that health-related pluralism demands adaptable and normatively aware approaches, offering insights into how these models can better respect diversity in other high-stakes domains.
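The abstract describes simulating diverse perspectives via role-driven agents but includes no code. Below is a minimal illustrative sketch of role-driven perspective prompting under that general idea; all names (`Persona`, `build_role_prompt`, `simulate_perspectives`, the persona descriptions, and the stub generator) are hypothetical and not taken from the EthosAgents implementation.

```python
# Illustrative sketch of role-driven perspective simulation.
# All identifiers and persona texts are hypothetical; the actual
# EthosAgents method may differ substantially.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Persona:
    """A role the model is asked to adopt when answering."""
    name: str
    values: str  # short description of the perspective's priorities


def build_role_prompt(persona: Persona, question: str) -> str:
    """Wrap a health question in a role instruction for one persona."""
    return (
        f"You are answering as a {persona.name} whose priorities are: "
        f"{persona.values}.\nQuestion: {question}\nAnswer:"
    )


def simulate_perspectives(
    question: str,
    personas: List[Persona],
    generate: Callable[[str], str],
) -> Dict[str, str]:
    """Query the same model once per persona and collect the answers."""
    return {p.name: generate(build_role_prompt(p, question)) for p in personas}


personas = [
    Persona("clinician", "evidence-based treatment and patient safety"),
    Persona("patient advocate", "autonomy, informed consent, lived experience"),
    Persona("public-health official", "population-level outcomes and equity"),
]


# Stub standing in for an actual LLM call.
def echo_model(prompt: str) -> str:
    return f"[response conditioned on: {prompt.splitlines()[0]}]"


answers = simulate_perspectives(
    "Should screening start at age 40?", personas, echo_model
)
for name, answer in answers.items():
    print(name, "->", answer)
```

In a real setup, `generate` would call an LLM and the per-persona answers would then be aggregated or compared to assess pluralistic coverage.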
Related papers
- VISPA: Pluralistic Alignment via Automatic Value Selection and Activation [82.8405077104797]
We introduce VISPA, a training-free pluralistic alignment framework. We show VISPA is performant across all pluralistic alignment modes in healthcare and beyond.
arXiv Detail & Related papers (2026-01-19T06:38:52Z)
- When Alignment Fails: Multimodal Adversarial Attacks on Vision-Language-Action Models [75.16145284285456]
We introduce VLA-Fool, a comprehensive study of multimodal adversarial robustness in embodied VLA models under both white-box and black-box settings. We develop the first automatically crafted and semantically guided prompting framework. Experiments on the LIBERO benchmark reveal that even minor multimodal perturbations can cause significant behavioral deviations.
arXiv Detail & Related papers (2025-11-20T10:14:32Z)
- Towards Low-Resource Alignment to Diverse Perspectives with Sparse Feedback [13.065059683491958]
We aim to enhance pluralistic alignment of language models in a low-resource setting with two methods: pluralistic decoding and model steering. Our proposed methods decrease false positives in several high-stakes tasks such as hate speech detection and misinformation detection. We hope our work highlights the importance of diversity and how language models can be adapted to consider nuanced perspectives.
arXiv Detail & Related papers (2025-10-17T23:06:21Z)
- Towards deployment-centric multimodal AI beyond vision and language [69.58738352730103]
We advocate a deployment-centric workflow that incorporates deployment constraints early to reduce the likelihood of undeployable solutions. We identify common multimodal-AI-specific challenges shared across disciplines and examine three real-world use cases. By fostering multidisciplinary dialogue and open research practices, our community can accelerate deployment-centric development for broad societal impact.
arXiv Detail & Related papers (2025-04-04T17:20:05Z)
- VITAL: A New Dataset for Benchmarking Pluralistic Alignment in Healthcare [9.087074203425061]
Existing alignment paradigms fail to account for the diversity of perspectives across cultures, demographics, and communities. This is particularly critical in health-related scenarios, where plurality is essential due to the influence of culture, religion, personal values, and conflicting opinions. This work highlights the limitations of current approaches and lays the groundwork for developing health-specific alignment solutions.
arXiv Detail & Related papers (2025-02-19T14:38:57Z)
- From No to Know: Taxonomy, Challenges, and Opportunities for Negation Understanding in Multimodal Foundation Models [48.68342037881584]
Negation, a linguistic construct conveying absence, denial, or contradiction, poses significant challenges for multilingual multimodal foundation models. We propose a comprehensive taxonomy of negation constructs, illustrating how structural, semantic, and cultural factors influence multimodal foundation models. We advocate for specialized benchmarks, language-specific tokenization, fine-grained attention mechanisms, and advanced multimodal architectures.
arXiv Detail & Related papers (2025-02-10T16:55:13Z)
- Towards a Universal 3D Medical Multi-modality Generalization via Learning Personalized Invariant Representation [35.5423842780382]
Existing methods often concentrate exclusively on common anatomical patterns, neglecting individual differences. We propose a two-stage approach: pre-training with an invariant representation $\mathbb{X}_h$ for personalization, then fine-tuning for diverse downstream tasks. Our approach yields greater generalizability and transferability across diverse multi-modal medical tasks compared to methods lacking personalization.
arXiv Detail & Related papers (2024-11-09T08:00:50Z)
- Towards Building Multilingual Language Model for Medicine [54.1382395897071]
We construct a multilingual medical corpus, containing approximately 25.5B tokens encompassing 6 main languages.
We propose a multilingual medical multi-choice question-answering benchmark with rationale, termed as MMedBench.
Our final model, MMed-Llama 3, with only 8B parameters, achieves superior performance compared to all other open-source models on both MMedBench and English benchmarks.
arXiv Detail & Related papers (2024-02-21T17:47:20Z)
- A Roadmap to Pluralistic Alignment [49.29107308098236]
We propose a roadmap to pluralistic alignment, specifically using language models as a test bed.
We identify and formalize three possible ways to define and operationalize pluralism in AI systems.
We argue that current alignment techniques may be fundamentally limited for pluralistic AI.
arXiv Detail & Related papers (2024-02-07T18:21:17Z)
- Stable Bias: Analyzing Societal Representations in Diffusion Models [72.27121528451528]
We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
arXiv Detail & Related papers (2023-03-20T19:32:49Z)
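The Stable Bias summary above describes characterizing image variation by enumerating gender and ethnicity markers in prompts. The following is a minimal sketch of that enumeration step only; the marker lists, profession list, and prompt template are illustrative placeholders, not the ones used in the paper.

```python
# Sketch of controlled prompt enumeration for bias analysis: build one
# prompt per (profession, gender marker, ethnicity marker) combination so
# that variation in generated images can be attributed to the markers.
# All marker lists and the template are hypothetical examples.

from itertools import product

GENDER_MARKERS = ["woman", "man", "non-binary person"]
ETHNICITY_MARKERS = ["Black", "East Asian", "Hispanic", "White"]
PROFESSIONS = ["doctor", "teacher"]


def enumerate_prompts(professions, genders, ethnicities):
    """Cross every marker with every profession to get a full prompt grid."""
    return [
        f"a photo of a {e} {g} working as a {p}"
        for p, g, e in product(professions, genders, ethnicities)
    ]


prompts = enumerate_prompts(PROFESSIONS, GENDER_MARKERS, ETHNICITY_MARKERS)
print(len(prompts))  # 2 professions x 3 genders x 4 ethnicities = 24
```

Each prompt in the grid would then be sent to a text-to-image system, and the resulting images compared across marker values to quantify representational skew.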
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.