Stakeholder Perspectives on Humanistic Implementation of Computer Perception in Healthcare: A Qualitative Study
- URL: http://arxiv.org/abs/2508.02550v1
- Date: Mon, 04 Aug 2025 16:01:56 GMT
- Title: Stakeholder Perspectives on Humanistic Implementation of Computer Perception in Healthcare: A Qualitative Study
- Authors: Kristin M. Kostick-Quenet, Meghan E. Hurley, Syed Ayaz, John Herrington, Casey Zampella, Julia Parish-Morris, Birkan Tunç, Gabriel Lázaro-Muñoz, J. S. Blumenthal-Barby, Eric A. Storch
- Abstract summary: Digital phenotyping, affective computing and related passive sensing approaches offer unprecedented opportunities to personalize healthcare. These tools provoke concerns about privacy, bias and the erosion of empathic, relationship-centered practice. This study provides the first evidence-based account of key stakeholder perspectives on the integration of CP technologies into patient care.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computer perception (CP) technologies (digital phenotyping, affective computing and related passive sensing approaches) offer unprecedented opportunities to personalize healthcare, but provoke concerns about privacy, bias and the erosion of empathic, relationship-centered practice. A comprehensive understanding of perceived risks, benefits, and implementation challenges from those who design, deploy and experience these tools in real-world settings remains elusive. This study provides the first evidence-based account of key stakeholder perspectives on the relational, technical, and governance challenges raised by the integration of CP technologies into patient care. We conducted in-depth, semi-structured interviews with 102 stakeholders: adolescent patients and their caregivers, frontline clinicians, technology developers, and ethics, legal, policy or philosophy scholars. Transcripts underwent thematic analysis by a multidisciplinary team; reliability was enhanced through double coding and consensus adjudication. Stakeholders articulated seven interlocking concern domains: (1) trustworthiness and data integrity; (2) patient-specific relevance; (3) utility and workflow integration; (4) regulation and governance; (5) privacy and data protection; (6) direct and indirect patient harms; and (7) philosophical critiques of reductionism. To operationalize humanistic safeguards, we propose "personalized roadmaps": co-designed plans that predetermine which metrics will be monitored, how and when feedback is shared, thresholds for clinical action, and procedures for reconciling discrepancies between algorithmic inferences and lived experience. By translating these insights into personalized roadmaps, we offer a practical framework for developers, clinicians and policymakers seeking to harness continuous behavioral data while preserving the humanistic core of care.
Related papers
- Stakeholder Perspectives on Digital Twin Implementation Challenges in Healthcare: Insights from a Provider Digital Twin Case Study [0.0]
This research investigates DT implementation challenges in healthcare by capturing the perspectives of four distinct stakeholders. We conducted semi-structured interviews guided by the updated Consolidated Framework for Implementation Research (CFIR 2.0). We then mapped each stakeholder group's preferences and concerns, revealing a nuanced landscape of converging and diverging perspectives.
arXiv Detail & Related papers (2025-07-31T07:57:48Z) - Ethics by Design: A Lifecycle Framework for Trustworthy AI in Medical Imaging From Transparent Data Governance to Clinically Validated Deployment [0.0]
This study aims to explore the ethical implications of AI in medical imaging. It focuses on five key stages: data collection, data processing, model training, model evaluation, and deployment. An analytical approach was employed to examine the ethical challenges associated with each stage of AI development.
arXiv Detail & Related papers (2025-07-06T05:28:17Z) - Designing AI Tools for Clinical Care Teams to Support Serious Illness Conversations with Older Adults in the Emergency Department [53.52248484568777]
The work contributes empirical understanding of ED-based serious illness conversations and provides design considerations for AI in high-stakes clinical environments. We conducted interviews with two domain experts and nine ED clinical care team members. We characterized a four-phase serious illness conversation workflow (identification, preparation, conduction, documentation) and identified key needs and challenges at each stage. We present design guidelines for AI tools supporting SIC that fit within existing clinical practices.
arXiv Detail & Related papers (2025-05-30T21:15:57Z) - Towards Privacy-aware Mental Health AI Models: Advances, Challenges, and Opportunities [61.633126163190724]
Mental illness is a widespread and debilitating condition with substantial societal and personal costs. Recent advances in Artificial Intelligence (AI) hold great potential for recognizing and addressing conditions such as depression, anxiety disorder, bipolar disorder, schizophrenia, and post-traumatic stress disorder. Privacy concerns, including the risk of sensitive data leakage from datasets and trained models, remain a critical barrier to deploying these AI systems in real-world clinical settings.
arXiv Detail & Related papers (2025-02-01T15:10:02Z) - Artificial Intelligence-Driven Clinical Decision Support Systems [5.010570270212569]
The chapter emphasizes that creating trustworthy AI systems in healthcare requires careful consideration of fairness, explainability, and privacy. The challenge of ensuring equitable healthcare delivery through AI is stressed, with a discussion of methods to identify and mitigate bias in clinical predictive models. The discussion then turns to an analysis of privacy vulnerabilities in medical AI systems, from data leakage in deep learning models to sophisticated attacks against model explanations.
arXiv Detail & Related papers (2025-01-16T16:17:39Z) - Addressing Intersectionality, Explainability, and Ethics in AI-Driven Diagnostics: A Rebuttal and Call for Transdiciplinary Action [0.30693357740321775]
The increasing integration of artificial intelligence into medical diagnostics necessitates a critical examination of its ethical and practical implications. This paper calls for a framework that balances accuracy with fairness, privacy, and inclusivity to ensure AI-driven diagnostics serve diverse populations equitably and ethically.
arXiv Detail & Related papers (2025-01-15T00:00:01Z) - Ethical Challenges and Evolving Strategies in the Integration of Artificial Intelligence into Clinical Practice [1.0301404234578682]
We focus on five critical ethical concerns: justice and fairness, transparency, patient consent and confidentiality, accountability, and patient-centered and equitable care. The paper explores how bias, lack of transparency, and challenges in maintaining patient trust can undermine the effectiveness and fairness of AI applications in healthcare.
arXiv Detail & Related papers (2024-11-18T00:52:22Z) - Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning [72.80902932543474]
Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging.
We propose a data-driven representation of decision-making behavior that inheres transparency by design, accommodates partial observability, and operates completely offline.
arXiv Detail & Related papers (2023-10-28T13:06:14Z) - Safe and Interpretable Estimation of Optimal Treatment Regimes [54.257304443780434]
We operationalize a safe and interpretable framework to identify optimal treatment regimes.
Our findings support personalized treatment strategies based on a patient's medical history and pharmacological features.
arXiv Detail & Related papers (2023-10-23T19:59:10Z) - A Survey on Computer Vision based Human Analysis in the COVID-19 Era [58.79053747159797]
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals.
Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications.
These developments triggered the need for novel and improved computer vision techniques capable of (i) providing support to the prevention measures through an automated analysis of visual data, on the one hand, and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication
arXiv Detail & Related papers (2022-11-07T17:20:39Z) - IAC: A Framework for Enabling Patient Agency in the Use of AI-Enabled Healthcare [1.0878040851638]
We present IAC (Informing, Assessment, and Consent), a framework for evaluating patient response to the introduction of AI-enabled digital technologies in healthcare settings.
The framework is composed of three core principles that guide how healthcare practitioners can inform patients about the use of AI in their healthcare.
We propose that the principles composing this framework can be translated into guidelines that improve practitioner-patient relationships and, concurrently, patient agency regarding the use of AI in healthcare.
arXiv Detail & Related papers (2021-10-29T16:13:15Z) - Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z) - MET: Multimodal Perception of Engagement for Telehealth [52.54282887530756]
We present MET, a learning-based algorithm for perceiving a human's level of engagement from videos.
We release a new dataset, MEDICA, for mental health patient engagement detection.
arXiv Detail & Related papers (2020-11-17T15:18:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.