Developer Insights into Designing AI-Based Computer Perception Tools
- URL: http://arxiv.org/abs/2508.21733v1
- Date: Fri, 29 Aug 2025 16:01:02 GMT
- Title: Developer Insights into Designing AI-Based Computer Perception Tools
- Authors: Maya Guhan, Meghan E. Hurley, Eric A. Storch, John Herrington, Casey Zampella, Julia Parish-Morris, Gabriel Lázaro-Muñoz, Kristin Kostick-Quenet
- Abstract summary: Artificial intelligence (AI)-based computer perception (CP) technologies use mobile sensors to collect behavioral and physiological data for clinical decision-making. Our study presents findings from 20 in-depth interviews with developers of AI-based CP tools.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence (AI)-based computer perception (CP) technologies use mobile sensors to collect behavioral and physiological data for clinical decision-making. These tools can reshape how clinical knowledge is generated and interpreted. However, effective integration of these tools into clinical workflows depends on how developers balance clinical utility with user acceptability and trustworthiness. Our study presents findings from 20 in-depth interviews with developers of AI-based CP tools. Interviews were transcribed, and inductive thematic analysis was performed to identify four key design priorities: 1) account for context and ensure explainability for both patients and clinicians; 2) align tools with existing clinical workflows; 3) customize appropriately to relevant stakeholders for usability and acceptability; and 4) push the boundaries of innovation while aligning with established paradigms. Our findings highlight that developers view themselves not merely as technical architects but also as ethical stewards, designing tools that are both acceptable to users and epistemically responsible (prioritizing objectivity and pushing clinical knowledge forward). We offer the following suggestions to help achieve this balance: documenting how design choices around customization are made, defining limits for customization choices, transparently conveying information about outputs, and investing in user training. Achieving these goals will require interdisciplinary collaboration between developers, clinicians, and ethicists.
Related papers
- Before the Clinic: Transparent and Operable Design Principles for Healthcare AI [42.994619952353396]
We propose two foundational design principles to operationalize pre-clinical technical requirements for healthcare AI.
We ground these principles in established XAI frameworks, map them to documented clinician needs, and demonstrate their alignment with emerging governance requirements.
This pre-clinical playbook provides actionable guidance for development teams, accelerates the path to clinical evaluation, and establishes a shared vocabulary bridging AI researchers, healthcare practitioners, and regulatory stakeholders.
arXiv Detail & Related papers (2025-10-31T04:05:09Z)
- A Comprehensive Review of Datasets for Clinical Mental Health AI Systems [55.67299586253951]
We present the first comprehensive survey of clinical mental health datasets relevant to the training and development of AI-powered clinical assistants.
Our survey identifies critical gaps such as a lack of longitudinal data, limited cultural and linguistic representation, inconsistent collection and annotation standards, and a lack of modalities in synthetic data.
arXiv Detail & Related papers (2025-08-13T13:42:35Z)
- Designing AI Tools for Clinical Care Teams to Support Serious Illness Conversations with Older Adults in the Emergency Department [53.52248484568777]
We conducted interviews with two domain experts and nine ED clinical care team members.
We characterized a four-phase serious illness conversation workflow (identification, preparation, conduction, documentation) and identified key needs and challenges at each stage.
We present design guidelines for AI tools supporting SIC that fit within existing clinical practices.
The work contributes an empirical understanding of ED-based serious illness conversations and provides design considerations for AI in high-stakes clinical environments.
arXiv Detail & Related papers (2025-05-30T21:15:57Z)
- Clinicians don't know what explanations they need: A case study on eliciting AI software explainability requirements [0.0]
This paper analyses how software developers elicit explainability requirements when creating a software application with an AI component.
Following a small software development team at a Norwegian hospital, we observe their process of developing the AI application.
Since clinicians struggled to articulate their explainability needs before interacting with the system, an iterative approach proved effective.
arXiv Detail & Related papers (2025-01-16T15:17:33Z)
- Towards Next-Generation Medical Agent: How o1 is Reshaping Decision-Making in Medical Scenarios [46.729092855387165]
We study the choice of the backbone LLM for medical AI agents, which is the foundation for the agent's overall reasoning and action generation.
Our findings demonstrate o1's ability to enhance diagnostic accuracy and consistency, paving the way for smarter, more responsive AI tools.
arXiv Detail & Related papers (2024-11-16T18:19:53Z)
- Contrasting Attitudes Towards Current and Future AI Applications for Computerised Interpretation of ECG: A Clinical Stakeholder Interview Study [2.570550251482137]
We conducted a series of interviews with clinicians in the UK.
Our study explores the potential for AI, specifically future 'human-like' computing.
arXiv Detail & Related papers (2024-10-22T10:31:23Z)
- "It depends": Configuring AI to Improve Clinical Usefulness Across Contexts [0.0]
This paper explores how to design AI for clinical usefulness in different contexts.
We conducted 19 design sessions with 13 radiologists from 7 clinical sites in Denmark and Kenya.
We conceptualised four technical dimensions that must be configured to the intended clinical context.
arXiv Detail & Related papers (2024-05-27T11:49:05Z)
- Challenges for Responsible AI Design and Workflow Integration in Healthcare: A Case Study of Automatic Feeding Tube Qualification in Radiology [35.284458448940796]
Nasogastric tubes (NGTs) are feeding tubes that are inserted through the nose into the stomach to deliver nutrition or medication.
Recent AI developments demonstrate the feasibility of robustly detecting NGT placement from Chest X-ray images.
We present a human-centered approach to the problem and describe insights derived following contextual inquiry and in-depth interviews with 15 clinical stakeholders.
arXiv Detail & Related papers (2024-05-08T14:16:22Z)
- Clairvoyance: A Pipeline Toolkit for Medical Time Series [95.22483029602921]
Time-series learning is the bread and butter of data-driven *clinical decision support*.
Clairvoyance proposes a unified, end-to-end, autoML-friendly pipeline that serves as a software toolkit.
Clairvoyance is the first to demonstrate viability of a comprehensive and automatable pipeline for clinical time-series ML.
arXiv Detail & Related papers (2023-10-28T12:08:03Z)
- VBridge: Connecting the Dots Between Features, Explanations, and Data for Healthcare Models [85.4333256782337]
VBridge is a visual analytics tool that seamlessly incorporates machine learning explanations into clinicians' decision-making workflow.
We identified three key challenges, including clinicians' unfamiliarity with ML features, lack of contextual information, and the need for cohort-level evidence.
We demonstrated the effectiveness of VBridge through two case studies and expert interviews with four clinicians.
arXiv Detail & Related papers (2021-08-04T17:34:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.