Limits of trust in medical AI
- URL: http://arxiv.org/abs/2503.16692v2
- Date: Thu, 03 Apr 2025 13:03:18 GMT
- Title: Limits of trust in medical AI
- Authors: Joshua Hatherley
- Abstract summary: AI systems can be relied upon, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely upon AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.
- Score: 1.6317061277457001
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence (AI) is expected to revolutionize the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI's progress in medicine, however, has led to concerns regarding the potential effects of this technology upon relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI systems can be relied upon, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely upon AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.
Related papers
- Data over dialogue: Why artificial intelligence is unlikely to humanise medicine [1.6317061277457001]
I argue that medical ML systems are more likely to negatively impact clinician-patient relationships than to improve them.
In particular, I argue that the use of medical ML systems is likely to compromise the quality of trust, care, empathy, understanding, and communication between clinicians and patients.
arXiv Detail & Related papers (2025-04-10T14:03:40Z)
- Which Client is Reliable?: A Reliable and Personalized Prompt-based Federated Learning for Medical Image Question Answering [51.26412822853409]
We present a novel personalized federated learning (pFL) method for medical visual question answering (VQA) models.
Our method introduces learnable prompts into a Transformer architecture to efficiently train it on diverse medical datasets without massive computational costs.
arXiv Detail & Related papers (2024-10-23T00:31:17Z)
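The entry above describes its method only at a high level. As a rough illustration of the general idea it names, learnable prompts on a frozen Transformer, here is a minimal PyTorch sketch; the class name PromptedEncoder and the parameter n_prompts are hypothetical and not taken from the paper, and the federated aggregation step is only indicated in a comment.

```python
# Minimal sketch of prompt tuning, assuming a PyTorch encoder built with
# batch_first=True. Names (PromptedEncoder, n_prompts) are illustrative.
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    def __init__(self, encoder: nn.TransformerEncoder, d_model: int, n_prompts: int = 8):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # the shared backbone stays frozen
        # The only trainable state: a small prompt matrix (e.g. per client).
        self.prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq, d_model); prepend the learned prompts to each sequence
        prompts = self.prompts.unsqueeze(0).expand(tokens.size(0), -1, -1)
        return self.encoder(torch.cat([prompts, tokens], dim=1))

layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
model = PromptedEncoder(nn.TransformerEncoder(layer, num_layers=2), d_model=256)
out = model(torch.randn(4, 16, 256))  # -> (4, 8 + 16, 256)
# In a federated setting, only model.prompts would be exchanged per round,
# which keeps communication and local compute small.
```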
- Contrasting Attitudes Towards Current and Future AI Applications for Computerised Interpretation of ECG: A Clinical Stakeholder Interview Study [2.570550251482137]
We conducted a series of interviews with clinicians in the UK.
Our study explores the potential for AI, specifically future 'human-like' computing, in ECG interpretation.
arXiv Detail & Related papers (2024-10-22T10:31:23Z)
- The Importance of Justified Patient Trust in unlocking AI's potential in mental healthcare [0.0]
Without trust, patients may hesitate to engage with AI systems.
This paper focuses on the trust that mental health patients, as direct users, must have in AI systems.
arXiv Detail & Related papers (2024-10-14T07:50:10Z)
- Leveraging Generative AI for Clinical Evidence Summarization Needs to Ensure Trustworthiness [47.51360338851017]
Evidence-based medicine promises to improve the quality of healthcare by empowering medical decisions and practices with the best available evidence.
The rapid growth of medical evidence, which can be obtained from various sources, poses a challenge in collecting, appraising, and synthesizing the evidential information.
Recent advancements in generative AI, exemplified by large language models, hold promise for facilitating this arduous task.
arXiv Detail & Related papers (2023-11-19T03:29:45Z)
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare [73.78776682247187]
Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
arXiv Detail & Related papers (2023-08-11T10:49:05Z)
- A Conceptual Algorithm for Applying Ethical Principles of AI to Medical Practice [5.005928809654619]
AI-powered tools are increasingly matching or exceeding specialist-level performance across multiple domains. These systems promise to reduce disparities in care delivery across demographic, racial, and socioeconomic boundaries. The democratization of such AI tools can reduce the cost of care, optimize resource allocation, and improve the quality of care.
arXiv Detail & Related papers (2023-04-23T04:14:18Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow AI system predictions to be examined and tested, establishing a basis for trust in the systems' decision-making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
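The entry above leaves open what "explicit quantifications of user confidence" might look like in practice. One standard ingredient is probability calibration, so that a reported confidence of 0.8 corresponds to roughly 80% accuracy. The scikit-learn sketch below uses synthetic data and is only a hypothetical illustration, not the cited paper's method.

```python
# A minimal sketch of making model confidence explicit via calibration.
# All data below is synthetic; nothing here comes from the cited paper.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = (X @ rng.normal(size=5) + rng.normal(scale=0.8, size=1000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Sigmoid (Platt) calibration wrapped around a base classifier.
clf = CalibratedClassifierCV(LogisticRegression(), method="sigmoid", cv=5)
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
# Report each recommendation together with its calibrated confidence, so a
# user can examine when the system is trustworthy enough to follow.
for p in proba[:5]:
    label = "positive" if p >= 0.5 else "negative"
    print(f"recommendation: {label}  (confidence {max(p, 1 - p):.0%})")
```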
- Explainable AI meets Healthcare: A Study on Heart Disease Dataset [0.0]
The aim is to inform practitioners about the understandability and interpretability of explainable AI systems using a variety of techniques.
Our paper presents examples based on the heart disease dataset and discusses how explainability techniques should be chosen to build trustworthiness.
arXiv Detail & Related papers (2020-11-06T05:18:43Z)
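The entry above does not spell out which explainability techniques it covers. As a hedged illustration of one common model-agnostic technique, the sketch below computes permutation importance with scikit-learn; the feature names and data are synthetic stand-ins, not the paper's actual heart disease dataset.

```python
# A minimal sketch of permutation importance on heart-disease-style tabular
# data. Feature names and data are placeholders, not the paper's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # stand-in for clinical features
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
features = ["age", "resting_bp", "cholesterol", "max_heart_rate"]  # illustrative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature and measure the drop in accuracy: a model-agnostic
# signal of which inputs the model's predictions actually rely on.
result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```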
- Trust and Medical AI: The challenges we face and the expertise needed to overcome them [15.07989177980542]
Failures of medical AI could have serious consequences for clinical outcomes and the patient experience.
This article describes the major conceptual, technical, and humanistic challenges in medical AI.
It proposes a solution that hinges on the education and accreditation of new expert groups who specialize in the development, verification, and operation of medical AI technologies.
arXiv Detail & Related papers (2020-08-18T04:17:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.