Data over dialogue: Why artificial intelligence is unlikely to humanise medicine
- URL: http://arxiv.org/abs/2504.07763v1
- Date: Thu, 10 Apr 2025 14:03:40 GMT
- Title: Data over dialogue: Why artificial intelligence is unlikely to humanise medicine
- Authors: Joshua Hatherley
- Abstract summary: I argue that medical ML systems are more likely to negatively impact these relationships than to improve them. In particular, I argue that the use of medical ML systems is likely to compromise the quality of trust, care, empathy, understanding, and communication between clinicians and patients.
- Score: 1.6317061277457001
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, a growing number of experts in artificial intelligence (AI) and medicine have begun to suggest that the use of AI systems, particularly machine learning (ML) systems, is likely to humanise the practice of medicine by substantially improving the quality of clinician-patient relationships. In this thesis, however, I argue that medical ML systems are more likely to negatively impact these relationships than to improve them. In particular, I argue that the use of medical ML systems is likely to compromise the quality of trust, care, empathy, understanding, and communication between clinicians and patients.
Related papers
- TAMA: A Human-AI Collaborative Thematic Analysis Framework Using Multi-Agent LLMs for Clinical Interviews [54.35097932763878]
Thematic analysis (TA) is a widely used qualitative approach for uncovering latent meanings in unstructured text data. Here, we propose TAMA: A Human-AI Collaborative Thematic Analysis framework using Multi-Agent LLMs for clinical interviews. We demonstrate that TAMA outperforms existing LLM-assisted TA approaches, achieving higher thematic hit rate, coverage, and distinctiveness.
arXiv Detail & Related papers (2025-03-26T15:58:16Z) - Limits of trust in medical AI [1.6317061277457001]
AI systems can be relied upon, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely upon AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.
arXiv Detail & Related papers (2025-03-20T20:22:38Z) - Towards Next-Generation Medical Agent: How o1 is Reshaping Decision-Making in Medical Scenarios [46.729092855387165]
We study the choice of the backbone LLM for medical AI agents, which is the foundation for the agent's overall reasoning and action generation. Our findings demonstrate o1's ability to enhance diagnostic accuracy and consistency, paving the way for smarter, more responsive AI tools.
arXiv Detail & Related papers (2024-11-16T18:19:53Z) - Safety challenges of AI in medicine in the era of large language models [23.817939398729955]
Large language models (LLMs) offer new opportunities for medical practitioners, patients, and researchers. As AI and LLMs become more powerful and especially achieve superhuman performance in some medical tasks, public concerns over their safety have intensified. This review examines emerging risks in AI utilization during the LLM era.
arXiv Detail & Related papers (2024-09-11T13:47:47Z) - The doctor will polygraph you now: ethical concerns with AI for fact-checking patients [0.23248585800296404]
Artificial intelligence (AI) methods have been proposed for the prediction of social behaviors.
This raises novel ethical concerns about respect, privacy, and control over patient data.
arXiv Detail & Related papers (2024-08-15T02:55:30Z) - Trustworthy and Practical AI for Healthcare: A Guided Deferral System with Large Language Models [1.2281181385434294]
Large language models (LLMs) offer a valuable technology for various applications in healthcare. Their tendency to hallucinate and the existing reliance on proprietary systems pose challenges in settings involving critical decision-making. This paper presents a novel HAIC guided deferral system that can simultaneously parse medical reports for disorder classification and defer uncertain predictions, with intelligent guidance, to humans.
arXiv Detail & Related papers (2024-06-11T12:41:54Z) - AI Hospital: Benchmarking Large Language Models in a Multi-agent Medical Interaction Simulator [69.51568871044454]
We introduce AI Hospital, a framework simulating dynamic medical interactions between Doctor as player and NPCs.
This setup allows for realistic assessments of LLMs in clinical scenarios.
We develop the Multi-View Medical Evaluation benchmark, utilizing high-quality Chinese medical records and NPCs.
arXiv Detail & Related papers (2024-02-15T06:46:48Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R, and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - Explainable AI applications in the Medical Domain: a systematic review [1.4419517737536707]
The field of Medical AI faces various challenges in building user trust, complying with regulations, and using data ethically.
This paper presents a literature review on the recent developments of XAI solutions for medical decision support, based on a representative sample of 198 articles published in recent years.
arXiv Detail & Related papers (2023-08-10T08:12:17Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as part of the fairness assessment of clinical ML systems.
arXiv Detail & Related papers (2022-07-21T09:35:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.