The Ethical Implications of Shared Medical Decision Making without
Providing Adequate Computational Support to the Care Provider and to the
Patient
- URL: http://arxiv.org/abs/2102.01811v1
- Date: Wed, 3 Feb 2021 00:30:21 GMT
- Title: The Ethical Implications of Shared Medical Decision Making without
Providing Adequate Computational Support to the Care Provider and to the
Patient
- Authors: Yuval Shahar
- Abstract summary: There is a clear need to involve patients in medical decisions.
Recent progress in Artificial Intelligence suggests adding a third agent: a computer.
Ethical physicians should exploit computational decision support technologies.
- Score: 1.52292571922932
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: There is a clear need to involve patients in medical decisions. However,
cognitive psychological research has highlighted the cognitive limitations of
humans with respect to (1) probabilistic assessment of the patient state and of
the potential outcomes of various decisions, (2) elicitation of the patient's
utility function, and (3) integration of the probabilistic knowledge and of the
patient's preferences to determine the optimal strategy. Therefore, without adequate
computational support, current shared decision models have severe ethical
deficiencies. An informed consent model unfairly transfers the responsibility
to a patient who has neither the necessary knowledge nor the integration
capability. A paternalistic model endows with exaggerated power a physician who
might not be aware of the patient's preferences, is prone to multiple cognitive
biases, and whose computational integration capability is bounded. Recent
progress in Artificial Intelligence suggests adding a third agent, a computer,
to all deliberative medical decisions: non-emergency medical decisions in which
more than one alternative exists, the patient's preferences can be elicited, the
therapeutic alternatives might be influenced by these preferences, medical
knowledge exists regarding the likelihood of the decision outcomes, and there
is sufficient decision time. Ethical physicians should exploit computational
decision support technologies, neither making the decisions solely on their
own, nor shirking their duty and shifting the responsibility to patients in the
name of informed consent. The resulting three-way (patient, care provider,
computer) human-machine model that we suggest emphasizes the patient's
preferences, the physician's knowledge, and the computational integration of both
aspects; it does not diminish the physician's role, but rather brings out the best
in human and machine.
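The three computational tasks the abstract names (probabilistic assessment of outcomes, elicitation of the patient's utility function, and integration of the two to find the optimal strategy) amount to a standard expected-utility calculation. The following is a minimal illustrative sketch of that integration step; all alternative names, outcome labels, probabilities, and utility values are hypothetical placeholders, not clinical data.

```python
def expected_utility(outcome_probs, utilities):
    """Integrate outcome probabilities with elicited patient utilities
    to score a single therapeutic alternative."""
    return sum(p * utilities[outcome] for outcome, p in outcome_probs.items())

# Hypothetical elicited patient utilities for each possible outcome (0 to 1).
utilities = {"full_recovery": 1.0, "partial_recovery": 0.6, "complication": 0.1}

# Hypothetical outcome probabilities for two therapeutic alternatives,
# as might be supplied by medical knowledge or a predictive model.
alternatives = {
    "surgery":    {"full_recovery": 0.70, "partial_recovery": 0.15, "complication": 0.15},
    "medication": {"full_recovery": 0.50, "partial_recovery": 0.45, "complication": 0.05},
}

# Integration: score every alternative and recommend the one that
# maximizes the patient's expected utility.
scores = {name: expected_utility(probs, utilities)
          for name, probs in alternatives.items()}
best = max(scores, key=scores.get)
```

In this toy setting the computer performs only the integration that the abstract argues humans do poorly; the probabilities come from medical knowledge and the utilities from the patient, so neither human role is displaced.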
Related papers
- Which Client is Reliable?: A Reliable and Personalized Prompt-based Federated Learning for Medical Image Question Answering [51.26412822853409]
We present a novel personalized federated learning (pFL) method for medical visual question answering (VQA) models.
Our method introduces learnable prompts into a Transformer architecture to efficiently train it on diverse medical datasets without massive computational costs.
arXiv Detail & Related papers (2024-10-23T00:31:17Z)
- The doctor will polygraph you now: ethical concerns with AI for fact-checking patients [0.0]
Clinical artificial intelligence (AI) methods have been proposed for predicting social behaviors which could be reasonably understood from patient-reported data.
This raises ethical concerns about respect, privacy, and patient awareness/control over how their health data is used.
arXiv Detail & Related papers (2024-08-15T02:55:30Z)
- The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI [0.0]
Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
Because these models operate as "black boxes," with their reasoning obscured and inaccessible, there is an increased risk of misdiagnosis.
This shift towards transparency is not just beneficial -- it's a critical step towards responsible AI integration in healthcare.
arXiv Detail & Related papers (2024-03-23T02:15:23Z)
- Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning [72.80902932543474]
Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging.
We propose a data-driven representation of decision-making behavior that inheres transparency by design, accommodates partial observability, and operates completely offline.
arXiv Detail & Related papers (2023-10-28T13:06:14Z)
- Understanding how the use of AI decision support tools affect critical thinking and over-reliance on technology by drug dispensers in Tanzania [0.0]
Drug shop dispensers used AI-powered technologies when determining a differential diagnosis for a presented clinical case vignette.
We found that dispensers relied on the decision made by the AI 25 percent of the time, even when the AI provided no explanation for its decision.
arXiv Detail & Related papers (2023-02-19T05:59:06Z)
- Optimal discharge of patients from intensive care via a data-driven policy learning framework [58.720142291102135]
It is important that the patient discharge task addresses the nuanced trade-off between decreasing a patient's length of stay and the risk of readmission or even death following the discharge decision.
This work introduces an end-to-end general framework for capturing this trade-off to recommend optimal discharge timing decisions.
A data-driven approach is used to derive a parsimonious, discrete state space representation that captures a patient's physiological condition.
arXiv Detail & Related papers (2021-12-17T04:39:33Z)
- Semi-Supervised Variational Reasoning for Medical Dialogue Generation [70.838542865384]
Two key characteristics are relevant for medical dialogue generation: patient states and physician actions.
We propose an end-to-end variational reasoning approach to medical dialogue generation.
A physician policy network composed of an action-classifier and two reasoning detectors is proposed for augmented reasoning ability.
arXiv Detail & Related papers (2021-05-13T04:14:35Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
- Indecision Modeling [50.00689136829134]
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
- Using Deep Learning and Explainable Artificial Intelligence in Patients' Choices of Hospital Levels [10.985001960872264]
This study used nationwide insurance data, accumulated possible features discussed in the existing literature, and trained a deep neural network to predict patients' choices of hospital levels.
The results showed that the model predicted with a high area under the receiver operating characteristic curve (AUC) (0.90), accuracy (0.90), sensitivity (0.94), and specificity (0.97) despite highly imbalanced labels.
arXiv Detail & Related papers (2020-06-24T02:15:15Z)
- Learning medical triage from clinicians using Deep Q-Learning [0.3111424566471944]
We present a Deep Reinforcement Learning approach to triage patients using curated clinical vignettes.
The dataset, consisting of 1374 clinical vignettes, was created by medical doctors to represent real-life cases.
We show that this approach is on a par with human performance, yielding safe triage decisions in 94% of cases, and matching expert decisions in 85% of cases.
arXiv Detail & Related papers (2020-03-28T16:07:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.