Diagnosis of Acute Poisoning Using Explainable Artificial Intelligence
- URL: http://arxiv.org/abs/2102.01116v1
- Date: Mon, 1 Feb 2021 19:16:59 GMT
- Title: Diagnosis of Acute Poisoning Using Explainable Artificial Intelligence
- Authors: Michael Chary, Ed W Boyer, Michele M Burns
- Abstract summary: We construct a probabilistic logic network to represent a portion of the knowledge base of a medical toxicologist.
The software, dubbed Tak, performs comparably to humans on straightforward cases and intermediate difficulty cases, but is outperformed by humans on challenging clinical cases.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Medical toxicology is the clinical specialty that treats the toxic effects of
substances, be it an overdose, a medication error, or a scorpion sting. The
volume of toxicological knowledge and research has, as with other medical
specialties, outstripped the ability of the individual clinician to entirely
master and stay current with it. The application of machine learning techniques
to medical toxicology is challenging because initial treatment decisions are
often based on a few pieces of textual data and rely heavily on prior
knowledge. ML techniques often do not represent knowledge in a way that is
transparent for the physician, raising barriers to usability. Rule-based
systems and decision tree learning are more transparent approaches, but often
generalize poorly and require expert curation to implement and maintain. Here,
we construct a probabilistic logic network to represent a portion of the
knowledge base of a medical toxicologist. Our approach transparently mimics the
knowledge representation and clinical decision-making of practicing clinicians.
The software, dubbed Tak, performs comparably to humans on straightforward
cases and intermediate difficulty cases, but is outperformed by humans on
challenging clinical cases. Tak outperforms a decision tree classifier at all
levels of difficulty. Probabilistic logic provides one form of explainable
artificial intelligence that may be more acceptable for use in healthcare, if
it can achieve acceptable levels of performance.
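The abstract's core idea, weighted rules that mirror how a toxicologist maps findings to toxidromes, can be sketched as a tiny rule base scored by log-likelihood. This is an illustrative sketch only, not the authors' Tak implementation: the rule names, probabilities, and the simple scoring scheme below are all assumptions.

```python
# Minimal sketch of probabilistic diagnostic rules (NOT the paper's Tak
# system): each rule gives P(finding | toxidrome), and a case is scored
# by summing the log-likelihoods of its observed findings.
import math

# Hypothetical rule base; probabilities are illustrative, not clinical.
RULES = {
    "anticholinergic": {"mydriasis": 0.9, "dry_skin": 0.8, "tachycardia": 0.7},
    "cholinergic":     {"miosis": 0.9, "diaphoresis": 0.8, "bradycardia": 0.6},
    "opioid":          {"miosis": 0.9, "respiratory_depression": 0.9},
}
BASELINE = 0.05  # probability of a finding the toxidrome does not explain

def score(findings):
    """Return toxidromes ranked by summed log-likelihood of the findings."""
    results = {}
    for toxidrome, table in RULES.items():
        results[toxidrome] = sum(
            math.log(table.get(f, BASELINE)) for f in findings
        )
    return sorted(results, key=results.get, reverse=True)

print(score(["miosis", "respiratory_depression"]))  # opioid ranks first
```

Because each rule is inspectable, a clinician can trace exactly which findings drove a ranking, which is the transparency property the abstract emphasizes.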
Related papers
- TrialBench: Multi-Modal Artificial Intelligence-Ready Clinical Trial Datasets
This paper presents meticulously curated AI-ready datasets covering multi-modal data (e.g., drug molecule, disease code, text, categorical/numerical features) and 8 crucial prediction challenges in clinical trial design.
We provide basic validation methods for each task to ensure the datasets' usability and reliability.
We anticipate that the availability of such open-access datasets will catalyze the development of advanced AI approaches for clinical trial design.
arXiv Detail & Related papers (2024-06-30T09:13:10Z) - Deriving Hematological Disease Classes Using Fuzzy Logic and Expert Knowledge: A Comprehensive Machine Learning Approach with CBC Parameters
This paper introduces a novel approach by leveraging Fuzzy Logic Rules to derive disease classes based on expert domain knowledge.
We harness Fuzzy Logic Rules, a computational technique celebrated for its ability to handle ambiguity.
Preliminary results showcase high accuracy levels, underscoring the advantages of integrating fuzzy logic into the diagnostic process.
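A fuzzy logic rule over a complete blood count (CBC) parameter can be sketched with triangular membership functions. The thresholds and the Mamdani-style min-combination below are illustrative assumptions, not the paper's actual rule base or clinical reference ranges.

```python
# Hedged sketch of one fuzzy rule over CBC parameters; all thresholds
# are illustrative, not clinical reference ranges.
def tri(x, a, b, c):
    """Triangular membership: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def hb_low(hb):
    """Membership of hemoglobin (g/dL) in the fuzzy set 'low'."""
    if hb <= 8.0:
        return 1.0
    if hb >= 12.0:
        return 0.0
    return (12.0 - hb) / 4.0  # linear ramp between 8 and 12 g/dL

# Fuzzy rule: IF hb is low AND mcv is low THEN microcytic_anemia,
# with rule strength = min of the antecedent memberships (Mamdani-style).
def microcytic_anemia(hb, mcv):
    mcv_low = tri(mcv, 60.0, 70.0, 80.0)  # fL, illustrative range
    return min(hb_low(hb), mcv_low)

print(microcytic_anemia(10.0, 70.0))  # 0.5 = min(0.5, 1.0)
```

The graded membership (0.5 rather than a hard yes/no) is what lets fuzzy rules "handle ambiguity" as the abstract describes.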
arXiv Detail & Related papers (2024-06-18T19:16:32Z) - The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI
Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
Because many medical AI systems operate as "black boxes," with their reasoning obscured and inaccessible, there is an increased risk of misdiagnosis.
This shift towards transparency is not just beneficial -- it's a critical step towards responsible AI integration in healthcare.
arXiv Detail & Related papers (2024-03-23T02:15:23Z) - Defining Expertise: Applications to Treatment Effect Estimation
We argue that expertise - particularly the type of expertise the decision-makers of a domain are likely to have - can be informative in designing and selecting methods for treatment effect estimation.
We define two types of expertise, predictive and prognostic, and demonstrate empirically that: (i) the prominent type of expertise in a domain significantly influences the performance of different methods in treatment effect estimation, and (ii) it is possible to predict the type of expertise present in a dataset.
arXiv Detail & Related papers (2024-03-01T17:30:49Z) - Informing clinical assessment by contextualizing post-hoc explanations
of risk prediction models in type-2 diabetes
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z) - Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
arXiv Detail & Related papers (2022-07-21T09:35:38Z) - NeuralSympCheck: A Symptom Checking and Disease Diagnostic Neural Model
with Logic Regularization
Symptom checking systems ask users about their symptoms and perform a rapid and affordable medical assessment of their condition.
We propose a new approach based on the supervised learning of neural models with logic regularization.
Our experiments show that the proposed approach outperforms the best existing methods in the accuracy of diagnosis when the number of diagnoses and symptoms is large.
arXiv Detail & Related papers (2022-06-02T07:57:17Z) - Transparency of Deep Neural Networks for Medical Image Analysis: A
Review of Interpretability Methods
Deep neural networks have shown performance equal to or better than that of clinicians in many tasks.
Current deep neural solutions are referred to as black-boxes due to a lack of understanding of the specifics concerning the decision making process.
There is a need to ensure interpretability of deep neural networks before they can be incorporated in the routine clinical workflow.
arXiv Detail & Related papers (2021-11-01T01:42:26Z) - In-Line Image Transformations for Imbalanced, Multiclass Computer Vision
Classification of Lung Chest X-Rays
This study aims to leverage a body of literature in order to apply image transformations that would serve to balance the lack of COVID-19 LCXR data.
Deep learning techniques such as convolutional neural networks (CNNs) are able to select features that distinguish between healthy and disease states.
This study utilizes a simple CNN architecture for high-performance multiclass LCXR classification at 94 percent accuracy.
arXiv Detail & Related papers (2021-04-06T02:01:43Z) - Decision Support for Intoxication Prediction Using Graph Convolutional
Networks
We propose a new machine learning based CADx method which fuses symptoms and meta information of the patients using graph convolutional networks.
We validate our method against 10 medical doctors with different experience diagnosing intoxication cases for 10 different toxins from the PCC in Munich.
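The core operation of a graph convolutional network, mixing node features (here, symptom encodings) over a graph via the normalized adjacency matrix, can be sketched in a few lines. This is a generic single GCN layer under illustrative data, not the paper's CADx architecture; the graph, feature sizes, and weights below are assumptions.

```python
# Sketch of one graph-convolution step (symmetric-normalized form),
# not the paper's CADx system: node features are averaged over graph
# neighbors before a learned linear map and nonlinearity.
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: ReLU(D^-1/2 (A + I) D^-1/2 @ H @ W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.],                  # 3-node patient/symptom graph
              [1., 0., 1.],
              [0., 1., 0.]])
H = rng.standard_normal((3, 4))              # 4-dim features per node
W = rng.standard_normal((4, 2))              # weights (random stand-in)
print(gcn_layer(A, H, W).shape)              # (3, 2)
```

Stacking such layers lets information from a patient's graph neighborhood (related symptoms, meta information) inform each node's representation, which is the fusion mechanism the summary refers to.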
arXiv Detail & Related papers (2020-05-02T14:20:32Z) - Learning medical triage from clinicians using Deep Q-Learning
We present a Deep Reinforcement Learning approach to triage patients using curated clinical vignettes.
The dataset, consisting of 1374 clinical vignettes, was created by medical doctors to represent real-life cases.
We show that this approach is on a par with human performance, yielding safe triage decisions in 94% of cases, and matching expert decisions in 85% of cases.
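The learning rule behind this approach is the Q-learning Bellman backup; the deep variant replaces the lookup table with a neural network over vignette features. The tabular sketch below is illustrative only, and the state/action names are invented for the example.

```python
# Minimal tabular Q-learning update as a stand-in for the paper's deep
# variant; vignette and action names are illustrative, not from the paper.
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9          # learning rate, discount factor
Q = defaultdict(float)           # (state, action) -> estimated value

def q_update(state, action, reward, next_state, actions):
    """One Bellman backup: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

ACTIONS = ["self_care", "gp", "urgent", "emergency"]  # triage levels
q_update("chest_pain_vignette", "emergency", 1.0, "terminal", ACTIONS)
print(Q[("chest_pain_vignette", "emergency")])  # 0.1 after one update
```

Rewarding safe, expert-matching triage decisions and repeating this update over the 1374 vignettes is, in outline, how such an agent is trained.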
arXiv Detail & Related papers (2020-03-28T16:07:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.