Generalization of Artificial Intelligence Models in Medical Imaging: A
Case-Based Review
- URL: http://arxiv.org/abs/2211.13230v1
- Date: Tue, 15 Nov 2022 10:09:51 GMT
- Title: Generalization of Artificial Intelligence Models in Medical Imaging: A
Case-Based Review
- Authors: Rishi Gadepally, Andrew Gomella, Eric Gingold, Paras Lakhani
- Abstract summary: It is important for practicing radiologists to understand the pitfalls of various AI algorithms.
Use of AI should be preceded by a fundamental understanding of the risks and benefits to those it is intended to help.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The discussions around Artificial Intelligence (AI) and medical imaging are
centered around the success of deep learning algorithms. As new algorithms
enter the market, it is important for practicing radiologists to understand the
pitfalls of various AI algorithms. This entails having a basic understanding of
how algorithms are developed, the kind of data they are trained on, and the
settings in which they will be deployed. As with all new technologies, use of
AI should be preceded by a fundamental understanding of the risks and benefits
to those it is intended to help. This case-based review is intended to point
out specific factors practicing radiologists who intend to use AI should
consider.
Related papers
- The Limits of Fair Medical Imaging AI In The Wild [43.97266228706059]
We investigate the extent to which medical AI utilizes demographic encodings.
We confirm that medical imaging AI leverages demographic shortcuts in disease classification.
We find that models with less encoding of demographic attributes are often the most "globally optimal".
arXiv Detail & Related papers (2023-12-11T18:59:50Z)
- A Brief Review of Explainable Artificial Intelligence in Healthcare [7.844015105790313]
XAI refers to techniques and methods for building AI applications whose decisions humans can understand.
Model explainability and interpretability are vital for the successful deployment of AI models in healthcare practice.
arXiv Detail & Related papers (2023-04-04T05:41:57Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is broad consensus on the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- On Explainability in AI-Solutions: A Cross-Domain Survey [4.394025678691688]
In automatically deriving a system model, AI algorithms learn relations in data that are not detectable by humans.
The more complex a model, the more difficult it is for a human to understand the reasoning for the decisions.
This work provides an extensive survey of literature on this topic, which, to a large part, consists of other surveys.
arXiv Detail & Related papers (2022-10-11T06:21:47Z)
- Diagnosis of Paratuberculosis in Histopathological Images Based on Explainable Artificial Intelligence and Deep Learning [0.0]
This study examines a new and original dataset using the deep learning algorithm, and visualizes the output with gradient-weighted class activation mapping (Grad-CAM)
Both the decision-making processes and the explanations were verified, and the accuracy of the output was tested.
The results can substantially assist pathologists in the diagnosis of paratuberculosis.
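The Grad-CAM technique mentioned above can be sketched compactly: given a convolutional layer's activations and the gradients of the target class score with respect to them, the heatmap is a ReLU of a gradient-weighted channel sum. The following is a minimal NumPy illustration of that computation only, not the study's actual pipeline; the function name and toy arrays are hypothetical.

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heatmap.

    feature_maps, gradients: arrays of shape (C, H, W) holding a conv
    layer's activations and the gradients of the class score w.r.t. them.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Global-average-pool the gradients to get one importance weight per channel.
    weights = gradients.mean(axis=(1, 2))                    # shape (C,)
    # Weighted sum over channels, then ReLU to keep positive evidence only.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalize for visualization.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 4 channels of 8x8 activations with random gradients.
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8))
grads = rng.random((4, 8, 8))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (8, 8)
```

In practice the heatmap would be upsampled to the input image size and overlaid on the histopathological slide.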
arXiv Detail & Related papers (2022-08-02T18:05:26Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Deep Algorithm Unrolling for Biomedical Imaging [99.73317152134028]
In this chapter, we review biomedical applications and breakthroughs via leveraging algorithm unrolling.
We trace the origin of algorithm unrolling and provide a comprehensive tutorial on how to unroll iterative algorithms into deep networks.
We conclude the chapter by discussing open challenges, and suggesting future research directions.
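Algorithm unrolling treats each iteration of a classical solver as one layer of a network. A standard textbook example (not drawn from the chapter itself) is unrolling ISTA for sparse coding; the sketch below, with hypothetical names, runs a fixed number of tied-weight iterations, which learned variants such as LISTA would instead train per layer.

```python
import numpy as np

def soft_threshold(x, theta):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(D, y, n_layers=10, lam=0.1):
    """Run a fixed number of ISTA iterations, viewed as the layers of an
    unrolled network. Approximately solves
        min_x 0.5 * ||y - D @ x||^2 + lam * ||x||_1
    """
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the data-fit gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_layers):            # each iteration = one "layer"
        x = soft_threshold(x + (D.T @ (y - D @ x)) / L, lam / L)
    return x

# Toy problem: recover a sparse code from a random dictionary.
rng = np.random.default_rng(1)
D = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]   # sparse ground truth
y = D @ x_true
x_hat = unrolled_ista(D, y, n_layers=50, lam=0.05)
print(x_hat.shape)  # (50,)
```

The appeal of unrolling is that the fixed iteration count gives a finite, trainable computation graph while inheriting the structure of the original optimization algorithm.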
arXiv Detail & Related papers (2021-08-15T01:06:26Z)
- Explainable AI For COVID-19 CT Classifiers: An Initial Comparison Study [3.4031539425106683]
Explainable AI (XAI) is key to opening the black box of deep learning.
Chest CT has emerged as a valuable tool for the clinical diagnosis and treatment management of lung diseases associated with COVID-19.
The aim of this study is to propose and develop XAI strategies for COVID-19 classification models and to compare them.
arXiv Detail & Related papers (2021-04-25T23:39:14Z)
- Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey [2.7086321720578623]
The black-box nature of deep neural networks challenges their use in mission-critical applications.
XAI promotes a set of tools, techniques, and algorithms that can generate high-quality interpretable, intuitive, human-understandable explanations of AI decisions.
arXiv Detail & Related papers (2020-06-16T02:58:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.