When will the mist clear? On the Interpretability of Machine Learning
for Medical Applications: a survey
- URL: http://arxiv.org/abs/2010.00353v1
- Date: Thu, 1 Oct 2020 12:42:06 GMT
- Title: When will the mist clear? On the Interpretability of Machine Learning
for Medical Applications: a survey
- Authors: Antonio-Jesús Banegas-Luna, Jorge Peña-García, Adrian Iftene,
Fiorella Guadagni, Patrizia Ferroni, Noemi Scarpato, Fabio Massimo Zanzotto,
Andrés Bueno-Crespo, Horacio Pérez-Sánchez
- Abstract summary: We analyse current machine learning models, frameworks, databases and other related tools as applied to medicine.
From the evidence available, ANN, LR and SVM have been observed to be the preferred models.
We discuss their interpretability, performance and the necessary input data.
- Score: 0.056212519098516295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence is providing astonishing results, with medicine being
one of its favourite playgrounds. In a few decades, computers may be capable of
formulating diagnoses and choosing the correct treatment, while robots may
perform surgical operations, and conversational agents could interact with
patients as virtual coaches. Machine Learning and, in particular, Deep Neural
Networks are behind this revolution. In this scenario, important decisions will
be controlled by standalone machines that have learned predictive models from
provided data. Among the most challenging targets of interest in medicine are
cancer diagnosis and therapies but, to start this revolution, software tools
need to be adapted to cover the new requirements. In this sense, learning tools
are becoming a commodity in Python and Matlab libraries, just to name two, but
to exploit all their possibilities, it is essential to fully understand how
models are interpreted and which models are more interpretable than others. In
this survey, we analyse current machine learning models, frameworks, databases
and other related tools as applied to medicine - specifically, to cancer
research - and we discuss their interpretability, performance and the necessary
input data. From the evidence available, ANN, LR and SVM have been observed to
be the preferred models. In addition, CNNs, supported by the rapid development
of GPUs and tensor-oriented programming libraries, are gaining in importance.
However, the interpretability of results by doctors is rarely considered, which
is a factor that needs to be improved. We therefore consider this study to be a
timely contribution to the issue.
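The abstract singles out logistic regression (LR) as one of the preferred models partly because it is inherently interpretable: each learned weight directly states how a feature shifts the predicted log-odds. As a minimal sketch of that idea (not taken from the paper; the "tumour size" feature, the toy data, and the plain gradient-descent trainer are all hypothetical illustrations):

```python
import math
import random

random.seed(0)

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression.

    Returns the weight vector and bias. No regularisation; this is a
    teaching sketch, not a production trainer.
    """
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical toy data: feature 0 ("size") drives the label,
# feature 1 is uncorrelated noise.
X = [[0.1, 0.5], [0.2, 0.1], [0.9, 0.4], [0.8, 0.9], [0.15, 0.8], [0.85, 0.2]]
y = [0, 0, 1, 1, 0, 1]

w, b = train_logreg(X, y)
# The weights themselves are the explanation a clinician could read:
# a large positive weight on "size" means larger size -> higher predicted risk,
# while the noise feature receives a comparatively small weight.
print(f"size weight: {w[0]:.2f}, noise weight: {w[1]:.2f}")
```

This directness is exactly what black-box models such as deep CNNs lack, and why the survey treats interpretability and predictive performance as separate axes when comparing models.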
Related papers
- Medical Vision-Language Pre-Training for Brain Abnormalities [96.1408455065347]
We show how to automatically collect medical image-text aligned data for pretraining from public resources such as PubMed.
In particular, we present a pipeline that streamlines the pre-training process by initially collecting a large brain image-text dataset.
We also investigate the unique challenge of mapping subfigures to subcaptions in the medical domain.
arXiv Detail & Related papers (2024-04-27T05:03:42Z) - EndToEndML: An Open-Source End-to-End Pipeline for Machine Learning Applications [0.2826977330147589]
We propose a web-based end-to-end pipeline that is capable of preprocessing, training, evaluating, and visualizing machine learning models.
Our library assists in recognizing, classifying, clustering, and predicting a wide range of multi-modal, multi-sensor datasets.
arXiv Detail & Related papers (2024-03-27T02:24:38Z) - Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas, can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - Surgical tool classification and localization: results and methods from
the MICCAI 2022 SurgToolLoc challenge [69.91670788430162]
We present the results of the SurgToolLoc 2022 challenge.
The goal was to leverage tool presence data as weak labels for machine learning models trained to detect tools.
We conclude by discussing these results in the broader context of machine learning and surgical data science.
arXiv Detail & Related papers (2023-05-11T21:44:39Z) - Explainable AI for Bioinformatics: Methods, Tools, and Applications [1.6855835471222005]
Explainable artificial intelligence (XAI) is an emerging field that aims to mitigate the opaqueness of black-box models.
In this paper, we discuss the importance of explainability with a focus on bioinformatics.
arXiv Detail & Related papers (2022-12-25T21:00:36Z) - Explainable, Domain-Adaptive, and Federated Artificial Intelligence in
Medicine [5.126042819606137]
We focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making.
Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains.
Federated learning enables learning large-scale models without exposing sensitive personal health information.
arXiv Detail & Related papers (2022-11-17T03:32:00Z) - A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z) - SIBILA: A novel interpretable ensemble of general-purpose machine
learning models applied to medical contexts [0.0]
SIBILA is an ensemble of machine learning and deep learning models.
It applies a range of interpretability algorithms to identify the most relevant input features.
It has been applied to two medical case studies to show its ability to predict in classification problems.
arXiv Detail & Related papers (2022-05-12T17:23:24Z) - Importance measures derived from random forests: characterisation and
extension [0.2741266294612776]
This thesis aims at improving the interpretability of models built by a specific family of machine learning algorithms.
Several mechanisms have been proposed to interpret these models and we aim along this thesis to improve their understanding.
arXiv Detail & Related papers (2021-06-17T13:23:57Z) - Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z) - Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.