Deep Transparent Prediction through Latent Representation Analysis
- URL: http://arxiv.org/abs/2009.07044v2
- Date: Sun, 20 Sep 2020 22:06:43 GMT
- Title: Deep Transparent Prediction through Latent Representation Analysis
- Authors: D. Kollias, N. Bouas, Y. Vlaxos, V. Brillakis, M. Seferis, I. Kollia,
L. Sukissian, J. Wingate, and S. Kollias
- Abstract summary: The paper presents a novel deep learning approach, which extracts latent information from trained Deep Neural Networks (DNNs) and derives concise representations that are analyzed in an effective, unified way for prediction purposes.
Transparency and high prediction accuracy are the targeted goals of the proposed approach.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper presents a novel deep learning approach, which extracts latent
information from trained Deep Neural Networks (DNNs) and derives concise
representations that are analyzed in an effective, unified way for prediction
purposes. It is well known that DNNs are capable of analyzing complex data;
however, they lack transparency in their decision making, in the sense that it
is not straightforward to justify their prediction, or to visualize the
features on which the decision was based. Moreover, they generally require
large amounts of data in order to learn and become able to adapt to different
environments. This makes their use difficult in healthcare, where trust and
personalization are key issues. Transparency and high prediction
accuracy are the targeted goals of the proposed approach. It includes both
supervised DNN training and unsupervised learning of latent variables extracted
from the trained DNNs. Domain Adaptation from multiple sources is also
presented as an extension, where the extracted latent variable representations
are used to generate predictions in other, non-annotated, environments.
Successful application is illustrated through a large experimental study in
various fields: prediction of Parkinson's disease from MRI and DaTScans;
prediction of COVID-19 and pneumonia from CT scans and X-rays; optical
character verification in retail food packaging.
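A minimal sketch of how such a pipeline might look (an illustration, not the authors' implementation): latent vectors are read from a trained DNN's penultimate layer, clustered without supervision, and new inputs are classified by nearest centroid, so each decision can be traced to an inspectable group of training cases. The `model.backbone` attribute, the helper names, and the choice of k-means are assumptions made for the example.

```python
import numpy as np
import torch
from sklearn.cluster import KMeans

@torch.no_grad()
def extract_latents(model, loader, device="cpu"):
    """Collect penultimate-layer activations (the latent variables)
    from a trained DNN; assumes `model.backbone` exposes that layer."""
    model.eval()
    feats, labels = [], []
    for x, y in loader:
        z = model.backbone(x.to(device))
        feats.append(z.flatten(1).cpu().numpy())
        labels.append(y.numpy())
    return np.concatenate(feats), np.concatenate(labels)

def fit_latent_clusters(feats, labels, n_clusters=10):
    """Unsupervised step: cluster the latents, then name each
    cluster by the majority class of its members."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(feats)
    majority = {c: int(np.bincount(labels[km.labels_ == c]).argmax())
                for c in range(n_clusters)}
    return km, majority

@torch.no_grad()
def predict_transparent(model, x, km, majority, device="cpu"):
    """Classify by nearest latent centroid; the matched centroid and
    its training members can be shown to justify the prediction."""
    z = model.backbone(x.to(device)).flatten(1).cpu().numpy()
    return np.array([majority[c] for c in km.predict(z)])
```

The transparency comes from the lookup step: instead of an opaque softmax score, each prediction points at a concrete centroid whose member cases can be reviewed, and the same cluster structure is what a domain-adaptation extension could reuse in non-annotated target environments.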
Related papers
- Explainable Diagnosis Prediction through Neuro-Symbolic Integration [11.842565087408449]
We use neuro-symbolic methods, specifically Logical Neural Networks (LNNs), to develop explainable models for diagnosis prediction.
Our models, particularly $M_{\text{multi-pathway}}$ and $M_{\text{comprehensive}}$, demonstrate superior performance over traditional models.
These findings highlight the potential of neuro-symbolic approaches in bridging the gap between accuracy and explainability in healthcare AI applications.
arXiv Detail & Related papers (2024-10-01T22:47:24Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions are unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe, in contrast, that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Visual Interpretable and Explainable Deep Learning Models for Brain Tumor MRI and COVID-19 Chest X-ray Images [0.0]
We evaluate attribution methods for illuminating how deep neural networks analyze medical images.
We attribute predictions made by recent deep convolutional neural network models on brain tumor MRI and COVID-19 chest X-ray datasets.
arXiv Detail & Related papers (2022-08-01T16:05:14Z)
- Do Deep Neural Networks Always Perform Better When Eating More Data? [82.6459747000664]
We design experiments under Independent and Identically Distributed (IID) and Out-of-Distribution (OOD) conditions.
Under the IID condition, the amount of information determines the effectiveness of each sample, while the contribution of samples and the differences between classes determine the amount of class information.
Under the OOD condition, the cross-domain degree of samples determines their contributions, and the bias-fitting caused by irrelevant elements is a significant factor in cross-domain performance.
arXiv Detail & Related papers (2022-05-30T15:40:33Z)
- Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows [67.23405590815602]
Graph Neural Networks (GNNs) are widely adopted in digital pathology.
Existing explainers discover an explanatory subgraph relevant to the prediction.
An explanatory subgraph should be not only necessary for prediction, but also sufficient to uncover the most predictive regions.
We propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs.
arXiv Detail & Related papers (2021-12-18T10:19:01Z)
- Confidence Aware Neural Networks for Skin Cancer Detection [12.300911283520719]
We present three different methods for quantifying uncertainties for skin cancer detection from images.
The obtained results reveal that the predictive uncertainty estimation methods are capable of flagging risky and erroneous predictions.
We also demonstrate that ensemble approaches are more reliable at capturing uncertainty during inference.
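A minimal sketch of that ensemble idea, assuming a list of independently trained PyTorch classifiers; the entropy threshold is an illustrative placeholder, not a value from the paper:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_flag(models, x, threshold=0.5):
    """Average the softmax outputs of an ensemble, then flag
    high-entropy (risky) predictions for human review."""
    probs = torch.stack([F.softmax(m(x), dim=-1) for m in models]).mean(0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
    return probs.argmax(-1), entropy, entropy > threshold
```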
arXiv Detail & Related papers (2021-07-19T19:21:57Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
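One plausible reading of the cross reconstruction loss, sketched under the assumption of per-view encoder/decoder pairs (the paper's exact formulation may differ):

```python
import torch.nn.functional as F

def cross_reconstruction_loss(enc_a, enc_b, dec_a, dec_b, x_a, x_b):
    """Each view's latent must reconstruct the *other* view, which
    pushes both latents toward the information the views share."""
    z_a, z_b = enc_a(x_a), enc_b(x_b)
    # reconstruct view B from view A's latent, and vice versa
    return F.mse_loss(dec_b(z_a), x_b) + F.mse_loss(dec_a(z_b), x_a)
```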
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Interpreting Uncertainty in Model Predictions For COVID-19 Diagnosis [0.0]
COVID-19 has created a need for assistive tools that enable faster diagnosis alongside typical lab swab testing.
Traditional convolutional networks produce point-estimate predictions and therefore fail to capture uncertainty.
We develop a visualization framework that addresses the interpretability of uncertainty and its components, with predictive uncertainty computed by a Bayesian Convolutional Neural Network.
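Bayesian CNNs are commonly approximated with Monte-Carlo dropout; a minimal sketch of that approximation (not necessarily the authors' exact setup):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    """Keep dropout active at test time and read the spread across
    stochastic forward passes as a per-class uncertainty estimate."""
    model.train()  # caveat: also switches BatchNorm to training mode;
                   # real code would enable only the dropout layers
    samples = torch.stack([F.softmax(model(x), dim=-1)
                           for _ in range(n_samples)])
    return samples.mean(0), samples.var(0)  # prediction, uncertainty
```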
arXiv Detail & Related papers (2020-10-26T01:27:29Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
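A simplified, non-adversarial stand-in for the diversity objective this entry describes, assuming an ensemble of PyTorch classifiers; `beta` is an illustrative weight, not a value from the paper:

```python
import torch.nn.functional as F

def diverse_ensemble_loss(logits_list, targets, beta=0.1):
    """Task loss plus a diversity bonus: penalize agreement between
    the ensemble members' predictive distributions."""
    task = sum(F.cross_entropy(lg, targets) for lg in logits_list)
    probs = [F.softmax(lg, dim=-1) for lg in logits_list]
    agreement = 0.0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            agreement = agreement + (probs[i] * probs[j]).sum(-1).mean()
    return task + beta * agreement
```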
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.