A Reliable and Interpretable Framework of Multi-view Learning for Liver
Fibrosis Staging
- URL: http://arxiv.org/abs/2306.12054v1
- Date: Wed, 21 Jun 2023 06:53:51 GMT
- Title: A Reliable and Interpretable Framework of Multi-view Learning for Liver
Fibrosis Staging
- Authors: Zheyao Gao, Yuanye Liu, Fuping Wu, Nannan Shi, Yuxin Shi, Xiahai
Zhuang
- Abstract summary: Staging of liver fibrosis is important in the diagnosis and treatment planning of patients suffering from liver diseases.
Current deep learning-based methods using abdominal magnetic resonance imaging (MRI) usually take a sub-region of the liver as an input.
We formulate this task as a multi-view learning problem and employ multiple sub-regions of the liver.
- Score: 13.491056805108183
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Staging of liver fibrosis is important in the diagnosis and treatment
planning of patients suffering from liver diseases. Current deep learning-based
methods using abdominal magnetic resonance imaging (MRI) usually take a
sub-region of the liver as input, which could miss critical
information. To explore richer representations, we formulate this task as a
multi-view learning problem and employ multiple sub-regions of the liver.
Previously, features or predictions from multiple views were usually combined
implicitly, and uncertainty-aware methods have been proposed to improve
reliability. However, such methods may struggle to capture cross-view
representations, which can be important for accurate staging. Therefore, we propose a reliable multi-view
learning method with interpretable combination rules, which can model global
representations to improve the accuracy of predictions. Specifically, the
proposed method estimates uncertainties based on subjective logic to improve
reliability, and an explicit combination rule based on Dempster-Shafer
evidence theory is applied, offering good interpretability.
Moreover, a data-efficient transformer is introduced to capture representations
in the global view. Results on enhanced MRI data show that our method
delivers superior performance over existing multi-view learning methods.
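
The combination mechanism above is compact enough to sketch. Below is a minimal, illustrative Python/NumPy reconstruction of the standard subjective-logic step (not the authors' code; `evidence_to_opinion` and the evidence values are placeholders): each view's non-negative evidence vector is mapped to per-class beliefs plus an explicit uncertainty mass.

```python
import numpy as np

def evidence_to_opinion(evidence: np.ndarray):
    """Map a non-negative evidence vector (one entry per fibrosis stage)
    to a subjective-logic opinion: class beliefs plus an uncertainty mass."""
    n_classes = evidence.shape[-1]
    alpha = evidence + 1.0                        # Dirichlet parameters
    strength = alpha.sum(axis=-1, keepdims=True)  # Dirichlet strength S
    belief = evidence / strength                  # b_k = e_k / S
    uncertainty = n_classes / strength            # u = K / S, so beliefs + u sum to 1
    return belief, uncertainty

# A confident view versus an uninformative one (four stages, made-up evidence):
b1, u1 = evidence_to_opinion(np.array([40.0, 2.0, 1.0, 1.0]))  # u1 ~ 0.08
b2, u2 = evidence_to_opinion(np.array([1.0, 1.0, 1.0, 1.0]))   # u2 = 0.5
```

Opinions from the different sub-region views can then be fused with an explicit Dempster-Shafer rule; one such combination is sketched after the Trusted Multi-View Classification entry below.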
Related papers
- M2EF-NNs: Multimodal Multi-instance Evidence Fusion Neural Networks for Cancer Survival Prediction [24.323961146023358]
We propose a neural network model called M2EF-NNs for accurate cancer survival prediction.
To capture global information in the images, we use a pre-trained Vision Transformer (ViT) model.
We are the first to apply the Dempster-Shafer evidence theory (DST) to cancer survival prediction.
arXiv Detail & Related papers (2024-08-08T02:31:04Z)
- Confidence-aware multi-modality learning for eye disease screening [58.861421804458395]
We propose a novel multi-modality evidential fusion pipeline for eye disease screening.
It provides a measure of confidence for each modality and elegantly integrates the multi-modality information.
Experimental results on both public and internal datasets demonstrate that our model excels in robustness.
arXiv Detail & Related papers (2024-05-28T13:27:30Z)
- MERIT: Multi-view Evidential learning for Reliable and Interpretable liver fibrosis sTaging [29.542924813666698]
We propose a new multi-view method based on evidential learning, referred to as MERIT.
MERIT enables uncertainty of the predictions to enhance reliability, and employs a logic-based combination rule to improve interpretability.
Results showcase the effectiveness of the proposed MERIT, highlighting its reliability and offering both ad-hoc and post-hoc interpretability.
arXiv Detail & Related papers (2024-05-05T12:52:28Z)
- Assessing Uncertainty Estimation Methods for 3D Image Segmentation under Distribution Shifts [0.36832029288386137]
This paper explores the feasibility of using cutting-edge Bayesian and non-Bayesian methods to detect distributionally shifted samples.
We compare three distinct uncertainty estimation methods, each designed to capture either unimodal or multimodal aspects in the posterior distribution.
Our findings demonstrate that methods capable of addressing multimodal characteristics in the posterior distribution offer more dependable uncertainty estimates; a minimal ensemble-style sketch follows below.
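
As a rough illustration of the kind of estimate being compared, and only an assumption about the general methodology rather than this paper's exact protocol: average the softmax outputs of several ensemble members (or several stochastic forward passes) and use the predictive entropy of the mean to flag possibly shifted inputs.

```python
import torch

@torch.no_grad()
def predictive_entropy(models, x):
    """Entropy of the mean softmax over ensemble members; higher values
    suggest the input may be distributionally shifted."""
    probs = torch.stack([m(x).softmax(dim=-1) for m in models]).mean(dim=0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
```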
arXiv Detail & Related papers (2024-02-10T12:23:08Z)
- XAI for In-hospital Mortality Prediction via Multimodal ICU Data [57.73357047856416]
We propose an efficient, explainable AI solution for predicting in-hospital mortality via multimodal ICU data.
We employ multimodal learning in our framework, which can receive heterogeneous inputs from clinical data and make decisions.
Our framework can be easily transferred to other clinical tasks, which facilitates the discovery of crucial factors in healthcare research.
arXiv Detail & Related papers (2023-12-29T14:28:04Z)
- Improving Robustness and Reliability in Medical Image Classification with Latent-Guided Diffusion and Nested-Ensembles [4.249986624493547]
Ensemble deep learning has been shown to achieve high predictive accuracy and uncertainty estimation.
However, perturbations in the input images at test time can still lead to significant performance degradation.
LaDiNE is a novel and robust probabilistic method that is capable of inferring informative and invariant latent variables from the input images.
arXiv Detail & Related papers (2023-10-24T15:53:07Z)
- Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges [58.32937972322058]
"Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image (MedAI 2021)" competitions.
We present a comprehensive summary and analyze each contribution, highlight the strength of the best-performing methods, and discuss the possibility of clinical translations of such methods into the clinic.
arXiv Detail & Related papers (2023-07-30T16:08:45Z)
- SHAMSUL: Systematic Holistic Analysis to investigate Medical Significance Utilizing Local interpretability methods in deep learning for chest radiography pathology prediction [1.0138723409205497]
The study delves into the application of four well-established interpretability methods: Local Interpretable Model-agnostic Explanations (LIME), Shapley Additive exPlanations (SHAP), Gradient-weighted Class Activation Mapping (Grad-CAM), and Layer-wise Relevance Propagation (LRP).
Our analysis encompasses both single-label and multi-label predictions, providing a comprehensive and unbiased assessment through quantitative and qualitative investigations that are compared against human expert annotation; one of the four methods is sketched below.
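
Of the four methods, Grad-CAM is the most compact to sketch. The following is an illustrative PyTorch rendering of the standard Grad-CAM formulation, not the study's code; `model`, `target_layer`, and `class_idx` are placeholders for a chest-radiograph classifier, one of its convolutional layers, and a pathology class.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx):
    """Gradient-weighted class activation map for one input image."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    score = model(x)[0, class_idx]   # logit of the class being explained
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # pooled gradients per channel
    cam = F.relu((weights * acts["a"]).sum(dim=1))       # weighted activation sum
    return cam / cam.max().clamp_min(1e-12)              # normalise to [0, 1]
```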
arXiv Detail & Related papers (2023-07-16T11:10:35Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation; a possible form of such a loss is sketched below.
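
One plausible form of a cross reconstruction loss, stated purely as an assumption for illustration (the paper's exact objective may differ): each view is decoded from the other view's latent code, which pushes both latents toward the shared information.

```python
import torch.nn.functional as F

def cross_reconstruction_loss(enc1, enc2, dec1, dec2, x1, x2):
    """Reconstruct each view from the *other* view's latent representation;
    assumes the two encoders map into a latent space of the same shape."""
    z1, z2 = enc1(x1), enc2(x2)
    return F.mse_loss(dec1(z2), x1) + F.mse_loss(dec2(z1), x2)
```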
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Trusted Multi-View Classification [76.73585034192894]
We propose a novel multi-view classification method, termed trusted multi-view classification.
It provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
The proposed algorithm jointly utilizes multiple views to promote both classification reliability and robustness.
arXiv Detail & Related papers (2021-02-03T13:30:26Z)
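
The evidence-level integration in Trusted Multi-View Classification is a reduced Dempster-Shafer combination of per-view opinions. Here is a minimal sketch for two views over the same classes, reusing the beliefs-plus-uncertainty opinions from the subjective-logic sketch near the top; the function name is illustrative.

```python
import numpy as np

def combine_opinions(b1, u1, b2, u2):
    """Dempster-style fusion of two opinions (class beliefs b, uncertainty u)."""
    conflict = np.outer(b1, b2).sum() - (b1 * b2).sum()  # mass on disagreeing class pairs
    scale = 1.0 - conflict                               # renormalise after discarding conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / scale
    u = (u1 * u2) / scale
    return b, u
```

A fully uncertain view (u = 1, beliefs 0) leaves the other opinion unchanged, so an uninformative view cannot dilute a confident one; this property is what makes the evidence-level integration dynamic.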