Reliable Multimodality Eye Disease Screening via Mixture of Student's t
Distributions
- URL: http://arxiv.org/abs/2303.09790v4
- Date: Tue, 29 Aug 2023 14:16:04 GMT
- Title: Reliable Multimodality Eye Disease Screening via Mixture of Student's t
Distributions
- Authors: Ke Zou and Tian Lin and Xuedong Yuan and Haoyu Chen and Xiaojing Shen
and Meng Wang and Huazhu Fu
- Abstract summary: We introduce a novel multimodality evidential fusion pipeline for eye disease screening, EyeMoSt.
Our model estimates both local uncertainty for unimodality and global uncertainty for the fusion modality to produce reliable classification results.
Our experimental findings on both public and in-house datasets show that our model is more reliable than current methods.
- Score: 49.4545260500952
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multimodality eye disease screening is crucial in ophthalmology as it
integrates information from diverse sources to complement their respective
performances. However, existing methods are weak at assessing the
reliability of each single modality, and directly fusing an unreliable modality may
cause screening errors. To address this issue, we introduce a novel
multimodality evidential fusion pipeline for eye disease screening, EyeMoSt,
which provides a measure of confidence for unimodality and elegantly integrates
the multimodality information from a multi-distribution fusion perspective.
Specifically, our model estimates both local uncertainty for unimodality and
global uncertainty for the fusion modality to produce reliable classification
results. More importantly, the proposed mixture of Student's $t$ distributions
adaptively integrates different modalities to endow the model with heavy-tailed
properties, increasing robustness and reliability. Our experimental findings on
both public and in-house datasets show that our model is more reliable than
current methods. Additionally, EyeMoSt has the potential to serve as a
data quality discriminator, enabling reliable decision-making for multimodality
eye disease screening.
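To make the fusion idea above concrete, below is a minimal numerical sketch of combining per-modality Student's t predictions. The precision-weighted mean and the minimum-degrees-of-freedom rule are simplifying assumptions chosen for illustration; they convey the heavy-tailed, confidence-aware flavor described in the abstract but are not the paper's exact update equations.

```python
import numpy as np

def student_t_variance(sigma2, nu):
    """Predictive variance of a Student's t with squared scale sigma2 and dof nu (nu > 2)."""
    return sigma2 * nu / (nu - 2.0)

def fuse_student_t(mus, sigma2s, nus):
    """Fuse per-modality Student's t predictions (mu_m, sigma2_m, nu_m).

    Hypothetical moment-matching fusion, NOT the exact EyeMoSt update:
    - precision-weighted mean, so low-variance (confident) modalities dominate;
    - fused dof taken as the minimum, keeping the heavier tail so a single
      unreliable modality cannot make the fusion overconfident.
    Assumes all nu > 2 so the variances are finite.
    """
    vars_ = np.array([student_t_variance(s2, nu) for s2, nu in zip(sigma2s, nus)])
    w = (1.0 / vars_) / np.sum(1.0 / vars_)      # confidence weights
    mu_f = float(np.sum(w * np.array(mus)))      # fused mean
    nu_f = float(min(nus))                       # heavy-tailed: smallest dof wins
    var_f = float(1.0 / np.sum(1.0 / vars_))     # fused predictive variance
    sigma2_f = var_f * (nu_f - 2.0) / nu_f       # back out the t scale
    return mu_f, sigma2_f, nu_f

# Example: a confident fundus branch and a noisy OCT branch.
print(fuse_student_t(mus=[0.9, 0.4], sigma2s=[0.05, 0.50], nus=[8.0, 3.0]))
```

In this toy example the noisy modality gets a weight of roughly 4%, so it barely shifts the fused mean, while the fused dof of 3 keeps the combined prediction heavy-tailed rather than overconfident.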
Related papers
- M2EF-NNs: Multimodal Multi-instance Evidence Fusion Neural Networks for Cancer Survival Prediction [24.323961146023358]
We propose a neural network model called M2EF-NNs for accurate cancer survival prediction.
To capture global information in the images, we use a pre-trained Vision Transformer (ViT) model.
We are the first to apply the Dempster-Shafer evidence theory (DST) to cancer survival prediction.
arXiv Detail & Related papers (2024-08-08T02:31:04Z)
- Trustworthy Contrast-enhanced Brain MRI Synthesis [27.43375565176473]
Multi-modality medical image translation aims to synthesize CE-MRI images from other modalities.
We introduce TrustI2I, a novel trustworthy method that reformulates multi-to-one medical image translation problem as a multimodal regression problem.
arXiv Detail & Related papers (2024-07-10T05:17:01Z)
- Confidence-aware multi-modality learning for eye disease screening [58.861421804458395]
We propose a novel multi-modality evidential fusion pipeline for eye disease screening.
It provides a measure of confidence for each modality and elegantly integrates the multi-modality information.
Experimental results on both public and internal datasets demonstrate that our model excels in robustness.
arXiv Detail & Related papers (2024-05-28T13:27:30Z)
- Multimodal Fusion on Low-quality Data: A Comprehensive Survey [110.22752954128738]
This paper surveys the common challenges and recent advances of multimodal fusion in the wild.
We identify four main challenges faced by multimodal fusion on low-quality data and organize them into a new taxonomy.
This taxonomy will enable researchers to understand the state of the field and identify promising directions.
arXiv Detail & Related papers (2024-04-27T07:22:28Z)
- Assessing Uncertainty Estimation Methods for 3D Image Segmentation under Distribution Shifts [0.36832029288386137]
This paper explores the feasibility of using cutting-edge Bayesian and non-Bayesian methods to detect distributionally shifted samples.
We compare three distinct uncertainty estimation methods, each designed to capture either unimodal or multimodal aspects in the posterior distribution.
Our findings demonstrate that methods capable of addressing multimodal characteristics in the posterior distribution offer more dependable uncertainty estimates.
arXiv Detail & Related papers (2024-02-10T12:23:08Z)
- Provable Dynamic Fusion for Low-Quality Multimodal Data [94.39538027450948]
Dynamic multimodal fusion emerges as a promising learning paradigm.
Despite its widespread use, theoretical justifications for dynamic fusion are still notably lacking.
This paper provides a theoretical understanding of when dynamic fusion helps, under a popular multimodal fusion framework, from the generalization perspective.
A novel multimodal fusion framework termed Quality-aware Multimodal Fusion (QMF) is proposed, which can improve the performance in terms of classification accuracy and model robustness.
arXiv Detail & Related papers (2023-06-03T08:32:35Z)
- Trusted Multi-View Classification with Dynamic Evidential Fusion [73.35990456162745]
We propose a novel multi-view classification algorithm, termed trusted multi-view classification (TMC).
TMC provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level (see the evidential-fusion sketch after this list).
Both theoretical and experimental results validate the effectiveness of the proposed model in accuracy, robustness and trustworthiness.
arXiv Detail & Related papers (2022-04-25T03:48:49Z)
- Trusted Multi-View Classification [76.73585034192894]
We propose a novel multi-view classification method, termed trusted multi-view classification.
It provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
The proposed algorithm jointly utilizes multiple views to promote both classification reliability and robustness.
arXiv Detail & Related papers (2021-02-03T13:30:26Z)
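Several of the entries above fuse modalities or views "at an evidence level." As a rough illustration, the sketch below implements a reduced Dempster's combination rule in the style of TMC for two views: each view emits per-class belief masses plus an overall uncertainty mass, and conflicting mass is renormalized away. The specific numbers, variable names, and the two-view restriction are illustrative assumptions, not the published implementation.

```python
import numpy as np

def combine_beliefs(b1, u1, b2, u2):
    """Reduced Dempster's rule over K singleton classes plus an 'unknown' mass.

    Each view supplies per-class belief masses b_k and an overall uncertainty
    mass u, with sum(b) + u = 1. Mass assigned to conflicting class pairs (C)
    is discarded and the remainder renormalized.
    """
    b1, b2 = np.asarray(b1, float), np.asarray(b2, float)
    C = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)   # mass on conflicting class pairs
    scale = 1.0 / (1.0 - C)
    b = scale * (b1 * b2 + b1 * u2 + b2 * u1)        # agreeing + one-sided evidence
    u = scale * (u1 * u2)                            # both views uncertain
    return b, u

# A confident view combined with a vague view over 3 classes.
b, u = combine_beliefs([0.7, 0.1, 0.1], 0.1, [0.2, 0.2, 0.2], 0.4)
print(b, u, b.sum() + u)  # fused masses sum to 1
```

Note how the fused uncertainty mass (0.0625) is smaller than either view's, while the confident view's favored class keeps most of the belief: agreement between views reduces uncertainty, and a vague view cannot overturn a confident one.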