Reliable Multimodality Eye Disease Screening via Mixture of Student's t
Distributions
- URL: http://arxiv.org/abs/2303.09790v4
- Date: Tue, 29 Aug 2023 14:16:04 GMT
- Title: Reliable Multimodality Eye Disease Screening via Mixture of Student's t
Distributions
- Authors: Ke Zou and Tian Lin and Xuedong Yuan and Haoyu Chen and Xiaojing Shen
and Meng Wang and Huazhu Fu
- Abstract summary: We introduce a novel multimodality evidential fusion pipeline for eye disease screening, EyeMoSt.
Our model estimates both local uncertainty for unimodality and global uncertainty for the fusion modality to produce reliable classification results.
Our experimental findings on both public and in-house datasets show that our model is more reliable than current methods.
- Score: 49.4545260500952
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multimodality eye disease screening is crucial in ophthalmology, as it
integrates information from diverse sources so that the modalities complement
one another. However, existing methods rarely assess the reliability of each
modality, and directly fusing an unreliable modality may
cause screening errors. To address this issue, we introduce a novel
multimodality evidential fusion pipeline for eye disease screening, EyeMoSt,
which provides a measure of confidence for unimodality and elegantly integrates
the multimodality information from a multi-distribution fusion perspective.
Specifically, our model estimates both local uncertainty for unimodality and
global uncertainty for the fusion modality to produce reliable classification
results. More importantly, the proposed mixture of Student's $t$ distributions
adaptively integrates different modalities to endow the model with heavy-tailed
properties, increasing robustness and reliability. Our experimental findings on
both public and in-house datasets show that our model is more reliable than
current methods. Additionally, EyeMoSt can also serve as a data quality
discriminator, enabling reliable decision-making for multimodality eye disease
screening.
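The heavy-tailed fusion idea can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: each modality's prediction is modeled as a Student's $t$ distribution, and the sketch weights modalities inversely by their predictive variance before evaluating the mixture density. The parameter values and the inverse-variance weighting rule are assumptions for illustration (EyeMoSt derives its weights from evidential uncertainty estimates).

```python
import math

def t_pdf(x, df, loc, scale):
    """Density of a Student's t distribution with df degrees of
    freedom, location loc, and scale parameter scale."""
    z = (x - loc) / scale
    coef = math.gamma((df + 1) / 2) / (
        math.sqrt(df * math.pi) * math.gamma(df / 2) * scale)
    return coef * (1.0 + z * z / df) ** (-(df + 1) / 2)

def fuse_student_t(params, x):
    """Mixture-of-Student's-t fusion sketch: weight each modality
    inversely by its predictive variance (which exists for df > 2),
    then evaluate the weighted mixture density at x."""
    variances = [s * s * df / (df - 2) for df, _, s in params]
    inv = [1.0 / v for v in variances]
    weights = [w / sum(inv) for w in inv]
    density = sum(w * t_pdf(x, df, mu, s)
                  for w, (df, mu, s) in zip(weights, params))
    return weights, density

# Two modalities as (df, loc, scale): a confident one and a noisy,
# heavier-tailed one (lower df = heavier tails).
params = [(5.0, 0.8, 0.1), (3.0, 0.2, 0.5)]
weights, density = fuse_student_t(params, x=0.8)
print(weights)  # the low-variance modality dominates the mixture
```

The low degrees of freedom keep the tails heavy, so an outlying observation from one modality inflates the fused density less than a Gaussian mixture would, which is the robustness property the abstract refers to.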
Related papers
- Latent Distribution Decoupling: A Probabilistic Framework for Uncertainty-Aware Multimodal Emotion Recognition [7.25361375272096]
Multimodal multi-label emotion recognition aims to identify the concurrent presence of multiple emotions in multimodal data.
Existing studies overlook the impact of aleatoric uncertainty, which is the inherent noise in the multimodal data.
This paper proposes a Latent emotional Distribution Decomposition with Uncertainty perception framework.
arXiv Detail & Related papers (2025-02-19T18:53:23Z)
- EsurvFusion: An evidential multimodal survival fusion model based on Gaussian random fuzzy numbers [13.518282190712348]
EsurvFusion is designed to combine multimodal data at the decision level.
It estimates modality-level reliability through a reliability discounting layer.
This is the first work that studies multimodal survival analysis with both uncertainty and reliability.
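The modality-level reliability discounting mentioned above can be illustrated with the classical Shafer discounting operation on a mass function: each focal mass is scaled by a reliability coefficient alpha, and the remainder is transferred to total ignorance (the whole frame). The frame, masses, and alpha value here are illustrative assumptions; EsurvFusion's discounting layer is learned and operates on Gaussian random fuzzy numbers rather than plain mass functions.

```python
def discount(mass, alpha, frame):
    """Shafer discounting: scale each focal mass by a reliability
    coefficient alpha in [0, 1] and move the remaining 1 - alpha
    onto the full frame of discernment (total ignorance)."""
    discounted = {A: alpha * m for A, m in mass.items()}
    discounted[frame] = discounted.get(frame, 0.0) + (1.0 - alpha)
    return discounted

# Illustrative survival-analysis frame with two outcomes.
frame = frozenset({"survive", "event"})
mass = {frozenset({"survive"}): 0.7, frozenset({"event"}): 0.3}
discounted = discount(mass, alpha=0.8, frame=frame)
print(discounted)
```

A modality judged unreliable (small alpha) thus contributes mostly ignorance to the fusion instead of a confident but possibly wrong opinion.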
arXiv Detail & Related papers (2024-12-02T07:35:29Z)
- M2EF-NNs: Multimodal Multi-instance Evidence Fusion Neural Networks for Cancer Survival Prediction [24.323961146023358]
We propose a neural network model called M2EF-NNs for accurate cancer survival prediction.
To capture global information in the images, we use a pre-trained Vision Transformer (ViT) model.
We are the first to apply the Dempster-Shafer evidence theory (DST) to cancer survival prediction.
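As a reference point for DST-based fusion, the classical Dempster's rule of combination can be sketched as follows; the class names and masses are illustrative assumptions, and this is the textbook rule rather than the M2EF-NNs model itself.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination: intersect focal elements of
    two mass functions, accumulate the product masses, and
    renormalize by 1 - K, where K is the total conflicting mass."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b  # empty intersection = conflict
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {A: m / (1.0 - conflict) for A, m in combined.items()}

# Two illustrative sources over a two-class frame.
m1 = {frozenset({"high_risk"}): 0.6,
      frozenset({"high_risk", "low_risk"}): 0.4}
m2 = {frozenset({"high_risk"}): 0.5,
      frozenset({"low_risk"}): 0.3,
      frozenset({"high_risk", "low_risk"}): 0.2}
fused = dempster_combine(m1, m2)
print(fused)
```

Using frozensets as focal elements keeps the rule generic over any finite frame of discernment.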
arXiv Detail & Related papers (2024-08-08T02:31:04Z)
- Confidence-aware multi-modality learning for eye disease screening [58.861421804458395]
We propose a novel multi-modality evidential fusion pipeline for eye disease screening.
It provides a measure of confidence for each modality and elegantly integrates the multi-modality information.
Experimental results on both public and internal datasets demonstrate that our model excels in robustness.
arXiv Detail & Related papers (2024-05-28T13:27:30Z)
- Multimodal Fusion on Low-quality Data: A Comprehensive Survey [110.22752954128738]
This paper surveys the common challenges and recent advances of multimodal fusion in the wild.
We identify four main challenges that are faced by multimodal fusion on low-quality data.
This new taxonomy will enable researchers to understand the state of the field and identify several potential directions.
arXiv Detail & Related papers (2024-04-27T07:22:28Z)
- Provable Dynamic Fusion for Low-Quality Multimodal Data [94.39538027450948]
Dynamic multimodal fusion emerges as a promising learning paradigm.
Despite its widespread use, theoretical justifications in this field are still notably lacking.
This paper provides a theoretical understanding of dynamic multimodal fusion from the generalization perspective.
A novel multimodal fusion framework termed Quality-aware Multimodal Fusion (QMF) is proposed, which can improve the performance in terms of classification accuracy and model robustness.
arXiv Detail & Related papers (2023-06-03T08:32:35Z)
- Trusted Multi-View Classification with Dynamic Evidential Fusion [73.35990456162745]
We propose a novel multi-view classification algorithm, termed trusted multi-view classification (TMC).
TMC provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
Both theoretical and experimental results validate the effectiveness of the proposed model in accuracy, robustness and trustworthiness.
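The evidence-level integration can be sketched with a subjective-logic opinion combination of the kind used in TMC-style methods: each view yields a belief vector b and an uncertainty mass u, and two views are fused by a reduced Dempster combination. The rule below and the input numbers are an illustrative sketch under that formulation, not the paper's exact implementation.

```python
def combine_opinions(b1, u1, b2, u2):
    """Combine two subjective-logic opinions (belief vector b,
    uncertainty mass u) at the evidence level via a reduced
    Dempster combination: conflict is the belief the two views
    assign to different classes, and the result is renormalized."""
    conflict = sum(x * y for i, x in enumerate(b1)
                   for j, y in enumerate(b2) if i != j)
    scale = 1.0 - conflict
    b = [(x * y + x * u2 + y * u1) / scale for x, y in zip(b1, b2)]
    u = u1 * u2 / scale
    return b, u

# Two views over three classes; view 1 is confident, view 2 is not.
# Each opinion satisfies sum(b) + u = 1.
b, u = combine_opinions([0.7, 0.1, 0.1], 0.1, [0.2, 0.2, 0.2], 0.4)
print(b, u)
```

Because uncertainty masses multiply, agreement between views shrinks the combined uncertainty, while the conflict term penalizes views that back different classes.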
arXiv Detail & Related papers (2022-04-25T03:48:49Z)
- Trusted Multi-View Classification [76.73585034192894]
We propose a novel multi-view classification method, termed trusted multi-view classification.
It provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
The proposed algorithm jointly utilizes multiple views to promote both classification reliability and robustness.
arXiv Detail & Related papers (2021-02-03T13:30:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.