Trusted Multi-View Classification with Dynamic Evidential Fusion
- URL: http://arxiv.org/abs/2204.11423v2
- Date: Wed, 27 Apr 2022 14:03:00 GMT
- Title: Trusted Multi-View Classification with Dynamic Evidential Fusion
- Authors: Zongbo Han, Changqing Zhang, Huazhu Fu, and Joey Tianyi Zhou
- Abstract summary: We propose a novel multi-view classification algorithm, termed trusted multi-view classification (TMC).
TMC provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
Both theoretical and experimental results validate the effectiveness of the proposed model in accuracy, robustness and trustworthiness.
- Score: 73.35990456162745
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing multi-view classification algorithms focus on promoting accuracy by
exploiting different views, typically integrating them into common
representations for follow-up tasks. Although these methods are effective, it is
also crucial to ensure the reliability of both the multi-view integration and
the final decision, especially for noisy, corrupted and out-of-distribution
data.
Dynamically assessing the trustworthiness of each view for different samples
could provide reliable integration. This can be achieved through uncertainty
estimation. With this in mind, we propose a novel multi-view classification
algorithm, termed trusted multi-view classification (TMC), providing a new
paradigm for multi-view learning by dynamically integrating different views at
an evidence level. The proposed TMC can promote classification reliability by
considering evidence from each view. Specifically, we introduce a variational
Dirichlet distribution to characterize the class probabilities, parameterize it
with evidence from different views, and integrate the views under
Dempster-Shafer theory. The unified learning framework induces accurate
uncertainty and accordingly endows the model with both reliability and
robustness against possible noise or corruption. Both theoretical and
experimental results validate the effectiveness of the proposed model in
accuracy, robustness and trustworthiness.
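The evidence-level fusion the abstract describes can be made concrete. Below is a minimal NumPy sketch, assuming each view's network emits a non-negative evidence vector e (e.g., a softplus over logits), from which the Dirichlet parameters alpha = e + 1 and a subjective-logic opinion follow; two opinions are then merged with the reduced Dempster's combination rule used in TMC. Function names and the toy numbers are illustrative.

```python
import numpy as np

def opinion_from_evidence(evidence):
    """Map a non-negative evidence vector to a subjective-logic opinion.

    Dirichlet parameters are alpha = evidence + 1; the Dirichlet strength
    S = sum(alpha) normalizes masses so belief.sum() + uncertainty == 1.
    """
    evidence = np.asarray(evidence, dtype=float)
    num_classes = evidence.size
    strength = evidence.sum() + num_classes   # sum of alpha = evidence + 1
    belief = evidence / strength              # per-class belief masses
    uncertainty = num_classes / strength      # mass left for "don't know"
    return belief, uncertainty

def dempster_combine(b1, u1, b2, u2):
    """Reduced Dempster's rule for two opinions over the same classes."""
    # Conflict: belief the two views assign to *different* classes.
    conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)
    scale = 1.0 - conflict
    belief = (b1 * b2 + b1 * u2 + b2 * u1) / scale
    uncertainty = (u1 * u2) / scale
    return belief, uncertainty

# Two views of one sample, three classes: view 1 is confident,
# view 2 has little evidence and therefore high uncertainty.
b1, u1 = opinion_from_evidence([9.0, 1.0, 0.0])
b2, u2 = opinion_from_evidence([0.4, 0.3, 0.3])
belief, uncertainty = dempster_combine(b1, u1, b2, u2)
print(belief, uncertainty)  # fused opinion stays close to the confident view
```

More than two views fuse by applying dempster_combine iteratively; because a high-uncertainty view contributes almost no belief mass, the integration adapts per sample, which is the dynamic behavior the abstract emphasizes.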
Related papers
- Uncertainty-Weighted Mutual Distillation for Multi-View Fusion [0.053801353100098995]
We propose a novel Multi-View Uncertainty-Weighted Mutual Distillation (MV-UWMD) method.
MV-UWMD improves prediction accuracy and consistency compared to existing multi-view learning approaches.
arXiv Detail & Related papers (2024-11-15T09:45:32Z)
- Dynamic Evidence Decoupling for Trusted Multi-view Learning [17.029245880233816]
We propose a Consistent and Complementary-aware trusted Multi-view Learning (CCML) method to solve this problem.
We first construct view opinions using evidential deep neural networks, which consist of belief mass vectors and uncertainty estimates.
The results validate the effectiveness of the dynamic evidence decoupling strategy and show that CCML significantly outperforms baselines on accuracy and reliability.
arXiv Detail & Related papers (2024-10-04T03:27:51Z)
- Navigating Conflicting Views: Harnessing Trust for Learning [5.4486293124577125]
We develop a computational trust-based discounting method to enhance the existing trustworthy framework (a sketch of the generic discounting operator appears after this list).
We evaluate our method on six real-world datasets, using Top-1 Accuracy, AUC-ROC for Uncertainty-Aware Prediction, Fleiss' Kappa, and a new metric called Multi-View Agreement with Ground Truth.
arXiv Detail & Related papers (2024-06-03T03:22:18Z)
- Confidence-aware multi-modality learning for eye disease screening [58.861421804458395]
We propose a novel multi-modality evidential fusion pipeline for eye disease screening.
It provides a measure of confidence for each modality and elegantly integrates the multi-modality information.
Experimental results on both public and internal datasets demonstrate that our model excels in robustness.
arXiv Detail & Related papers (2024-05-28T13:27:30Z)
- ELFNet: Evidential Local-global Fusion for Stereo Matching [17.675146012208124]
We introduce the Evidential Local-global Fusion (ELF) framework for stereo matching.
It endows both uncertainty estimation and confidence-aware fusion with trustworthy heads.
arXiv Detail & Related papers (2023-08-01T15:51:04Z)
- Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorous theoretical guarantees, our approach enables the IB to capture the intrinsic correlation between observations and semantic labels.
arXiv Detail & Related papers (2022-06-20T03:09:46Z)
- Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions [91.63716984911278]
We introduce a novel Mixture of Normal-Inverse Gamma distributions (MoNIG) algorithm, which efficiently estimates uncertainty in a principled way for adaptive integration of different modalities and produces trustworthy regression results (a sketch of the underlying NIG uncertainty quantities appears after this list).
Experimental results on both synthetic and different real-world data demonstrate the effectiveness and trustworthiness of our method on various multimodal regression tasks.
arXiv Detail & Related papers (2021-11-11T14:28:12Z)
- Trusted Multi-View Classification [76.73585034192894]
We propose a novel multi-view classification method, termed trusted multi-view classification.
It provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
The proposed algorithm jointly utilizes multiple views to promote both classification reliability and robustness.
arXiv Detail & Related papers (2021-02-03T13:30:26Z)
- Variational Inference for Deep Probabilistic Canonical Correlation Analysis [49.36636239154184]
We propose a deep probabilistic multi-view model that is composed of a linear multi-view layer and deep generative networks as observation models.
An efficient variational inference procedure is developed that approximates the posterior distributions of the latent probabilistic multi-view layer.
A generalization to models with an arbitrary number of views is also proposed.
arXiv Detail & Related papers (2020-03-09T17:51:15Z)
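For the trust-based discounting idea in "Navigating Conflicting Views" above, here is a minimal sketch of the standard subjective-logic discounting operator that such methods typically build on; the trust weight and names are illustrative assumptions, not that paper's exact computation.

```python
import numpy as np

def discount_opinion(belief, trust):
    """Subjective-logic trust discounting of a multinomial opinion.

    A trust weight in [0, 1] scales every belief mass; the removed mass
    flows into uncertainty, so the result is still a valid opinion
    (belief.sum() + uncertainty == 1).
    """
    belief = trust * np.asarray(belief, dtype=float)
    return belief, 1.0 - belief.sum()

# A conflicting view from a barely trusted source says almost nothing:
print(discount_opinion([0.7, 0.2, 0.0], trust=0.1))
# -> (array([0.07, 0.02, 0.  ]), 0.91)
```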
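For the MoNIG entry above, here is a sketch of the quantities a Normal-Inverse Gamma parameterization provides for regression, using the standard moments from deep evidential regression; it shows what NIG-based fusion operates on, not MoNIG's mixture operator itself.

```python
def nig_prediction(gamma, nu, alpha, beta):
    """Prediction and uncertainties from NIG(gamma, nu, alpha, beta).

    Standard evidential-regression moments; alpha > 1 is required for
    the expectations below to be finite.
    """
    mean = gamma                              # E[mu]: point prediction
    aleatoric = beta / (alpha - 1.0)          # E[sigma^2]: data noise
    epistemic = beta / (nu * (alpha - 1.0))   # Var[mu]: model uncertainty
    return mean, aleatoric, epistemic

# A modality backed by little evidence (small nu) keeps the same point
# prediction but admits much larger epistemic uncertainty.
print(nig_prediction(gamma=2.0, nu=0.5, alpha=2.0, beta=1.0))
# -> (2.0, 1.0, 2.0)
```

Roughly speaking, a mixture scheme such as MoNIG can then weight each modality by these uncertainties, so noisier modalities contribute less to the fused regression result.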
This list is automatically generated from the titles and abstracts of the papers on this site.