Exploring and Exploiting Uncertainty for Incomplete Multi-View
Classification
- URL: http://arxiv.org/abs/2304.05165v1
- Date: Tue, 11 Apr 2023 11:57:48 GMT
- Title: Exploring and Exploiting Uncertainty for Incomplete Multi-View
Classification
- Authors: Mengyao Xie, Zongbo Han, Changqing Zhang, Yichen Bai, Qinghua Hu
- Abstract summary: We propose an Uncertainty-induced Incomplete Multi-View Data Classification (UIMC) model to classify incomplete multi-view data.
Specifically, we model each missing view with a distribution conditioned on the available views, thereby introducing uncertainty.
Our method establishes state-of-the-art results in terms of both classification performance and trustworthiness.
- Score: 47.82610025809371
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classifying incomplete multi-view data is inevitable, since
arbitrarily missing views are widespread in real-world applications. Although
great progress has been achieved, it is still difficult for existing
incomplete multi-view methods to obtain trustworthy predictions due to the
inherently high uncertainty of missing views. First, a missing view is highly
uncertain, so it is not reasonable to provide a single deterministic
imputation. Second, the quality of the imputed data itself is also highly
uncertain. To explore and
exploit the uncertainty, we propose an Uncertainty-induced Incomplete
Multi-View Data Classification (UIMC) model to classify the incomplete
multi-view data under a stable and reliable framework. We construct a
distribution and sample multiple times to characterize the uncertainty of
missing views, and adaptively utilize them according to the sampling quality.
Accordingly, the proposed method realizes more perceivable imputation and
controllable fusion. Specifically, we model each missing view with a
distribution conditioned on the available views, thereby introducing
uncertainty. Then an evidence-based fusion strategy is employed to guarantee
the trustworthy integration of the imputed views. Extensive experiments are
conducted on multiple benchmark datasets, and our method establishes
state-of-the-art results in terms of both classification performance and
trustworthiness.
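To make the first idea concrete, here is a minimal sketch, in Python with NumPy, of characterizing a missing view with a conditional Gaussian and drawing several samples instead of a single deterministic imputation. It illustrates the general technique rather than the authors' implementation; the linear mean model, constant standard deviation, and the dimensions are assumptions made only for this demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_by_sampling(mu_net, sigma_net, available_view, n_samples=5):
    """Characterize a missing view with a conditional Gaussian
    N(mu(x_avail), diag(sigma(x_avail)^2)) and draw several samples
    instead of committing to one deterministic imputation."""
    mu = mu_net(available_view)        # hypothetical conditional-mean model
    sigma = sigma_net(available_view)  # hypothetical conditional-std model
    return mu + sigma * rng.standard_normal((n_samples, mu.shape[-1]))

# Toy stand-ins for the conditional model (illustrative assumptions only).
d_avail, d_miss = 8, 4
W = rng.standard_normal((d_avail, d_miss)) * 0.1
mu_net = lambda x: x @ W                     # linear mean as a placeholder
sigma_net = lambda x: np.full(d_miss, 0.5)   # constant per-dimension std

x_available = rng.standard_normal(d_avail)
samples = impute_by_sampling(mu_net, sigma_net, x_available)
print(samples.shape)  # (5, 4): five plausible versions of the missing view
```

In UIMC the multiple samples would additionally be weighted by their estimated quality before fusion; the sketch stops at the sampling step to stay self-contained.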
Related papers
- Uncertainty Quantification via Hölder Divergence for Multi-View Representation Learning [18.419742575630217]
This paper introduces a novel algorithm based on Hölder Divergence (HD) to enhance the reliability of multi-view learning.
Uncertainty from different modalities is integrated through Dempster-Shafer theory, thereby generating a comprehensive result.
Mathematically, HD proves to better measure the "distance" between the real data distribution and the predictive distribution of the model.
arXiv Detail & Related papers (2024-10-29T04:29:44Z)
- Evidential Deep Partial Multi-View Classification With Discount Fusion [24.139495744683128]
We propose a novel framework called Evidential Deep Partial Multi-View Classification (EDP-MVC)
We use K-means imputation to address missing views, creating a complete set of multi-view data.
The potential conflicts and uncertainties within this imputed data can affect the reliability of downstream inferences.
arXiv Detail & Related papers (2024-08-23T14:50:49Z)
- Regularized Contrastive Partial Multi-view Outlier Detection [76.77036536484114]
We propose a novel method named Regularized Contrastive Partial Multi-view Outlier Detection (RCPMOD)
In this framework, we utilize contrastive learning to learn view-consistent information and distinguish outliers by the degree of consistency.
Experimental results on four benchmark datasets demonstrate that our proposed approach could outperform state-of-the-art competitors.
arXiv Detail & Related papers (2024-08-02T14:34:27Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- Uncertainty Estimation for Multi-view Data: The Power of Seeing the Whole Picture [5.868139834982011]
Uncertainty estimation is essential to make neural networks trustworthy in real-world applications.
We propose a new multi-view classification framework for better uncertainty estimation and out-of-domain sample detection.
arXiv Detail & Related papers (2022-10-06T04:47:51Z)
- Trusted Multi-View Classification with Dynamic Evidential Fusion [73.35990456162745]
We propose a novel multi-view classification algorithm, termed trusted multi-view classification (TMC)
TMC provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level; a minimal sketch of this evidence-level combination appears after this list.
Both theoretical and experimental results validate the effectiveness of the proposed model in accuracy, robustness and trustworthiness.
arXiv Detail & Related papers (2022-04-25T03:48:49Z)
- Uncertainty-Aware Multi-View Representation Learning [53.06828186507994]
We devise a novel unsupervised multi-view learning approach, termed Dynamic Uncertainty-Aware Networks (DUA-Nets)
Guided by the uncertainty of data estimated from the generation perspective, intrinsic information from multiple views is integrated to obtain noise-free representations.
Our model achieves superior performance in extensive experiments and shows robustness to noisy data.
arXiv Detail & Related papers (2022-01-15T07:16:20Z)
- Trusted Multi-View Classification [76.73585034192894]
We propose a novel multi-view classification method, termed trusted multi-view classification.
It provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
The proposed algorithm jointly utilizes multiple views to promote both classification reliability and robustness.
arXiv Detail & Related papers (2021-02-03T13:30:26Z)
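Several of the entries above, as well as the evidence-based fusion in UIMC itself, combine per-view evidence with Dempster-Shafer theory. The sketch below shows the reduced combination rule for Dirichlet opinions used in the Trusted Multi-View Classification line of work, in Python with NumPy; the toy Dirichlet parameters are made-up values for illustration and do not come from any of the papers.

```python
import numpy as np

def ds_combine(alpha1, alpha2):
    """Reduced Dempster-Shafer combination of two Dirichlet opinions.
    alpha: Dirichlet parameters of shape (K,); evidence is e = alpha - 1."""
    K = alpha1.shape[0]
    b1, u1 = (alpha1 - 1) / alpha1.sum(), K / alpha1.sum()  # belief masses, uncertainty
    b2, u2 = (alpha2 - 1) / alpha2.sum(), K / alpha2.sum()
    conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)   # mass on disagreeing classes
    scale = 1.0 / (1.0 - conflict)
    b = scale * (b1 * b2 + b1 * u2 + b2 * u1)               # fused belief
    u = scale * u1 * u2                                     # fused uncertainty
    S = K / u                                               # recover Dirichlet strength
    return b * S + 1.0                                      # fused Dirichlet parameters

# Illustrative (made-up) per-view opinions for a 3-class problem.
alpha_view1 = np.array([9.0, 2.0, 1.0])   # confident evidence for class 0
alpha_view2 = np.array([4.0, 3.0, 1.0])   # weaker, partially agreeing evidence
print(ds_combine(alpha_view1, alpha_view2))
```

More than two views are fused by applying the rule pairwise; a large fused uncertainty indicates that the (possibly imputed) views disagree, which is the signal these evidential methods use to temper the final prediction.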
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.