Reliable Conflictive Multi-View Learning
- URL: http://arxiv.org/abs/2402.16897v2
- Date: Wed, 28 Feb 2024 09:58:46 GMT
- Title: Reliable Conflictive Multi-View Learning
- Authors: Cai Xu, Jiajun Si, Ziyu Guan, Wei Zhao, Yue Wu, Xiyue Gao
- Abstract summary: We develop an Evidential Conflictive Multi-view Learning (ECML) method for this problem.
ECML learns view-specific evidence, defined as the amount of support for each category collected from the data.
In the multi-view fusion stage, we propose a conflictive opinion aggregation strategy.
- Score: 17.472467781912837
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-view learning aims to combine multiple features to achieve more
comprehensive descriptions of data. Most previous works assume that multiple
views are strictly aligned. However, real-world multi-view data may contain
low-quality conflictive instances, which show conflictive information in
different views. Previous methods for this problem mainly focus on eliminating
the conflictive data instances by removing them or replacing conflictive views.
Nevertheless, real-world applications usually require making decisions for
conflictive instances rather than only eliminating them. To solve this, we
point out a new Reliable Conflictive Multi-view Learning (RCML) problem, which
requires the model to provide decision results and attached reliabilities for
conflictive multi-view data. We develop an Evidential Conflictive Multi-view
Learning (ECML) method for this problem. ECML first learns view-specific
evidence, defined as the amount of support for each category collected from
the data. Then, we construct view-specific opinions consisting
of decision results and reliability. In the multi-view fusion stage, we propose
a conflictive opinion aggregation strategy and theoretically prove this
strategy can exactly model the relation of multi-view common and view-specific
reliabilities. Experiments performed on 6 datasets verify the effectiveness of
ECML.
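The evidence-to-opinion step described in the abstract follows the subjective-logic recipe: per-class evidence induces a Dirichlet distribution, whose normalized masses give a belief per class plus an uncertainty mass. The sketch below illustrates that step and a simple averaging fusion; the function names are illustrative, and the averaging rule is a hedged stand-in for the paper's conflictive opinion aggregation strategy, not its exact formulation.

```python
import numpy as np

def opinion_from_evidence(evidence):
    """Form a subjective-logic opinion from non-negative per-class evidence.

    With Dirichlet parameters alpha = evidence + 1 and strength S = sum(alpha),
    belief_k = evidence_k / S and uncertainty = K / S, so the belief masses
    and the uncertainty mass sum to 1.
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.shape[-1]
    S = evidence.sum() + K
    belief = evidence / S
    uncertainty = K / S
    return belief, uncertainty

def average_fuse(opinions):
    """Fuse view-specific opinions by averaging beliefs and uncertainties.

    Averaging (rather than multiplying) keeps the fused uncertainty high
    when views disagree, so conflictive instances remain low-reliability.
    """
    beliefs = np.stack([b for b, _ in opinions])
    uncertainties = np.array([u for _, u in opinions])
    return beliefs.mean(axis=0), uncertainties.mean()

# Two conflictive views over 3 classes: one confident in class 0, the other in class 1.
o1 = opinion_from_evidence([9.0, 0.0, 0.0])
o2 = opinion_from_evidence([0.0, 9.0, 0.0])
b, u = average_fuse([o1, o2])
print(b, u)  # fused belief ≈ [0.375, 0.375, 0.0], uncertainty 0.25
```

Note how the conflict between the two views leaves the fused decision split between classes 0 and 1 with non-trivial uncertainty, which is exactly the behavior the RCML problem asks for: a decision accompanied by an honest reliability.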
Related papers
- CDIMC-net: Cognitive Deep Incomplete Multi-view Clustering Network [53.72046586512026]
We propose a novel incomplete multi-view clustering network, called Cognitive Deep Incomplete Multi-view Clustering Network (CDIMC-net)
It captures the high-level features and local structure of each view by incorporating the view-specific deep encoders and graph embedding strategy into a framework.
Based on human cognition, i.e., learning from easy to hard, it introduces a self-paced strategy to select the most confident samples for model training.
arXiv Detail & Related papers (2024-03-28T15:45:03Z) - Hierarchical Mutual Information Analysis: Towards Multi-view Clustering in The Wild [9.380271109354474]
This work proposes a deep MVC framework where data recovery and alignment are fused in a hierarchically consistent way to maximize the mutual information among different views.
To the best of our knowledge, this could be the first successful attempt to handle the missing and unaligned data problem separately with different learning paradigms.
arXiv Detail & Related papers (2023-10-28T06:43:57Z) - A Novel Approach for Effective Multi-View Clustering with Information-Theoretic Perspective [24.630259061774836]
This study presents a new approach called Sufficient Multi-View Clustering (SUMVC) that examines the multi-view clustering framework from an information-theoretic standpoint.
Firstly, we develop a simple and reliable multi-view clustering method SCMVC that employs variational analysis to generate consistent information.
Secondly, we propose a sufficient representation lower bound to enhance consistent information and minimise unnecessary information among views.
arXiv Detail & Related papers (2023-09-25T09:41:11Z) - Multi-View Class Incremental Learning [57.14644913531313]
Multi-view learning (MVL) has gained great success in integrating information from multiple perspectives of a dataset to improve downstream task performance.
This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), where a single model incrementally classifies new classes from a continual stream of views.
arXiv Detail & Related papers (2023-06-16T08:13:41Z) - Deep Incomplete Multi-view Clustering with Cross-view Partial Sample and Prototype Alignment [50.82982601256481]
We propose a Cross-view Partial Sample and Prototype Alignment Network (CPSPAN) for Deep Incomplete Multi-view Clustering.
Unlike existing contrastive-based methods, we adopt pair-observed data alignment as 'proxy supervised signals' to guide instance-to-instance correspondence construction.
arXiv Detail & Related papers (2023-03-28T02:31:57Z) - Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z) - TSK Fuzzy System Towards Few Labeled Incomplete Multi-View Data Classification [24.01191516774655]
A transductive semi-supervised incomplete multi-view TSK fuzzy system modeling method (SSIMV_TSK) is proposed to address these challenges.
The proposed method integrates missing view imputation, pseudo label learning of unlabeled data, and fuzzy system modeling into a single process to yield a model with interpretable fuzzy rules.
Experimental results on real datasets show that the proposed method significantly outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-10-08T11:41:06Z) - Locality Relationship Constrained Multi-view Clustering Framework [5.586948325488168]
A Locality Relationship Constrained Multi-view Clustering Framework (LRC-MCF) is presented.
It aims to explore the diversity, geometric, consensus and complementary information among different views.
LRC-MCF gives sufficient consideration to the weights of different views in finding the common-view locality structure.
arXiv Detail & Related papers (2021-07-11T15:45:10Z) - Error-Robust Multi-View Clustering: Progress, Challenges and Opportunities [67.54503077766171]
Since label information is often expensive to acquire, multi-view clustering has gained growing interest.
Error-robust multi-view clustering approaches with explicit error removal formulation can be structured into five broad research categories.
This survey summarizes and reviews recent advances in error-robust clustering for multi-view data.
arXiv Detail & Related papers (2021-05-07T04:03:02Z) - Multi-view Low-rank Preserving Embedding: A Novel Method for Multi-view Representation [11.91574721055601]
This paper proposes a novel multi-view learning method, named Multi-view Low-rank Preserving Embedding (MvLPE)
It integrates different views into one centroid view by minimizing a disagreement term based on the distance or similarity matrix among instances.
Experiments on six benchmark datasets demonstrate that the proposed method outperforms its counterparts.
arXiv Detail & Related papers (2020-06-14T12:47:25Z) - Generative Partial Multi-View Clustering [133.36721417531734]
We propose a generative partial multi-view clustering model, named GP-MVC, to address the incomplete multi-view problem.
First, multi-view encoder networks are trained to learn common low-dimensional representations, followed by a clustering layer to capture the consistent cluster structure across multiple views.
Second, view-specific generative adversarial networks are developed to generate the missing data of one view conditioning on the shared representation given by other views.
arXiv Detail & Related papers (2020-03-29T17:48:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.