Multiview Representation Learning from Crowdsourced Triplet Comparisons
- URL: http://arxiv.org/abs/2302.03987v1
- Date: Wed, 8 Feb 2023 10:51:44 GMT
- Title: Multiview Representation Learning from Crowdsourced Triplet Comparisons
- Authors: Xiaotian Lu, Jiyi Li, Koh Takeuchi, Hisashi Kashima
- Abstract summary: Triplet similarity comparison is a type of crowdsourcing task in which crowd workers are asked, "among three given objects, which two are more similar?"
- Score: 23.652378640389756
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Crowdsourcing has been used to collect data at scale in numerous fields.
Triplet similarity comparison is a type of crowdsourcing task, in which crowd
workers are asked the question ``among three given objects, which two are more
similar?'', which is relatively easy for humans to answer. However, the
comparison can sometimes be based on multiple views, i.e., different
independent attributes such as color and shape. Each view may lead to different
results for the same three objects. Although an algorithm was proposed in prior
work to produce multiview embeddings, it involves at least two problems: (1)
the existing algorithm cannot independently predict multiview embeddings for a
new sample, and (2) different people may prefer different views. In this study,
we propose an end-to-end inductive deep learning framework to solve the
multiview representation learning problem. The results show that our proposed
method can obtain multiview embeddings of any object, in which each view
corresponds to an independent attribute of the object. We collected two
datasets from a crowdsourcing platform to experimentally investigate the
performance of our proposed approach compared to conventional baseline methods.
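The core idea can be illustrated with a minimal sketch (this is not the authors' actual model; the embeddings here are random placeholders standing in for learned, per-view representations). If each object has a separate embedding per view, the same triplet question can receive a different answer under each view:

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, n_objects, dim = 2, 5, 4
# embeddings[v][i]: embedding of object i under view v
# (e.g. view 0 ~ "color", view 1 ~ "shape"; random here for illustration)
embeddings = rng.normal(size=(n_views, n_objects, dim))

def closest_pair(view_emb, a, b, c):
    """Answer the triplet question under one view: which two of
    objects a, b, c are most similar (closest in embedding space)?"""
    pairs = [(a, b), (a, c), (b, c)]
    dists = [np.linalg.norm(view_emb[i] - view_emb[j]) for i, j in pairs]
    return pairs[int(np.argmin(dists))]

for v in range(n_views):
    print(f"view {v}: most similar pair of (0, 1, 2) is",
          closest_pair(embeddings[v], 0, 1, 2))
```

Because workers may answer triplets according to different views, a single shared embedding cannot explain all collected comparisons, which is what motivates learning one embedding per view.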
Related papers
- Discriminative Anchor Learning for Efficient Multi-view Clustering [59.11406089896875]
We propose discriminative anchor learning for multi-view clustering (DALMC).
We learn discriminative view-specific feature representations according to the original dataset.
We build anchors from different views based on these representations, which increase the quality of the shared anchor graph.
arXiv Detail & Related papers (2024-09-25T13:11:17Z)
- Identifiability Results for Multimodal Contrastive Learning [72.15237484019174]
We show that it is possible to recover shared factors in a more general setup than the multi-view setting studied previously.
Our work provides a theoretical basis for multimodal representation learning and explains in which settings multimodal contrastive learning can be effective in practice.
arXiv Detail & Related papers (2023-03-16T09:14:26Z)
- Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z)
- Dual Representation Learning for One-Step Clustering of Multi-View Data [30.131568561100817]
We propose a novel one-step multi-view clustering method by exploiting the dual representation of both the common and specific information of different views.
With this framework, representation learning and clustering partition mutually benefit each other, which effectively improves the clustering performance.
arXiv Detail & Related papers (2022-08-30T14:20:26Z)
- Multi-View representation learning in Multi-Task Scene [4.509968166110557]
We propose a novel semi-supervised algorithm, termed Multi-Task Multi-View learning based on Common and Special Features (MTMVCSF).
An anti-noise multi-task multi-view algorithm called AN-MTMVCSF is also proposed, which is strongly adaptable to noisy labels.
The effectiveness of these algorithms is demonstrated by a series of well-designed experiments on both real-world and synthetic data.
arXiv Detail & Related papers (2022-01-15T11:26:28Z)
- Error-Robust Multi-View Clustering: Progress, Challenges and Opportunities [67.54503077766171]
Since label information is often expensive to acquire, multi-view clustering has gained growing interest.
Error-robust multi-view clustering approaches with explicit error removal formulation can be structured into five broad research categories.
This survey summarizes and reviews recent advances in error-robust clustering for multi-view data.
arXiv Detail & Related papers (2021-05-07T04:03:02Z)
- Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos [69.61522804742427]
This paper proposes a self-supervised training framework that learns a common multimodal embedding space.
We extend the concept of instance-level contrastive learning with a multimodal clustering step to capture semantic similarities across modalities.
The resulting embedding space enables retrieval of samples across all modalities, even from unseen datasets and different domains.
arXiv Detail & Related papers (2021-04-26T15:55:01Z)
- Auto-weighted Multi-view Feature Selection with Graph Optimization [90.26124046530319]
We propose a novel unsupervised multi-view feature selection model based on graph learning.
The contributions are threefold: (1) during the feature selection procedure, the consensus similarity graph shared by different views is learned.
Experiments on various datasets demonstrate the superiority of the proposed method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-04-11T03:25:25Z)
- Random Forest for Dissimilarity-based Multi-view Learning [8.185807285320553]
We show that the Random Forest proximity measure can be used to build the dissimilarity representations.
We then propose a Dynamic View Selection method to better combine the view-specific dissimilarity representations.
arXiv Detail & Related papers (2020-07-16T14:52:52Z)
- Multi-view Low-rank Preserving Embedding: A Novel Method for Multi-view Representation [11.91574721055601]
This paper proposes a novel multi-view learning method, named Multi-view Low-rank Preserving Embedding (MvLPE).
It integrates different views into one centroid view by minimizing the disagreement term, based on distance or similarity matrix among instances.
Experiments on six benchmark datasets demonstrate that the proposed method outperforms its counterparts.
arXiv Detail & Related papers (2020-06-14T12:47:25Z)
- Generalized Multi-view Shared Subspace Learning using View Bootstrapping [43.027427742165095]
A key objective in multi-view learning is to model the information common to multiple parallel views of a class of objects/events in order to improve downstream learning tasks.
We present a neural method based on multi-view correlation to capture the information shared across a large number of views by subsampling them in a view-agnostic manner during training.
Experiments on spoken word recognition, 3D object classification and pose-invariant face recognition demonstrate the robustness of view bootstrapping to model a large number of views.
arXiv Detail & Related papers (2020-05-12T20:35:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.