Contrastive Meta-Learning for Partially Observable Few-Shot Learning
- URL: http://arxiv.org/abs/2301.13136v1
- Date: Mon, 30 Jan 2023 18:17:24 GMT
- Title: Contrastive Meta-Learning for Partially Observable Few-Shot Learning
- Authors: Adam Jelley, Amos Storkey, Antreas Antoniou, Sam Devlin
- Abstract summary: We consider the problem of learning a unified representation from partial observations, where useful features may be present in only some of the views.
We approach this through a probabilistic formalism enabling views to map to representations with different levels of uncertainty in different components.
Our approach, Partial Observation Experts Modelling (POEM), then enables us to meta-learn consistent representations from partial observations.
- Score: 5.363168481735953
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many contrastive and meta-learning approaches learn representations by
identifying common features in multiple views. However, the formalism for these
approaches generally assumes features to be shared across views to be captured
coherently. We consider the problem of learning a unified representation from
partial observations, where useful features may be present in only some of the
views. We approach this through a probabilistic formalism enabling views to map
to representations with different levels of uncertainty in different
components; these views can then be integrated with one another through
marginalisation over that uncertainty. Our approach, Partial Observation
Experts Modelling (POEM), then enables us to meta-learn consistent
representations from partial observations. We evaluate our approach on an
adaptation of a comprehensive few-shot learning benchmark, Meta-Dataset, and
demonstrate the benefits of POEM over other meta-learning methods at
representation learning from partial observations. We further demonstrate the
utility of POEM by meta-learning to represent an environment from partial views
observed by an agent exploring the environment.
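The abstract describes mapping each partial view to a representation with a different level of uncertainty in each component, then integrating views by marginalising over that uncertainty. As a rough illustration only (not the authors' exact POEM formulation), the sketch below fuses diagonal-Gaussian view representations by precision weighting, so a view that is uncertain about a component barely influences that component; the function name and the toy numbers are invented for the example.

```python
import torch

def fuse_partial_views(means, log_vars, prior_var=1.0):
    """Precision-weighted fusion of per-view diagonal-Gaussian representations.

    means, log_vars: tensors of shape (n_views, dim), one row per partial view.
    Components a view is uncertain about (large variance, low precision)
    contribute little, so features observed in only some views still
    dominate the corresponding components of the fused representation.
    """
    precisions = torch.exp(-log_vars)            # 1 / sigma^2 per component
    prior_precision = 1.0 / prior_var            # weak isotropic prior keeps the fusion well-defined
    fused_precision = prior_precision + precisions.sum(dim=0)
    fused_mean = (precisions * means).sum(dim=0) / fused_precision
    fused_var = 1.0 / fused_precision
    return fused_mean, fused_var

# Toy example: two partial views of a 4-d representation.
# View 0 is confident about the first two components, view 1 about the last two.
means = torch.tensor([[1.0, 2.0, 0.0, 0.0],
                      [0.0, 0.0, 3.0, 4.0]])
log_vars = torch.tensor([[-4.0, -4.0, 4.0, 4.0],
                         [4.0, 4.0, -4.0, -4.0]])
fused_mean, fused_var = fuse_partial_views(means, log_vars)
print(fused_mean)  # approximately [1, 2, 3, 4]: each view supplies the components it observed
```

In this product-of-Gaussians style combination, the fused precision is the sum of the per-view precisions, which is one common way to realise "integration through marginalisation over uncertainty"; the paper should be consulted for the actual model.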
Related papers
- Meta-PerSER: Few-Shot Listener Personalized Speech Emotion Recognition via Meta-learning [45.925209699021124]
This paper introduces Meta-PerSER, a novel meta-learning framework that personalizes Speech Emotion Recognition (SER). By integrating robust representations from pre-trained self-supervised models, our framework first captures general emotional cues and then fine-tunes itself to personal annotation styles. Experiments on the IEMOCAP corpus demonstrate that Meta-PerSER significantly outperforms baseline methods in both seen and unseen data scenarios.
arXiv Detail & Related papers (2025-05-22T04:44:20Z)
- Balanced Multi-view Clustering [56.17836963920012]
Multi-view clustering (MvC) aims to integrate information from different views to enhance the capability of the model in capturing the underlying data structures.
The widely used joint training paradigm in MvC potentially does not fully leverage the multi-view information.
We propose a novel balanced multi-view clustering (BMvC) method, which introduces a view-specific contrastive regularization (VCR) to modulate the optimization of each view.
arXiv Detail & Related papers (2025-01-05T14:42:47Z)
- Multi-View Causal Representation Learning with Partial Observability [36.37049791756438]
We present a unified framework for studying identifiability of representations learned from simultaneously observed views.
We prove that the information shared across all subsets of any number of views can be learned up to a smooth bijection using contrastive learning.
We experimentally validate our claims on numerical, image, and multi-modal data sets.
arXiv Detail & Related papers (2023-11-07T15:07:08Z)
- MetaViewer: Towards A Unified Multi-View Representation [29.71883878740635]
We propose a novel bi-level-optimization-based multi-view learning framework.
Specifically, we train a meta-learner, namely MetaViewer, to learn fusion and model the view-shared meta representation.
arXiv Detail & Related papers (2023-03-11T07:17:28Z)
- Semantically Consistent Multi-view Representation Learning [11.145085584637744]
We propose a novel Semantically Consistent Multi-view Representation Learning (SCMRL) framework.
SCMRL excavates underlying multi-view semantic consensus information and utilizes this information to guide unified feature representation learning.
Extensive experiments demonstrate its superiority over several state-of-the-art algorithms.
arXiv Detail & Related papers (2023-03-08T04:27:46Z)
- Unifying Vision-Language Representation Space with Single-tower Transformer [29.604520441315135]
We train a model to learn a unified vision-language representation space that encodes both modalities at once in a modality-agnostic manner.
We discover intriguing properties that distinguish OneR from the previous works that learn modality-specific representation spaces.
arXiv Detail & Related papers (2022-11-21T02:34:21Z)
- An Empirical Investigation of Representation Learning for Imitation [76.48784376425911]
Recent work in vision, reinforcement learning, and NLP has shown that auxiliary representation learning objectives can reduce the need for large amounts of expensive, task-specific data.
We propose a modular framework for constructing representation learning algorithms, then use our framework to evaluate the utility of representation learning for imitation.
arXiv Detail & Related papers (2022-05-16T11:23:42Z)
- Evaluation of Self-taught Learning-based Representations for Facial Emotion Recognition [62.30451764345482]
This work describes different strategies to generate unsupervised representations obtained through the concept of self-taught learning for facial emotion recognition.
The idea is to create complementary representations promoting diversity by varying the autoencoders' initialization, architecture, and training data.
Experimental results on Jaffe and Cohn-Kanade datasets using a leave-one-subject-out protocol show that FER methods based on the proposed diverse representations compare favorably against state-of-the-art approaches.
arXiv Detail & Related papers (2022-04-26T22:48:15Z)
- Learning Multimodal VAEs through Mutual Supervision [72.77685889312889]
MEME combines information between modalities implicitly through mutual supervision.
We demonstrate that MEME outperforms baselines on standard metrics across both partial and complete observation schemes.
arXiv Detail & Related papers (2021-06-23T17:54:35Z)
- A Variational Information Bottleneck Approach to Multi-Omics Data Integration [98.6475134630792]
We propose a deep variational information bottleneck (IB) approach for incomplete multi-view observations.
Our method applies the IB framework on marginal and joint representations of the observed views to focus on intra-view and inter-view interactions that are relevant for the target.
Experiments on real-world datasets show that our method consistently achieves gain from data integration and outperforms state-of-the-art benchmarks.
arXiv Detail & Related papers (2021-02-05T06:05:39Z)
- Deep Partial Multi-View Learning [94.39367390062831]
We propose a novel framework termed Cross Partial Multi-View Networks (CPM-Nets).
We first provide a formal definition of completeness and versatility for multi-view representation.
We then theoretically prove the versatility of the learned latent representations.
arXiv Detail & Related papers (2020-11-12T02:29:29Z)
- Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.