Contrastive Neural Processes for Self-Supervised Learning
- URL: http://arxiv.org/abs/2110.13623v1
- Date: Sun, 24 Oct 2021 21:01:27 GMT
- Title: Contrastive Neural Processes for Self-Supervised Learning
- Authors: Konstantinos Kallidromitis, Denis Gudovskiy, Kozuka Kazuki, Ohama Iku,
Luca Rigazio
- Abstract summary: We propose a novel self-supervised learning framework that combines contrastive learning with neural processes.
It relies on recent advances in neural processes to perform time series forecasting.
Unlike previous self-supervised methods, our augmentation pipeline is task-agnostic, enabling our method to perform well across various applications.
- Score: 1.8059331230167266
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent contrastive methods show significant improvement in self-supervised
learning in several domains. In particular, contrastive methods are most
effective where data augmentation can be easily constructed e.g. in computer
vision. However, they are less successful in domains without established data
transformations such as time series data. In this paper, we propose a novel
self-supervised learning framework that combines contrastive learning with
neural processes. It relies on recent advances in neural processes to perform
time series forecasting. This allows us to generate augmented versions of the
data by employing a set of various sampling functions and, hence, to avoid
manually designed augmentations. We extend conventional neural processes and
propose a new contrastive loss to learn time series representations in a
self-supervised setup. Therefore, unlike previous self-supervised methods, our augmentation
pipeline is task-agnostic, enabling our method to perform well across various
applications. In particular, a ResNet with a linear classifier trained using
our approach is able to outperform state-of-the-art techniques across
industrial, medical, and audio datasets, improving accuracy by over 10% on
periodic ECG data. We further demonstrate that our self-supervised
representations are more efficient in the latent space, improving multiple
clustering indices, and that fine-tuning our method on 10% of the labels
achieves results competitive with fully-supervised learning.
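The abstract only outlines the training recipe: a neural process, conditioned on randomly sampled context points, forecasts the full series, and these reconstructions act as augmented views that a contrastive loss pulls together. The PyTorch-style sketch below illustrates that loop under stated assumptions; the `neural_process` forecaster interface, the encoder, and the NT-Xent-style loss are generic stand-ins rather than the authors' exact architecture or objective.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """NT-Xent-style contrastive loss between two batches of view embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                 # (2B, dim)
    sim = z @ z.t() / temperature                  # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))              # exclude self-pairs
    b = z1.size(0)
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)]).to(z.device)
    return F.cross_entropy(sim, targets)           # positive = matching view index

def np_views(neural_process, series, n_context=32):
    """Two views of `series` (batch, length): the neural-process forecaster is
    conditioned on two different random context subsets and asked to reconstruct
    the full series. The forecaster call signature is a hypothetical stand-in."""
    t = torch.arange(series.size(1), dtype=torch.float32)
    views = []
    for _ in range(2):
        ctx = torch.randperm(series.size(1))[:n_context]   # a random sampling function
        views.append(neural_process(t[ctx], series[:, ctx], t))
    return views

# One self-supervised step (encoder could be a 1D ResNet; both modules are assumptions):
# v1, v2 = np_views(neural_process, batch_series)
# loss = nt_xent(encoder(v1), encoder(v2))
```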
Related papers
- Contrastive-Adversarial and Diffusion: Exploring pre-training and fine-tuning strategies for sulcal identification [3.0398616939692777]
Techniques like adversarial learning, contrastive learning, diffusion denoising learning, and ordinary reconstruction learning have become standard.
The study aims to elucidate the advantages of pre-training techniques and fine-tuning strategies to enhance the learning process of neural networks.
arXiv Detail & Related papers (2024-05-29T15:44:51Z) - Finding Order in Chaos: A Novel Data Augmentation Method for Time Series
in Contrastive Learning [26.053496478247236]
We propose a novel data augmentation method for quasi-periodic time-series tasks.
Our method builds upon the well-known mixup technique by incorporating a novel approach.
We evaluate our proposed method on three time-series tasks, including heart rate estimation, human activity recognition, and cardiovascular disease detection.
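The summary names mixup as the starting point without restating it; for reference, a vanilla mixup step for a batch of time series looks like the minimal sketch below (the paper's quasi-periodic, phase-aware modifications are not reproduced here).

```python
import torch

def mixup_time_series(x, y, alpha=0.2):
    """Vanilla mixup for a batch of time series x: (batch, length, channels), labels y.

    A Beta(alpha, alpha) coefficient blends each series with a random partner;
    labels are blended with the same weight.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]
```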
arXiv Detail & Related papers (2023-09-23T17:42:13Z) - Reinforcement Learning Based Multi-modal Feature Fusion Network for
Novel Class Discovery [47.28191501836041]
In this paper, we employ a Reinforcement Learning framework to simulate the cognitive processes of humans.
We also deploy a Member-to-Leader Multi-Agent framework to extract and fuse features from multi-modal information.
We demonstrate the performance of our approach in both the 3D and 2D domains by employing the OS-MN40, OS-MN40-Miss, and Cifar10 datasets.
arXiv Detail & Related papers (2023-08-26T07:55:32Z) - Time Series Contrastive Learning with Information-Aware Augmentations [57.45139904366001]
A key component of contrastive learning is selecting appropriate augmentations that impose suitable priors so as to construct feasible positive samples.
How to find the desired augmentations of time series data that are meaningful for given contrastive learning tasks and datasets remains an open question.
We propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning.
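How "adaptively selects" is realized is not spelled out in the summary; one common way to make a choice over a discrete set of augmentations differentiable is to learn selection logits and relax them with Gumbel-softmax, as in the illustrative sketch below. The candidate transforms and the relaxation are assumptions, not InfoTS's actual information-aware criterion.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAugmenter(nn.Module):
    """Soft, learnable choice over candidate time-series augmentations.

    The candidate transforms and the Gumbel-softmax relaxation are illustrative;
    the objective used to train the selection logits (information-aware in the
    paper) is not shown here.
    """
    def __init__(self):
        super().__init__()
        self.candidates = [
            lambda x: x + 0.05 * torch.randn_like(x),                  # jitter
            lambda x: x * (1 + 0.1 * torch.randn(x.size(0), 1, 1)),    # scaling
            lambda x: x * (torch.rand_like(x) > 0.1).float(),          # random masking
        ]
        self.logits = nn.Parameter(torch.zeros(len(self.candidates)))

    def forward(self, x):                                # x: (batch, length, channels)
        w = F.gumbel_softmax(self.logits, tau=1.0)       # relaxed one-hot over candidates
        views = torch.stack([aug(x) for aug in self.candidates])   # (K, B, L, C)
        return (w.view(-1, 1, 1, 1) * views).sum(dim=0)
```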
arXiv Detail & Related papers (2023-03-21T15:02:50Z) - Learning Large-scale Neural Fields via Context Pruned Meta-Learning [60.93679437452872]
We introduce an efficient optimization-based meta-learning technique for large-scale neural field training.
We show how gradient re-scaling at meta-test time allows the learning of extremely high-quality neural fields.
Our framework is model-agnostic, intuitive, straightforward to implement, and shows significant reconstruction improvements for a wide range of signals.
arXiv Detail & Related papers (2023-02-01T17:32:16Z) - LEAVES: Learning Views for Time-Series Data in Contrastive Learning [16.84326709739788]
We propose a module for automating view generation for time-series data in contrastive learning, named learning views for time-series data (LEAVES).
The proposed method is more effective in finding reasonable views and performs downstream tasks better than the baselines.
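One concrete reading of "learning views" is to give each transform trainable parameters and use the reparameterization trick so gradients from the contrastive objective reach them; the single-transform sketch below illustrates that idea and is not the paper's actual module.

```python
import torch
import torch.nn as nn

class LearnableJitter(nn.Module):
    """One learnable view-generating transform: Gaussian jitter whose standard
    deviation is trainable. The reparameterization (sigma * eps) lets gradients
    from a downstream contrastive loss reach the parameter; LEAVES composes
    several such transforms, whereas this is a single-transform illustration."""
    def __init__(self, init_log_sigma=-2.0):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.tensor(init_log_sigma))

    def forward(self, x):
        return x + self.log_sigma.exp() * torch.randn_like(x)
```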
arXiv Detail & Related papers (2022-10-13T20:18:22Z) - Invariance Learning in Deep Neural Networks with Differentiable Laplace
Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z) - BERT WEAVER: Using WEight AVERaging to enable lifelong learning for
transformer-based models in biomedical semantic search engines [49.75878234192369]
We present WEAVER, a simple, yet efficient post-processing method that infuses old knowledge into the new model.
We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once.
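Weight averaging as a post-processing step amounts to blending two parameter sets of the same architecture; a minimal sketch over PyTorch state dicts is shown below, with the 0.5 blend weight as a placeholder rather than the paper's prescribed schedule.

```python
import copy
import torch

def weight_average(old_model, new_model, beta=0.5):
    """Blend two models with identical architectures parameter-by-parameter.

    beta is the weight on the old model's parameters; non-float entries
    (e.g. integer counters in buffers) are taken from the new model unchanged.
    """
    old_sd, new_sd = old_model.state_dict(), new_model.state_dict()
    merged = copy.deepcopy(new_sd)
    for name, new_param in new_sd.items():
        if torch.is_floating_point(new_param):
            merged[name] = beta * old_sd[name] + (1.0 - beta) * new_param
    return merged

# Usage: new_model.load_state_dict(weight_average(old_model, new_model))
```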
arXiv Detail & Related papers (2022-02-21T10:34:41Z) - Multi-Pretext Attention Network for Few-shot Learning with
Self-supervision [37.6064643502453]
We propose a novel augmentation-free method for self-supervised learning (GC), which does not rely on any auxiliary sample.
We also propose the Multi-pretext Attention Network (MAN), which exploits a specific attention mechanism to combine traditional augmentation-based methods with our GC.
We evaluate our MAN extensively on miniImageNet and tieredImageNet datasets and the results demonstrate that the proposed method outperforms the state-of-the-art (SOTA) relevant methods.
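The summary does not describe the attention mechanism itself; a generic way to fuse features from several pretext branches is to score each branch and take a softmax-weighted sum, as in the hedged sketch below (branch count, feature dimension, and the linear scoring head are assumptions, not the paper's exact design).

```python
import torch
import torch.nn as nn

class PretextAttention(nn.Module):
    """Fuse feature vectors from several pretext-task branches with a learned
    softmax-weighted sum over branches."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, branch_features):                      # list of (batch, dim) tensors
        stacked = torch.stack(branch_features, dim=1)        # (batch, K, dim)
        weights = torch.softmax(self.score(stacked), dim=1)  # (batch, K, 1)
        return (weights * stacked).sum(dim=1)                # (batch, dim)
```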
arXiv Detail & Related papers (2021-03-10T10:48:37Z) - Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We further propose to apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z) - Automatic Data Augmentation via Deep Reinforcement Learning for
Effective Kidney Tumor Segmentation [57.78765460295249]
We develop a novel automatic learning-based data augmentation method for medical image segmentation.
In our method, we combine the data augmentation module and the subsequent segmentation module in an end-to-end training manner with a consistency loss.
We extensively evaluated our method on CT kidney tumor segmentation, which validated its promising results.
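A consistency loss in this setting typically penalizes disagreement between segmentations of an image and of its augmented counterpart; the sketch below shows one generic form of such a term (the distance function and the intensity-only assumption are illustrative choices, not the paper's exact objective).

```python
import torch.nn.functional as F

def consistency_loss(seg_model, image, augmented_image):
    """Penalize disagreement between the segmentation of an image and of its
    augmented counterpart. Assumes intensity-style augmentations that do not
    move pixels; geometric transforms would also have to be applied to the
    reference prediction."""
    p_ref = F.softmax(seg_model(image), dim=1).detach()      # (B, classes, H, W)
    p_aug = F.softmax(seg_model(augmented_image), dim=1)
    return F.mse_loss(p_aug, p_ref)

# Sketch of the total objective: supervised Dice/cross-entropy on labeled scans
# plus lambda * consistency_loss on scans passed through the learned augmenter.
```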
arXiv Detail & Related papers (2020-02-22T14:10:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.