CroSSL: Cross-modal Self-Supervised Learning for Time-series through
Latent Masking
- URL: http://arxiv.org/abs/2307.16847v3
- Date: Mon, 19 Feb 2024 11:59:59 GMT
- Title: CroSSL: Cross-modal Self-Supervised Learning for Time-series through
Latent Masking
- Authors: Shohreh Deldari, Dimitris Spathis, Mohammad Malekzadeh, Fahim Kawsar,
Flora Salim, Akhil Mathur
- Abstract summary: CroSSL allows for handling missing modalities and end-to-end cross-modal learning.
We evaluate our method on a wide range of data, including motion sensors.
- Score: 11.616031590118014
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Limited availability of labeled data for machine learning on multimodal
time-series extensively hampers progress in the field. Self-supervised learning
(SSL) is a promising approach to learning data representations without relying
on labels. However, existing SSL methods require expensive computations of
negative pairs and are typically designed for single modalities, which limits
their versatility. We introduce CroSSL (Cross-modal SSL), which puts forward
two novel concepts: masking intermediate embeddings produced by
modality-specific encoders, and their aggregation into a global embedding
through a cross-modal aggregator that can be fed to downstream classifiers.
CroSSL allows for handling missing modalities and end-to-end cross-modal
learning without requiring prior data preprocessing for handling missing inputs
or negative-pair sampling for contrastive learning. We evaluate our method on a
wide range of data, including motion sensors such as accelerometers or
gyroscopes and biosignals (heart rate, electroencephalograms, electromyograms,
electrooculograms, and electrodermal activity) to investigate the impact of masking
ratios and masking strategies for various data types and the robustness of the
learned representations to missing data. Overall, CroSSL outperforms previous
SSL and supervised benchmarks using minimal labeled data, and also sheds light
on how latent masking can improve cross-modal learning. Our code is
open-sourced at https://github.com/dr-bell/CroSSL.
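To make the two concepts concrete, below is a minimal, hypothetical PyTorch sketch of latent masking and cross-modal aggregation as described in the abstract. The encoder architecture, dimensions, masking strategy, and class/parameter names (`CroSSLSketch`, `mask_ratio`) are illustrative assumptions, not the authors' implementation; see the repository above for that. The SSL training objective (which, per the abstract, avoids negative-pair sampling) is omitted.
```python
import torch
import torch.nn as nn

class CroSSLSketch(nn.Module):
    """Illustrative forward pass: per-modality encoders -> latent masking
    -> cross-modal aggregation into one global embedding."""

    def __init__(self, n_modalities, in_channels=3, latent_dim=128, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        # One small 1D-CNN encoder per modality (hypothetical architecture).
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
                nn.Flatten(),
                nn.Linear(64, latent_dim),
            )
            for _ in range(n_modalities)
        ])
        # Cross-modal aggregator: one self-attention layer over modalities.
        self.aggregator = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=4, batch_first=True
        )

    def forward(self, xs):
        # xs[i]: (batch, in_channels, time) window from modality i.
        z = torch.stack([enc(x) for enc, x in zip(self.encoders, xs)], dim=1)
        # z: (batch, n_modalities, latent_dim) -- the intermediate embeddings.
        if self.training:
            # Latent masking: zero out a random subset of modality embeddings.
            # A modality missing at inference can be handled the same way.
            drop = torch.rand(z.shape[:2], device=z.device) < self.mask_ratio
            z = z.masked_fill(drop.unsqueeze(-1), 0.0)
        # Global embedding that can be fed to downstream classifiers.
        return self.aggregator(z).mean(dim=1)

# Example: accelerometer + gyroscope windows, batch of 8, 100 time steps.
model = CroSSLSketch(n_modalities=2)
emb = model([torch.randn(8, 3, 100), torch.randn(8, 3, 100)])  # (8, 128)
```
Because masking operates on latent embeddings rather than raw inputs, a missing modality reduces to an always-masked slot, which is how the sketch reflects the abstract's claim of handling missing modalities without prior data preprocessing.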
Related papers
- A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification [51.35500308126506]
Self-supervised learning (SSL) is a machine learning approach where the data itself provides supervision, eliminating the need for external labels.
We study how classification-based evaluation protocols for SSL correlate and how well they predict downstream performance on different dataset types.
arXiv Detail & Related papers (2024-07-16T23:17:36Z)
- Semi-Supervised Class-Agnostic Motion Prediction with Pseudo Label Regeneration and BEVMix [59.55173022987071]
We study the potential of semi-supervised learning for class-agnostic motion prediction.
Our framework adopts a consistency-based self-training paradigm, enabling the model to learn from unlabeled data.
Our method exhibits comparable performance to weakly and some fully supervised methods.
arXiv Detail & Related papers (2023-12-13T09:32:50Z)
- Self-supervised TransUNet for Ultrasound regional segmentation of the distal radius in children [0.6291443816903801]
This paper investigates the feasibility of deploying the Masked Autoencoder for SSL (SSL-MAE) of TransUNet for segmenting bony regions from children's wrist ultrasound scans.
arXiv Detail & Related papers (2023-09-18T05:23:33Z)
- A Survey on Self-supervised Learning: Algorithms, Applications, and Future Trends [82.64268080902742]
Self-supervised learning (SSL) aims to learn discriminative features from unlabeled data without relying on human-annotated labels.
SSL has garnered significant attention recently, leading to the development of numerous related algorithms.
This paper presents a review of diverse SSL methods, encompassing algorithmic aspects, application domains, three key trends, and open research questions.
arXiv Detail & Related papers (2023-01-13T14:41:05Z)
- Benchmark for Uncertainty & Robustness in Self-Supervised Learning [0.0]
Self-Supervised Learning is crucial for real-world applications, especially in data-hungry domains such as healthcare and self-driving cars.
In this paper, we explore variants of SSL methods, including Jigsaw Puzzles, Context, Rotation, and Geometric Transformations Prediction for vision, as well as BERT and GPT for language tasks.
Our goal is to create a benchmark with outputs from experiments, providing a starting point for new SSL methods in Reliable Machine Learning.
arXiv Detail & Related papers (2022-12-23T15:46:23Z)
- Self-Supervised PPG Representation Learning Shows High Inter-Subject Variability [3.8036939971290007]
We propose a Self-Supervised Learning (SSL) method with a pretext task of signal reconstruction to learn an informative generalized PPG representation.
Results show that in a very limited labeled-data setting (10 samples per class or fewer), using SSL is beneficial.
SSL may pave the way for the broader use of machine learning models on PPG data in label-scarce regimes.
arXiv Detail & Related papers (2022-12-07T19:02:45Z)
- OpenLDN: Learning to Discover Novel Classes for Open-World Semi-Supervised Learning [110.40285771431687]
Semi-supervised learning (SSL) is one of the dominant approaches to address the annotation bottleneck of supervised learning.
Recent SSL methods can effectively leverage a large repository of unlabeled data to improve performance while relying on a small set of labeled data.
This work introduces OpenLDN that utilizes a pairwise similarity loss to discover novel classes.
arXiv Detail & Related papers (2022-07-05T18:51:05Z) - Collaborative Intelligence Orchestration: Inconsistency-Based Fusion of
Semi-Supervised Learning and Active Learning [60.26659373318915]
Active learning (AL) and semi-supervised learning (SSL) are two effective, but often isolated, means to alleviate the data-hungry problem.
We propose an innovative inconsistency-based virtual adversarial algorithm to further investigate the potential superiority of SSL-AL.
Two real-world case studies visualize the practical industrial value of applying and deploying the proposed data sampling algorithm.
arXiv Detail & Related papers (2022-06-07T13:28:43Z) - DATA: Domain-Aware and Task-Aware Pre-training [94.62676913928831]
We present DATA, a simple yet effective NAS approach specialized for self-supervised learning (SSL).
Our method achieves promising results across a wide range of computation costs on downstream tasks, including image classification, object detection and semantic segmentation.
arXiv Detail & Related papers (2022-03-17T02:38:49Z)
- Semi-supervised Medical Image Classification with Global Latent Mixing [8.330337646455957]
Computer-aided diagnosis via deep learning relies on large-scale annotated data sets.
Semi-supervised learning mitigates this challenge by leveraging unlabeled data.
We present a novel SSL approach that trains the neural network on linear mixing of labeled and unlabeled data (a generic sketch follows this list).
arXiv Detail & Related papers (2020-05-22T14:49:13Z)
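As an illustration of the linear-mixing idea in the last entry, the following is a generic mixup-style sketch in PyTorch. It is a common formulation under assumed tensor shapes, not that paper's exact "global latent mixing" procedure; the function name `mix` and the `alpha` default are illustrative.
```python
import torch

def mix(x_labeled, x_unlabeled, alpha=0.75):
    """Linearly interpolate a labeled batch with an unlabeled batch (mixup-style)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    lam = torch.max(lam, 1 - lam)  # keep the labeled sample dominant
    return lam * x_labeled + (1 - lam) * x_unlabeled, lam

# Mixed inputs are trained against correspondingly mixed targets,
# e.g. lam * y_true + (1 - lam) * y_pseudo for the unlabeled branch.
x_mix, lam = mix(torch.randn(16, 3, 224, 224), torch.randn(16, 3, 224, 224))
```
The same interpolation can be applied to latent representations instead of raw inputs, which connects this line of work to the latent-space operations explored by CroSSL.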