Accelerating Self-Supervised Learning via Efficient Training Strategies
- URL: http://arxiv.org/abs/2212.05611v1
- Date: Sun, 11 Dec 2022 21:49:39 GMT
- Title: Accelerating Self-Supervised Learning via Efficient Training Strategies
- Authors: Mustafa Taha Koçyiğit, Timothy M. Hospedales, Hakan Bilen
- Abstract summary: The time needed to train self-supervised deep networks remains an order of magnitude larger than that of their supervised counterparts.
Motivated by these issues, this paper investigates reducing the training time of recent self-supervised methods.
- Score: 98.26556609110992
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently the focus of the computer vision community has shifted from
expensive supervised learning towards self-supervised learning of visual
representations. While the performance gap between supervised and
self-supervised learning has been narrowing, the time needed to train
self-supervised deep networks remains an order of magnitude larger than that of
their supervised counterparts, which hinders progress, imposes a carbon cost,
and limits societal benefits to institutions with substantial resources.
Motivated by these issues, this paper investigates reducing the training time
of recent self-supervised methods through various model-agnostic strategies
that have not previously been applied to this problem. In particular, we study
three strategies: an extendable cyclic learning rate schedule, a matched
progressive schedule of augmentation magnitude and image resolution, and a hard
positive mining strategy based on augmentation difficulty. We show that the
three strategies combined yield up to a 2.7-fold speed-up in the training time
of several self-supervised methods while retaining performance comparable to
the standard self-supervised learning setting.
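To make the first two strategies concrete, here is a minimal Python sketch of an extendable cyclic learning rate schedule and a matched progressive schedule of augmentation magnitude and image resolution. The function names, cycle length, resolution range, and schedule shapes are illustrative assumptions, not the authors' exact implementation.

```python
import math

def cyclic_lr(step: int, cycle_steps: int = 1000,
              base_lr: float = 1e-3, min_lr: float = 1e-5) -> float:
    """Cosine learning rate cycle that restarts every `cycle_steps` steps.
    Because every cycle ends at a low learning rate, training can be
    extended by simply running more cycles (the 'extendable' property)."""
    t = (step % cycle_steps) / cycle_steps
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * t))

def progressive_schedule(step: int, total_steps: int,
                         min_res: int = 96, max_res: int = 224,
                         max_magnitude: float = 1.0):
    """Ramp image resolution and augmentation magnitude together, so early
    training sees small, weakly augmented images and both grow in lockstep
    (a 'matched' schedule). Returns (resolution, magnitude)."""
    p = min(step / total_steps, 1.0)
    res = int(min_res + p * (max_res - min_res))
    res -= res % 32  # keep resolution divisible by 32 for typical backbones
    return res, p * max_magnitude
```

Most of the wall-clock savings under such a schedule would come from the early steps, where forward and backward passes on small images are cheap.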
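The third strategy can be sketched similarly. The snippet below assumes a contrastive-style setup in which several augmented views of the same image are embedded and the view least similar to the anchor is kept as the hardest positive; this selection criterion is a plausible reading of "hard positive mining based on augmentation difficulty", not the paper's exact rule.

```python
import torch

def hardest_positive(anchor: torch.Tensor, views: torch.Tensor) -> torch.Tensor:
    """anchor: (D,) embedding of one view of an image; views: (K, D)
    embeddings of K other augmented views of the same image. Returns the
    view with the lowest cosine similarity to the anchor."""
    sims = torch.cosine_similarity(anchor.unsqueeze(0), views, dim=1)  # (K,)
    return views[sims.argmin()]

# Usage sketch: embed K candidate views, keep only the hardest one, and feed
# that pair to the contrastive loss so each step trains on a more
# informative positive.
anchor = torch.randn(128)
views = torch.randn(4, 128)
hard_view = hardest_positive(anchor, views)
```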
Related papers
- Unsupervised Temporal Action Localization via Self-paced Incremental Learning [57.55765505856969]
We present a novel self-paced incremental learning model to enhance clustering and localization training simultaneously.
We design two (constant- and variable-speed) incremental instance learning strategies for easy-to-hard model training, thus ensuring the reliability of the resulting video pseudo-labels.
arXiv Detail & Related papers (2023-12-12T16:00:55Z)
- Remote Heart Rate Monitoring in Smart Environments from Videos with Self-supervised Pre-training [28.404118669462772]
We introduce a solution that utilizes self-supervised contrastive learning for remote photoplethysmography (PPG) estimation and heart rate monitoring.
We propose the use of 3 spatial and 3 temporal augmentations for training an encoder through a contrastive framework, and then use the late-intermediate embeddings of the encoder for remote PPG and heart rate estimation.
arXiv Detail & Related papers (2023-10-23T22:41:04Z)
- A Study of Forward-Forward Algorithm for Self-Supervised Learning [65.268245109828]
We study the performance of forward-forward vs. backpropagation for self-supervised representation learning.
Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-supervised) training, its transfer performance lags significantly behind in all the studied settings.
arXiv Detail & Related papers (2023-09-21T10:14:53Z)
- Domain-Aware Augmentations for Unsupervised Online General Continual Learning [7.145581090959242]
This paper proposes a novel approach that enhances memory usage for contrastive learning in Unsupervised Online General Continual Learning (UOGCL).
Our proposed method is simple yet effective and achieves state-of-the-art results compared to other unsupervised approaches in all considered setups.
Our domain-aware augmentation procedure can be adapted to other replay-based methods, making it a promising strategy for continual learning.
arXiv Detail & Related papers (2023-09-13T11:45:21Z)
- Curriculum Learning in Job Shop Scheduling using Reinforcement Learning [0.3867363075280544]
Deep Reinforcement Learning (DRL) dynamically adjusts an agent's planning strategy in response to difficult instances.
We further improve DRL as an underlying method by actively incorporating the variability of difficulty within the same problem size into the design of the learning process.
arXiv Detail & Related papers (2023-05-17T13:15:27Z)
- Persistent Reinforcement Learning via Subgoal Curricula [114.83989499740193]
Value-accelerated Persistent Reinforcement Learning (VaPRL) generates a curriculum of initial states.
VaPRL reduces the interventions required by three orders of magnitude compared to episodic reinforcement learning.
arXiv Detail & Related papers (2021-07-27T16:39:45Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- Understand and Improve Contrastive Learning Methods for Visual Representation: A Review [1.4650545418986058]
A promising alternative, self-supervised learning, has gained popularity because of its potential to learn effective data representations without manual labeling.
This literature review aims to provide an up-to-date analysis of the efforts of researchers to understand the key components and the limitations of self-supervised learning.
arXiv Detail & Related papers (2021-06-06T21:59:49Z)
- Self-supervised Video Object Segmentation [76.83567326586162]
The objective of this paper is self-supervised representation learning, with the goal of solving semi-supervised video object segmentation (a.k.a. dense tracking).
We make the following contributions: (i) we propose to improve the existing self-supervised approach with a simple yet more effective memory mechanism for long-term correspondence matching; (ii) by augmenting the self-supervised approach with an online adaptation module, our method successfully alleviates tracker drift caused by spatial-temporal discontinuity; (iii) we demonstrate state-of-the-art results among the self-supervised approaches on DAVIS-2017 and YouTube-VOS.
arXiv Detail & Related papers (2020-06-22T17:55:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.