What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks?
- URL: http://arxiv.org/abs/2202.05998v1
- Date: Sat, 12 Feb 2022 06:10:15 GMT
- Title: What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks?
- Authors: Hangwei Qian, Tian Tian, Chunyan Miao
- Abstract summary: We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
- Score: 59.51457877578138
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Self-supervised learning establishes a new paradigm of learning
representations with far fewer or even no label annotations. Recently, there
has been remarkable progress on large-scale contrastive learning models which
require substantial computing resources, yet such models are not practically
optimal for small-scale tasks. To fill the gap, we aim to study contrastive
learning on the wearable-based activity recognition task. Specifically, we
conduct an in-depth study of contrastive learning from both algorithmic-level
and task-level perspectives. For algorithmic-level analysis, we decompose
contrastive models into several key components and conduct rigorous
experimental evaluations to better understand the efficacy and rationale behind
contrastive learning. More importantly, for task-level analysis, we show that
the wearable-based signals bring unique challenges and opportunities to
existing contrastive models, which cannot be readily solved by existing
algorithms. Our thorough empirical studies suggest important practices and shed
light on future research challenges. In addition, this paper presents an
open-source PyTorch library, \texttt{CL-HAR}, which can serve as a practical
tool for researchers. The library is highly modularized and easy to use, which
opens up avenues for exploring novel contrastive models quickly in the future.
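To make the component-wise view of contrastive models concrete, below is a minimal PyTorch sketch of a SimCLR-style pipeline on wearable sensor windows: two stochastic augmentations (jittering and scaling), a small 1D-CNN encoder with a projection head, and an NT-Xent loss. All names here (`jitter`, `scaling`, `Encoder`, `ntxent_loss`) are illustrative assumptions for this sketch and are not the CL-HAR API; consult the library itself for its actual modules and interfaces.

```python
# Minimal, self-contained sketch of SimCLR-style contrastive learning on
# wearable sensor windows. Illustrative only; not the CL-HAR API.
import torch
import torch.nn as nn
import torch.nn.functional as F


def jitter(x, sigma=0.05):
    # Add small Gaussian noise to each sample; x has shape (batch, time, channels).
    return x + sigma * torch.randn_like(x)


def scaling(x, sigma=0.1):
    # Rescale each channel of each sample by a random factor close to 1.
    factor = 1.0 + sigma * torch.randn(x.size(0), 1, x.size(2), device=x.device)
    return x * factor


class Encoder(nn.Module):
    # Small 1D-CNN backbone followed by a projection head.
    def __init__(self, in_channels=3, feat_dim=128, proj_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, feat_dim, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, proj_dim)
        )

    def forward(self, x):
        # x: (batch, time, channels); Conv1d expects (batch, channels, time).
        h = self.backbone(x.transpose(1, 2))
        return self.projector(h)


def ntxent_loss(z1, z2, temperature=0.1):
    # NT-Xent (normalized temperature-scaled cross-entropy) over 2N embeddings.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature                       # (2N, 2N) cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))               # exclude self-similarity
    # The positive for sample i is its other augmented view at index (i + n) mod 2N.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    encoder = Encoder()
    x = torch.randn(32, 128, 3)          # 32 windows, 128 timesteps, 3 IMU axes
    loss = ntxent_loss(encoder(jitter(x)), encoder(scaling(x)))
    loss.backward()
    print(float(loss))
```

One design point the sketch highlights: the augmentation pair is the component most specific to wearable signals, since image-style augmentations do not transfer directly to IMU time series; jittering and scaling are used here purely as plausible examples of signal-level transforms.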
Related papers
- Heterogeneous Contrastive Learning for Foundation Models and Beyond [73.74745053250619]
In the era of big data and Artificial Intelligence, an emerging paradigm is to utilize contrastive self-supervised learning to model large-scale heterogeneous data.
This survey critically evaluates the current landscape of heterogeneous contrastive learning for foundation models.
arXiv Detail & Related papers (2024-03-30T02:55:49Z)
- Frugal Reinforcement-based Active Learning [12.18340575383456]
We propose a novel active learning approach for label-efficient training.
The proposed method is iterative and aims at minimizing a constrained objective function that mixes diversity, representativity and uncertainty criteria.
We also introduce a novel weighting mechanism based on reinforcement learning, which adaptively balances these criteria at each training iteration.
arXiv Detail & Related papers (2022-12-09T14:17:45Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the light-weight active learner which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research [96.53307645791179]
We introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks.
Despite being limited to classification, the resulting stream has a rich diversity of tasks from OCR, to texture analysis, scene recognition, and so forth.
Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks.
arXiv Detail & Related papers (2022-11-15T18:57:46Z)
- Exploring Task Difficulty for Few-Shot Relation Extraction [22.585574542329677]
Few-shot relation extraction (FSRE) focuses on recognizing novel relations by learning with merely a handful of annotated instances.
We introduce a novel approach based on contrastive learning that learns better representations by exploiting relation label information.
arXiv Detail & Related papers (2021-09-12T09:40:33Z)
- Few-Cost Salient Object Detection with Adversarial-Paced Learning [95.0220555274653]
This paper proposes to learn the effective salient object detection model based on the manual annotation on a few training images only.
We name this task as the few-cost salient object detection and propose an adversarial-paced learning (APL)-based framework to facilitate the few-cost learning scenario.
arXiv Detail & Related papers (2021-04-05T14:15:49Z)
- Unsupervised Learning for Robust Fitting: A Reinforcement Learning Approach [25.851792661168698]
We introduce a novel framework that learns to solve robust model fitting.
Unlike other methods, our work is agnostic to the underlying input features.
We empirically show that our method outperforms existing learning approaches.
arXiv Detail & Related papers (2021-03-05T07:14:00Z)
- Learning Purified Feature Representations from Task-irrelevant Labels [18.967445416679624]
We propose a novel learning framework called PurifiedLearning to exploit task-irrelevant features extracted from task-irrelevant labels.
Our work is built on solid theoretical analysis and extensive experiments, which demonstrate the effectiveness of PurifiedLearning.
arXiv Detail & Related papers (2021-02-22T12:50:49Z)
- Bayesian active learning for production, a systematic study and a reusable library [85.32971950095742]
In this paper, we analyse the main drawbacks of current active learning techniques.
We do a systematic study on the effects of the most common issues of real-world datasets on the deep active learning process.
We derive two techniques that can speed up the active learning loop: partial uncertainty sampling and larger query size.
arXiv Detail & Related papers (2020-06-17T14:51:11Z)