Sense and Learn: Self-Supervision for Omnipresent Sensors
- URL: http://arxiv.org/abs/2009.13233v2
- Date: Mon, 6 Sep 2021 14:21:38 GMT
- Title: Sense and Learn: Self-Supervision for Omnipresent Sensors
- Authors: Aaqib Saeed, Victor Ungureanu, Beat Gfeller
- Abstract summary: We present a framework named Sense and Learn for representation or feature learning from raw sensory data.
It consists of several auxiliary tasks that can learn high-level and broadly useful features entirely from unannotated data without any human involvement in the tedious labeling process.
Our methodology achieves results that are competitive with supervised approaches and, in most cases, closes the gap by fine-tuning the network on the downstream task.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning general-purpose representations from multisensor data produced by
the omnipresent sensing systems (or IoT in general) has numerous applications
in diverse use cases. Existing purely supervised end-to-end deep learning
techniques depend on the availability of a massive amount of well-curated data,
which is notoriously difficult to acquire yet required to achieve a sufficient
level of generalization on a task of interest. In this work, we leverage the
self-supervised learning paradigm towards realizing the vision of continual
learning from unlabeled inputs. We present a generalized framework named Sense
and Learn for representation or feature learning from raw sensory data. It
consists of several auxiliary tasks that can learn high-level and broadly
useful features entirely from unannotated data without any human involvement in
the tedious labeling process. We demonstrate the efficacy of our approach on
several publicly available datasets from different domains and in various
settings, including linear separability, semi-supervised or few-shot learning,
and transfer learning. Our methodology achieves results that are competitive
with the supervised approaches and, in most cases, closes the gap when the
network is fine-tuned on the downstream task. In particular, we show that
the self-supervised network can be utilized as initialization to significantly
boost the performance in a low-data regime with as few as 5 labeled instances
per class, which is of high practical importance to real-world problems.
Likewise, the learned representations with self-supervision are found to be
highly transferable between related datasets, even when few labeled instances
are available from the target domains. The self-learning nature of our
methodology opens up exciting possibilities for on-device continual learning.
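The auxiliary tasks described above derive training labels from the unlabeled data itself. As a minimal sketch of that idea (not the paper's actual code, and with illustrative function and transformation names), one common family of such tasks is transformation recognition: apply a known transformation to a raw sensor window and ask the network to predict which one was applied.

```python
import random

def transformations():
    """Signal-level transformations; recognizing which one was applied
    is the self-supervised (pretext) objective."""
    return [
        ("identity", lambda w: w),
        ("negate",   lambda w: [-x for x in w]),
        ("reverse",  lambda w: w[::-1]),
        ("scale",    lambda w: [1.5 * x for x in w]),
        ("jitter",   lambda w: [x + random.gauss(0, 0.05) for x in w]),
    ]

def make_pretext_dataset(windows, seed=0):
    """Turn unlabeled sensor windows into (input, pretext_label) pairs.
    No human labeling is involved: the label is the transformation index."""
    rng = random.Random(seed)
    ts = transformations()
    dataset = []
    for w in windows:
        idx = rng.randrange(len(ts))
        _, fn = ts[idx]
        dataset.append((fn(w), idx))
    return dataset

# Unlabeled "sensor" windows (e.g., short accelerometer snippets).
unlabeled = [[0.1, 0.2, 0.3, 0.4], [1.0, 0.5, -0.5, -1.0]]
pairs = make_pretext_dataset(unlabeled)
print(len(pairs))
```

A network trained to predict the pretext label on such pairs can then serve as the initialization that the abstract describes, to be fine-tuned with as few as 5 labeled instances per class.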
Related papers
- Cross-Domain HAR: Few Shot Transfer Learning for Human Activity Recognition (2023-10-22)
  We present an approach for economic use of publicly available labeled HAR datasets for effective transfer learning.
  We introduce a novel transfer learning framework, Cross-Domain HAR, which follows the teacher-student self-training paradigm.
  We demonstrate the effectiveness of our approach for practically relevant few-shot activity recognition scenarios.
- Reinforcement Learning Based Multi-modal Feature Fusion Network for Novel Class Discovery (2023-08-26)
  In this paper, we employ a Reinforcement Learning framework to simulate the cognitive processes of humans.
  We also deploy a Member-to-Leader Multi-Agent framework to extract and fuse features from multi-modal information.
  We demonstrate the performance of our approach in both the 3D and 2D domains on the OS-MN40, OS-MN40-Miss, and CIFAR-10 datasets.
- Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph Propagation (2023-06-14)
  We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings.
  We conduct extensive experiments and evaluate our model on large-scale real-world data.
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge (2022-07-29)
  Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
  This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
  Multiple data applications that may benefit from efficient small-data representation are surveyed.
- Understanding the World Through Action (2021-10-24)
  I will argue that a general, principled, and powerful framework for utilizing unlabeled data can be derived from reinforcement learning.
  I will discuss how such a procedure is more closely aligned with potential downstream tasks.
- Clustering Augmented Self-Supervised Learning: An Application to Land Cover Mapping (2021-08-16)
  We introduce a new method for land cover mapping by using a clustering-based pretext task for self-supervised learning.
  We demonstrate the effectiveness of the method on two societally relevant applications.
- Diverse Complexity Measures for Dataset Curation in Self-driving (2021-01-16)
  We propose a new data selection method that exploits a diverse set of criteria that quantify the interestingness of traffic scenes.
  Our experiments show that the proposed curation pipeline is able to select datasets that lead to better generalization and higher performance.
- Federated Self-Supervised Learning of Multi-Sensor Representations for Embedded Intelligence (2020-07-25)
  Smartphones, wearables, and Internet of Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models.
  We propose a self-supervised approach termed scalogram-signal correspondence learning based on wavelet transform to learn useful representations from unlabeled sensor inputs.
  We extensively assess the quality of learned features with our multi-view strategy on diverse public datasets, achieving strong performance in all domains.
- Joint Supervised and Self-Supervised Learning for 3D Real-World Challenges (2020-04-15)
  Point cloud processing and 3D shape understanding are challenging tasks for which deep learning techniques have demonstrated great potential.
  Here we consider several possible scenarios involving synthetic and real-world point clouds where supervised learning fails due to data scarcity and large domain gaps.
  We propose to enrich standard feature representations by leveraging self-supervision through a multi-task model that can solve a 3D puzzle while learning the main task of shape classification or part segmentation.
- Laplacian Denoising Autoencoder (2020-03-30)
  We propose to learn data representations with a novel type of denoising autoencoder.
  The noisy input data is generated by corrupting latent clean data in the gradient domain.
  Experiments on several visual benchmarks demonstrate that better representations can be learned with the proposed approach.
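The Laplacian Denoising Autoencoder entry above generates its noisy inputs by corrupting data in the gradient domain rather than pointwise. A minimal 1-D sketch of that idea, under our own reading (first differences as the "gradient"; this is illustrative, not the paper's exact formulation):

```python
import random

def corrupt_in_gradient_domain(x, sigma=0.1, seed=0):
    """Add noise to the first differences of a signal, then reintegrate,
    so the corruption spreads smoothly rather than hitting isolated samples."""
    rng = random.Random(seed)
    grad = [b - a for a, b in zip(x, x[1:])]          # forward differences
    noisy_grad = [g + rng.gauss(0, sigma) for g in grad]  # corrupt the gradient
    out = [x[0]]                                      # reintegrate (cumulative sum)
    for g in noisy_grad:
        out.append(out[-1] + g)
    return out

clean = [0.0, 1.0, 2.0, 3.0]
noisy = corrupt_in_gradient_domain(clean)
print(len(noisy))  # same length as the input
```

A denoising autoencoder would then be trained to map `noisy` back to `clean`, learning representations from the reconstruction objective alone.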
This list is automatically generated from the titles and abstracts of the papers in this site.