Federated Self-Supervised Learning of Multi-Sensor Representations for
Embedded Intelligence
- URL: http://arxiv.org/abs/2007.13018v1
- Date: Sat, 25 Jul 2020 21:59:17 GMT
- Title: Federated Self-Supervised Learning of Multi-Sensor Representations for
Embedded Intelligence
- Authors: Aaqib Saeed, Flora D. Salim, Tanir Ozcelebi, and Johan Lukkien
- Abstract summary: Smartphones, wearables, and Internet of Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models.
We propose a self-supervised approach termed \textit{scalogram-signal correspondence learning} based on the wavelet transform to learn useful representations from unlabeled sensor inputs.
We extensively assess the quality of learned features with our multi-view strategy on diverse public datasets, achieving strong performance in all domains.
- Score: 8.110949636804772
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Smartphones, wearables, and Internet of Things (IoT) devices produce a wealth
of data that cannot be accumulated in a centralized repository for learning
supervised models due to privacy, bandwidth limitations, and the prohibitive
cost of annotations. Federated learning provides a compelling framework for
learning models from decentralized data, but conventionally, it assumes the
availability of labeled samples, whereas on-device data are generally either
unlabeled or cannot be annotated readily through user interaction. To address
these issues, we propose a self-supervised approach termed
\textit{scalogram-signal correspondence learning} based on wavelet transform to
learn useful representations from unlabeled sensor inputs, such as
electroencephalography, blood volume pulse, accelerometer, and WiFi channel
state information. Our auxiliary task requires a deep temporal neural network
to determine if a given pair of a signal and its complementary viewpoint (i.e.,
a scalogram generated with a wavelet transform) align with each other or not
through optimizing a contrastive objective. We extensively assess the quality
of learned features with our multi-view strategy on diverse public datasets,
achieving strong performance in all domains. We demonstrate the effectiveness
of representations learned from an unlabeled input collection on downstream
tasks by training a linear classifier over the pretrained network, and we show
their usefulness in the low-data regime, transfer learning, and
cross-validation. Our methodology
achieves competitive performance with fully-supervised networks, and it
outperforms pre-training with autoencoders in both central and federated
contexts. Notably, it improves the generalization in a semi-supervised setting
as it reduces the volume of labeled data required through leveraging
self-supervised learning.
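To make the correspondence idea concrete, the sketch below is a minimal, hypothetical illustration (not the authors' implementation): it builds a scalogram with a Ricker ("Mexican hat") wavelet as a stand-in for the paper's wavelet transform, replaces the deep temporal networks with toy energy-profile encoders, and scores whether a signal/scalogram pair aligns via cosine similarity. All function names are invented for this example.

```python
import math

def ricker(t, scale):
    # Ricker ("Mexican hat") wavelet at time offset t for a given scale.
    x = t / scale
    return (1.0 - x * x) * math.exp(-0.5 * x * x)

def scalogram(signal, scales=(1, 2, 4, 8)):
    # Magnitude of a discretized continuous wavelet transform: one row per scale.
    n = len(signal)
    return [
        [abs(sum(signal[k] * ricker(k - c, a) for k in range(n))) / math.sqrt(a)
         for c in range(n)]
        for a in scales
    ]

def embed(profile):
    # Toy "encoder": l2-normalize a per-time-step energy profile.
    # (The paper uses deep temporal networks; this stand-in only shows the shapes.)
    norm = math.sqrt(sum(v * v for v in profile)) or 1.0
    return [v / norm for v in profile]

def correspondence_score(signal, scalo):
    # Cosine similarity between the two views' energy profiles over time.
    # A high score means the scalogram plausibly belongs to this signal.
    sig_view = embed([v * v for v in signal])
    scalo_view = embed([sum(row[t] for row in scalo) for t in range(len(signal))])
    return sum(a * b for a, b in zip(sig_view, scalo_view))

# Aligned pair: a burst at the start of the signal; misaligned pair: the same
# signal scored against the scalogram of a burst at the end.
n = 64
burst_lo = [math.sin(0.8 * t) if t < 16 else 0.0 for t in range(n)]
burst_hi = [math.sin(0.8 * t) if t >= 48 else 0.0 for t in range(n)]
pos = correspondence_score(burst_lo, scalogram(burst_lo))
neg = correspondence_score(burst_lo, scalogram(burst_hi))
print(pos > neg)  # the aligned pair should score higher
```

In the paper this alignment decision is trained with a contrastive objective over positive (matching) and negative (shuffled) signal/scalogram pairs; the cosine score here merely illustrates what the objective pushes apart.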
Related papers
- Lightweight Unsupervised Federated Learning with Pretrained Vision Language Model [32.094290282897894]
Federated learning aims to train a collective model from physically isolated clients while safeguarding the privacy of users' data.
We propose a novel lightweight unsupervised federated learning approach that leverages unlabeled data on each client to perform lightweight model training and communication.
Our proposed method greatly enhances model performance in comparison to CLIP's zero-shot predictions and even outperforms supervised federated learning benchmark methods.
arXiv Detail & Related papers (2024-04-17T03:42:48Z)
- Deep Feature Learning for Wireless Spectrum Data [0.5809784853115825]
We propose an approach to learning feature representations for wireless transmission clustering in a completely unsupervised manner.
We show that the automatic representation learning is able to extract fine-grained clusters containing the shapes of the wireless transmission bursts.
arXiv Detail & Related papers (2023-08-07T12:27:19Z)
- Self-supervised On-device Federated Learning from Unlabeled Streams [15.94978097767473]
We propose a Self-supervised On-device Federated learning framework with coreset selection, which we call SOFed, to automatically select a coreset.
Experiments demonstrate the effectiveness and significance of the proposed method in visual representation learning.
arXiv Detail & Related papers (2022-12-02T07:22:00Z)
- Imposing Consistency for Optical Flow Estimation [73.53204596544472]
Imposing consistency through proxy tasks has been shown to enhance data-driven learning.
This paper introduces novel and effective consistency strategies for optical flow estimation.
arXiv Detail & Related papers (2022-04-14T22:58:30Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- Clustering augmented Self-Supervised Learning: An application to Land Cover Mapping [10.720852987343896]
We introduce a new method for land cover mapping by using a clustering based pretext task for self-supervised learning.
We demonstrate the effectiveness of the method on two societally relevant applications.
arXiv Detail & Related papers (2021-08-16T19:35:43Z)
- Self-supervised Audiovisual Representation Learning for Remote Sensing Data [96.23611272637943]
We propose a self-supervised approach for pre-training deep neural networks in remote sensing.
By exploiting the correspondence between geo-tagged audio recordings and remote sensing imagery, pre-training is done in a completely label-free manner.
We show that our approach outperforms existing pre-training strategies for remote sensing imagery.
arXiv Detail & Related papers (2021-08-02T07:50:50Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning [52.831894583501395]
Continual learning assumes the incoming data are fully labeled, which might not be applicable in real applications.
We propose deep Online Replay with Discriminator Consistency (ORDisCo) to interdependently learn a classifier with a conditional generative adversarial network (GAN).
We show ORDisCo achieves significant performance improvement on various semi-supervised learning benchmark datasets for SSCL.
arXiv Detail & Related papers (2021-01-02T09:04:14Z)
- Sense and Learn: Self-Supervision for Omnipresent Sensors [9.442811508809994]
We present a framework named Sense and Learn for representation or feature learning from raw sensory data.
It consists of several auxiliary tasks that can learn high-level and broadly useful features entirely from unannotated data without any human involvement in the tedious labeling process.
Our methodology achieves results that are competitive with supervised approaches and, in most cases, closes the gap by fine-tuning the network while learning the downstream tasks.
arXiv Detail & Related papers (2020-09-28T11:57:43Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences of its use.