Generic Semi-Supervised Adversarial Subject Translation for Sensor-Based
Human Activity Recognition
- URL: http://arxiv.org/abs/2012.03682v1
- Date: Wed, 11 Nov 2020 12:16:23 GMT
- Title: Generic Semi-Supervised Adversarial Subject Translation for Sensor-Based
Human Activity Recognition
- Authors: Elnaz Soleimani, Ghazaleh Khodabandelou, Abdelghani Chibani, Yacine
Amirat
- Abstract summary: This paper presents a novel generic and robust approach for semi-supervised domain adaptation in Human Activity Recognition.
It capitalizes on the advantages of the adversarial framework to tackle these shortcomings by leveraging knowledge from annotated samples exclusively from the source subject and unlabeled ones of the target subject.
The results demonstrate the effectiveness of the proposed algorithms over state-of-the-art methods, yielding up to 13%, 4%, and 13% improvements in high-level activity recognition metrics for the Opportunity, LISSI, and PAMAP2 datasets, respectively.
- Score: 6.2997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of Human Activity Recognition (HAR) models, particularly deep
neural networks, is highly contingent on the availability of massive amounts of
sufficiently labeled training data. However, data acquisition and manual
annotation in the HAR domain are prohibitively expensive, as both steps require
skilled human resources. Hence, domain adaptation techniques have been proposed
to transfer knowledge from existing sources of data. More recently, adversarial
transfer learning methods have shown very promising results in image
classification, yet remain limited for sensor-based HAR problems, which are
still prone to the unfavorable effects of imbalanced sample distributions. This
paper presents a novel, generic, and robust approach for semi-supervised domain
adaptation in HAR, which capitalizes on the advantages of the adversarial
framework to tackle these shortcomings by leveraging knowledge from annotated
samples exclusively from the source subject and unlabeled ones of the target
subject. Extensive subject translation experiments are conducted on three
datasets of large, medium, and small size with different levels of imbalance to
assess the robustness and effectiveness of the proposed model with respect to
both the scale of and the imbalance in the data. The results demonstrate the
effectiveness of the proposed algorithms over state-of-the-art methods,
yielding up to 13%, 4%, and 13% improvements in high-level activity recognition
metrics for the Opportunity, LISSI, and PAMAP2 datasets, respectively. The
LISSI dataset is the most challenging one owing to its small size and
imbalanced class distribution. Compared to the SA-GAN adversarial domain
adaptation method, the proposed approach improves the final classification
performance by an average of 7.5% across the three datasets, which emphasizes
the effectiveness of micro-mini-batch training.
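The adversarial framework the abstract refers to is typically built around a gradient reversal layer (as in Ganin-style domain-adversarial training): a domain discriminator learns to tell source-subject features from target-subject features, while the shared feature extractor is trained to fool it, so labels are needed only for the source subject. Below is a minimal illustrative sketch of that reversal mechanism — the class name and the `lam` coefficient are assumptions for illustration, not the paper's exact architecture:

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; scales gradients by -lam in backward.

    This is the standard trick behind adversarial domain adaptation: the
    reversed gradient pushes the feature extractor to *confuse* a domain
    discriminator that tries to separate source-subject samples from
    unlabeled target-subject samples.
    """

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        # Features pass through unchanged.
        return x

    def backward(self, grad_output):
        # Reverse and scale the discriminator's gradient before it reaches
        # the feature extractor, turning minimization into maximization.
        return -self.lam * grad_output

# Toy illustration: shared features would flow to both the activity
# classifier (labeled source samples only) and, through this layer, to the
# domain discriminator (source + unlabeled target samples).
grl = GradientReversal(lam=0.5)
features = np.array([0.2, -1.3, 0.7])
print(grl.forward(features))                      # identical to the input
print(grl.backward(np.array([1.0, -2.0, 0.5])))   # flipped, scaled by 0.5
```

In a full model, this layer sits between the shared feature extractor and the domain discriminator; the activity-classification branch receives the un-reversed features.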
Related papers
- Cluster-level pseudo-labelling for source-free cross-domain facial
expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Open-Set Semi-Supervised Learning for 3D Point Cloud Understanding [62.17020485045456]
It is commonly assumed in semi-supervised learning (SSL) that the unlabeled data are drawn from the same distribution as the labeled data.
We propose to selectively utilize unlabeled data through sample weighting, so that only conducive unlabeled data would be prioritized.
arXiv Detail & Related papers (2022-05-02T16:09:17Z)
- Scale-Equivalent Distillation for Semi-Supervised Object Detection [57.59525453301374]
Recent Semi-Supervised Object Detection (SS-OD) methods are mainly based on self-training, in which a teacher model generates hard pseudo-labels on unlabeled data as supervisory signals.
We analyze the challenges these methods meet with the empirical experiment results.
We introduce a novel approach, Scale-Equivalent Distillation (SED), which is a simple yet effective end-to-end knowledge distillation framework robust to large object size variance and class imbalance.
arXiv Detail & Related papers (2022-03-23T07:33:37Z)
- Interpolation-based Contrastive Learning for Few-Label Semi-Supervised Learning [43.51182049644767]
Semi-supervised learning (SSL) has long been proven an effective technique for building powerful models with limited labels.
Regularization-based methods which force the perturbed samples to have similar predictions with the original ones have attracted much attention.
We propose a novel contrastive loss to guide the embedding of the learned network to change linearly between samples.
arXiv Detail & Related papers (2022-02-24T06:00:05Z)
- A new weakly supervised approach for ALS point cloud semantic segmentation [1.4620086904601473]
We propose a deep-learning based weakly supervised framework for semantic segmentation of ALS point clouds.
We exploit potential information from unlabeled data subject to incomplete and sparse labels.
Our method achieves an overall accuracy of 83.0% and an average F1 score of 70.0%, improvements of 6.9% and 12.8%, respectively.
arXiv Detail & Related papers (2021-10-04T14:00:23Z)
- Large-scale ASR Domain Adaptation using Self- and Semi-supervised Learning [26.110250680951854]
We utilize a combination of self- and semi-supervised learning methods to solve the unseen-domain adaptation problem in a large-scale production setting for an online ASR model.
This approach demonstrates that using the source domain data with a small fraction of the target domain data (3%) can recover the performance gap compared to a full data baseline.
arXiv Detail & Related papers (2021-10-01T01:48:33Z)
- Boosting the Generalization Capability in Cross-Domain Few-shot Learning via Noise-enhanced Supervised Autoencoder [23.860842627883187]
We teach the model to capture broader variations of the feature distributions with a novel noise-enhanced supervised autoencoder (NSAE).
NSAE trains the model by jointly reconstructing inputs and predicting the labels of inputs as well as their reconstructed pairs.
We also take advantage of the NSAE structure and propose a two-step fine-tuning procedure that achieves better adaptation and improves classification performance in the target domain.
arXiv Detail & Related papers (2021-08-11T04:45:56Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.