Cross-Domain HAR: Few Shot Transfer Learning for Human Activity Recognition
- URL: http://arxiv.org/abs/2310.14390v1
- Date: Sun, 22 Oct 2023 19:13:25 GMT
- Title: Cross-Domain HAR: Few Shot Transfer Learning for Human Activity Recognition
- Authors: Megha Thukral, Harish Haresamudram and Thomas Ploetz
- Abstract summary: We present an approach for economic use of publicly available labeled HAR datasets for effective transfer learning.
We introduce a novel transfer learning framework, Cross-Domain HAR, which follows the teacher-student self-training paradigm.
We demonstrate the effectiveness of our approach for practically relevant few shot activity recognition scenarios.
- Score: 0.2944538605197902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ubiquitous availability of smartphones and smartwatches with integrated
inertial measurement units (IMUs) enables straightforward capturing of human
activities. For specific applications of sensor based human activity
recognition (HAR), however, logistical challenges and burgeoning costs render
especially the ground truth annotation of such data a difficult endeavor,
resulting in limited scale and diversity of datasets. Transfer learning, i.e.,
leveraging publicly available labeled datasets to first learn useful
representations that can then be fine-tuned using limited amounts of labeled
data from a target domain, can alleviate some of the performance issues of
contemporary HAR systems. Yet such approaches can fail when the differences
between source and target conditions are too large and/or when only few samples from
a target application domain are available, each of which is a typical challenge in
real-world human activity recognition scenarios. In this paper, we present an
approach for economic use of publicly available labeled HAR datasets for
effective transfer learning. We introduce a novel transfer learning framework,
Cross-Domain HAR, which follows the teacher-student self-training paradigm to
more effectively recognize activities with very limited label information. It
bridges conceptual gaps between source and target domains, including sensor
locations and type of activities. Through our extensive experimental evaluation
on a range of benchmark datasets, we demonstrate the effectiveness of our
approach for practically relevant few shot activity recognition scenarios. We
also present a detailed analysis into how the individual components of our
framework affect downstream performance.
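The framework's exact recipe is detailed in the paper itself; as a rough, hypothetical illustration of the general teacher-student self-training idea it builds on, the Python sketch below pseudo-labels unlabeled target-domain IMU windows with a source-pretrained teacher and trains a student on the few labeled target windows plus the confident pseudo-labels. The network, window shape, and confidence threshold (SmallIMUNet, 6 channels x 100 samples, 0.9) are invented for illustration and are not the authors' implementation.

```python
# Hypothetical sketch of teacher-student self-training for few-shot HAR transfer.
# Model, shapes, and thresholds are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallIMUNet(nn.Module):
    """Tiny 1D-CNN over IMU windows shaped (batch, channels, time)."""
    def __init__(self, in_ch=6, n_classes=6):
        super().__init__()
        self.feat = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.feat(x))

def self_train_step(teacher, student, opt, x_lab, y_lab, x_unlab, conf_thresh=0.9):
    """One update: supervised loss on the few labeled target windows plus a
    pseudo-label loss on the unlabeled windows the teacher is confident about."""
    with torch.no_grad():
        probs = F.softmax(teacher(x_unlab), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > conf_thresh                     # keep only confident pseudo-labels
    loss = F.cross_entropy(student(x_lab), y_lab)
    if keep.any():
        loss = loss + F.cross_entropy(student(x_unlab[keep]), pseudo[keep])
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random tensors standing in for 6-channel IMU windows of 100 samples.
teacher, student = SmallIMUNet(), SmallIMUNet()
teacher.load_state_dict(student.state_dict())        # pretend the teacher was pretrained on source data
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
x_lab, y_lab = torch.randn(8, 6, 100), torch.randint(0, 6, (8,))
x_unlab = torch.randn(32, 6, 100)
print(self_train_step(teacher, student, opt, x_lab, y_lab, x_unlab))
```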
Related papers
- Automatic Identification and Visualization of Group Training Activities Using Wearable Data [7.130450173185638]
Human Activity Recognition (HAR) identifies daily activities from time-series data collected by wearable devices like smartwatches.
This paper presents a comprehensive framework for imputing, analyzing, and identifying activities from wearable data.
Our approach is based on data collected from 135 soldiers wearing Garmin 55 smartwatches over six months.
arXiv Detail & Related papers (2024-10-07T19:35:15Z) - Distribution Matching for Multi-Task Learning of Classification Tasks: a
Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks that have little, or even non-overlapping, annotation.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z) - Explainable Attention for Few-shot Learning and Beyond [7.044125601403848]
- Explainable Attention for Few-shot Learning and Beyond [7.044125601403848]
We introduce a novel framework for achieving explainable hard attention finding, specifically tailored for few-shot learning scenarios.
Our approach employs deep reinforcement learning to implement the concept of hard attention, directly impacting raw input data.
arXiv Detail & Related papers (2023-10-11T18:33:17Z) - CDFSL-V: Cross-Domain Few-Shot Learning for Videos [58.37446811360741]
Few-shot video action recognition is an effective approach to recognizing new categories with only a few labeled examples.
Existing methods in video action recognition rely on large labeled datasets from the same domain.
We propose a novel cross-domain few-shot video action recognition method that leverages self-supervised learning and curriculum learning.
arXiv Detail & Related papers (2023-09-07T19:44:27Z) - TASKED: Transformer-based Adversarial learning for human activity
recognition using wearable sensors via Self-KnowledgE Distillation [6.458496335718508]
We propose a novel Transformer-based Adversarial learning framework for human activity recognition using wearable sensors via Self-KnowledgE Distillation (TASKED).
In the proposed method, we adopt the teacher-free self-knowledge distillation to improve the stability of the training procedure and the performance of human activity recognition.
arXiv Detail & Related papers (2022-09-14T11:08:48Z) - Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
- Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) addresses scenarios in which the test data does not follow the same distribution as the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z) - Single-Modal Entropy based Active Learning for Visual Question Answering [75.1682163844354]
We address Active Learning in the multi-modal setting of Visual Question Answering (VQA).
In light of the multi-modal inputs, image and question, we propose a novel method for effective sample acquisition.
Our novel idea is simple to implement, cost-efficient, and readily adaptable to other multi-modal tasks.
arXiv Detail & Related papers (2021-10-21T05:38:45Z) - Streaming Self-Training via Domain-Agnostic Unlabeled Images [62.57647373581592]
- Streaming Self-Training via Domain-Agnostic Unlabeled Images [62.57647373581592]
We present streaming self-training (SST) that aims to democratize the process of learning visual recognition models.
Key to SST are two crucial observations: (1) domain-agnostic unlabeled images enable us to learn better models with a few labeled examples without any additional knowledge or supervision; and (2) learning is a continuous process and can be done by constructing a schedule of learning updates.
arXiv Detail & Related papers (2021-04-07T17:58:39Z) - Contrastive Predictive Coding for Human Activity Recognition [5.766384728949437]
We introduce the Contrastive Predictive Coding framework to human activity recognition, which captures the long-term temporal structure of sensor data streams.
CPC-based pre-training is self-supervised, and the resulting learned representations can be integrated into standard activity recognition chains.
It leads to significantly improved recognition performance when only small amounts of labeled training data are available.
arXiv Detail & Related papers (2020-12-09T21:44:36Z) - Sense and Learn: Self-Supervision for Omnipresent Sensors [9.442811508809994]
- Sense and Learn: Self-Supervision for Omnipresent Sensors [9.442811508809994]
We present a framework named Sense and Learn for representation or feature learning from raw sensory data.
It consists of several auxiliary tasks that can learn high-level and broadly useful features entirely from unannotated data without any human involvement in the tedious labeling process.
Our methodology achieves results that are competitive with supervised approaches and, in most cases, closes the remaining gap by fine-tuning the network while learning the downstream tasks.
arXiv Detail & Related papers (2020-09-28T11:57:43Z) - Adversarial Knowledge Transfer from Unlabeled Data [62.97253639100014]
- Adversarial Knowledge Transfer from Unlabeled Data [62.97253639100014]
We present a novel Adversarial Knowledge Transfer framework for transferring knowledge from internet-scale unlabeled data to improve the performance of a classifier.
An important novel aspect of our method is that the unlabeled source data can be of different classes from those of the labeled target data, and there is no need to define a separate pretext task.
arXiv Detail & Related papers (2020-08-13T08:04:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.