Deep Generative Domain Adaptation with Temporal Attention for Cross-User Activity Recognition
- URL: http://arxiv.org/abs/2403.17958v1
- Date: Tue, 12 Mar 2024 22:45:05 GMT
- Title: Deep Generative Domain Adaptation with Temporal Attention for Cross-User Activity Recognition
- Authors: Xiaozhou Ye, Kevin I-Kai Wang
- Abstract summary: In Human Activity Recognition (HAR), a predominant assumption is that the data utilized for training and evaluation purposes are drawn from the same distribution.
This research presents the Deep Generative Domain Adaptation with Temporal Attention method.
By synergizing the capabilities of generative models with the Temporal Relation Attention mechanism, our method improves the classification performance in cross-user HAR.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In Human Activity Recognition (HAR), a predominant assumption is that the data utilized for training and evaluation purposes are drawn from the same distribution. It is also assumed that all data samples are independent and identically distributed ($i.i.d.$). Contrarily, practical implementations often challenge this notion, manifesting data distribution discrepancies, especially in scenarios such as cross-user HAR. Domain adaptation is a promising approach to address these challenges inherent in cross-user HAR tasks. However, a clear gap in domain adaptation techniques is the neglect of the temporal relation embedded within time series data during the phase of aligning data distributions. Addressing this oversight, our research presents the Deep Generative Domain Adaptation with Temporal Attention (DGDATA) method. This novel method uniquely recognises and integrates temporal relations during the domain adaptation process. By synergizing the capabilities of generative models with the Temporal Relation Attention mechanism, our method improves the classification performance in cross-user HAR. A comprehensive evaluation has been conducted on three public sensor-based HAR datasets targeting different scenarios and applications to demonstrate the efficacy of the proposed DGDATA method.
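The abstract describes DGDATA only at a high level, so the sketch below is purely illustrative: a minimal PyTorch model that attends over the time steps of a sensor window and aligns source-user and target-user features adversarially through a gradient-reversal layer. The module names, dimensions, and the adversarial alignment (used here as a stand-in for the paper's generative alignment) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of temporal attention + domain alignment for cross-user HAR.
# Shapes, names, and the gradient-reversal alignment are illustrative assumptions,
# not the DGDATA architecture described in the paper.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer commonly used for adversarial domain alignment."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the encoder.
        return -ctx.lam * grad_output, None


class TemporalAttentionEncoder(nn.Module):
    """Encodes a sensor window and self-attends over its time steps."""
    def __init__(self, in_channels=6, d_model=64, n_heads=4):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, d_model, kernel_size=5, padding=2)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):                                    # x: (batch, time, channels)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)     # (batch, time, d_model)
        a, _ = self.attn(h, h, h)                            # attention over time steps
        h = self.norm(h + a)
        return h.mean(dim=1)                                 # pooled window representation


class CrossUserHARModel(nn.Module):
    """Activity classifier plus a user/domain discriminator on shared features."""
    def __init__(self, n_classes=6, d_model=64):
        super().__init__()
        self.encoder = TemporalAttentionEncoder(d_model=d_model)
        self.classifier = nn.Linear(d_model, n_classes)
        self.domain_head = nn.Linear(d_model, 2)             # source user vs. target user

    def forward(self, x, lam=1.0):
        z = self.encoder(x)
        logits = self.classifier(z)
        dom_logits = self.domain_head(GradReverse.apply(z, lam))
        return logits, dom_logits


if __name__ == "__main__":
    model = CrossUserHARModel()
    window = torch.randn(8, 128, 6)            # 8 windows, 128 time steps, 6 sensor channels
    act_logits, dom_logits = model(window)
    print(act_logits.shape, dom_logits.shape)  # torch.Size([8, 6]) torch.Size([8, 2])
```

In a cross-user setup of this kind, the activity classifier would be trained on labelled source-user windows while the domain head sees windows from both users, pushing the encoder toward user-invariant yet temporally attentive features.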
Related papers
- Exploiting Aggregation and Segregation of Representations for Domain Adaptive Human Pose Estimation [50.31351006532924]
Human pose estimation (HPE) has received increasing attention recently due to its wide application in motion analysis, virtual reality, healthcare, etc.
It suffers from a lack of diverse labeled real-world datasets, as annotation is time- and labor-intensive.
We introduce a novel framework that capitalizes on both representation aggregation and segregation for domain adaptive human pose estimation.
arXiv Detail & Related papers (2024-12-29T17:59:45Z) - Deep Generative Domain Adaptation with Temporal Relation Knowledge for Cross-User Activity Recognition [0.0]
In human activity recognition, the assumption that training and testing data are independent and identically distributed (i.i.d.) often fails.
Our study introduces a Conditional Variational Autoencoder with Universal Sequence Mapping (CVAE-USM) approach, which addresses the unique challenges of time-series domain adaptation in HAR.
This method combines the strengths of Variational Autoencoder (VAE) and Universal Sequence Mapping (USM) to capture and utilize common temporal patterns between users for improved activity recognition.
arXiv Detail & Related papers (2024-03-12T22:48:23Z) - Cross-user activity recognition using deep domain adaptation with temporal relation information [1.8749305679160366]
Human Activity Recognition (HAR) is a cornerstone of ubiquitous computing.
This paper explores the cross-user HAR problem where behavioural variability across individuals results in differing data distributions.
We introduce the Deep Temporal State Domain Adaptation (DTSDA) model, an innovative approach tailored for time series domain adaptation in cross-user HAR.
arXiv Detail & Related papers (2024-03-12T22:38:09Z) - Cross-user activity recognition via temporal relation optimal transport [0.0]
Current research on human activity recognition (HAR) mainly assumes that training and testing data are drawn from the same distribution to achieve a generalised model.
We propose the temporal relation optimal transport (TROT) method to utilise temporal relation and relax the $i.i.d.$ assumption.
arXiv Detail & Related papers (2024-03-12T22:33:56Z) - Progressive Conservative Adaptation for Evolving Target Domains [76.9274842289221]
Conventional domain adaptation typically transfers knowledge from a source domain to a stationary target domain.
Restoring and adapting to such target data results in escalating computational and resource consumption over time.
We propose a simple yet effective approach, termed progressive conservative adaptation (PCAda).
arXiv Detail & Related papers (2024-02-07T04:11:25Z) - One-Shot Domain Adaptive and Generalizable Semantic Segmentation with Class-Aware Cross-Domain Transformers [96.51828911883456]
Unsupervised sim-to-real domain adaptation (UDA) for semantic segmentation aims to improve the real-world test performance of a model trained on simulated data.
Traditional UDA often assumes that there are abundant unlabeled real-world data samples available during training for the adaptation.
We explore the one-shot unsupervised sim-to-real domain adaptation (OSUDA) and generalization problem, where only one real-world data sample is available.
arXiv Detail & Related papers (2022-12-14T15:54:15Z) - Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z) - Deep Unsupervised Domain Adaptation: A Review of Recent Advances and Perspectives [16.68091981866261]
Unsupervised domain adaptation (UDA) is proposed to counter the performance drop that models suffer on data from a target domain.
UDA has yielded promising results on natural image processing, video analysis, natural language processing, time-series data analysis, medical image analysis, etc.
arXiv Detail & Related papers (2022-08-15T20:05:07Z) - Domain Generalization for Activity Recognition via Adaptive Feature Fusion [9.458837222079612]
We propose Adaptive Feature Fusion for Activity Recognition (AFFAR).
AFFAR learns to fuse the domain-invariant and domain-specific representations to improve the model's generalization performance.
We apply AFFAR to a real application, i.e., the diagnosis of Children's Attention Deficit Hyperactivity Disorder (ADHD).
arXiv Detail & Related papers (2022-07-21T02:14:09Z) - Lifelong Unsupervised Domain Adaptive Person Re-identification with Coordinated Anti-forgetting and Adaptation [127.6168183074427]
We propose a new task, Lifelong Unsupervised Domain Adaptive (LUDA) person ReID.
This is challenging because it requires the model to continuously adapt to unlabeled data of the target environments.
We design an effective scheme for this task, dubbed CLUDA-ReID, where the anti-forgetting is harmoniously coordinated with the adaptation.
arXiv Detail & Related papers (2021-12-13T13:19:45Z) - VisDA-2021 Competition Universal Domain Adaptation to Improve Performance on Out-of-Distribution Data [64.91713686654805]
The Visual Domain Adaptation (VisDA) 2021 competition tests models' ability to adapt to novel test distributions.
We will evaluate adaptation to novel viewpoints, backgrounds, modalities and degradation in quality.
Performance will be measured using a rigorous protocol, comparing to state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-07-23T03:21:51Z) - Unsupervised Domain Adaptation for Spatio-Temporal Action Localization [69.12982544509427]
Spatio-temporal action localization is an important problem in computer vision.
We propose an end-to-end unsupervised domain adaptation algorithm.
We show that significant performance gain can be achieved when spatial and temporal features are adapted separately or jointly.
arXiv Detail & Related papers (2020-10-19T04:25:10Z)