SuDA: Support-based Domain Adaptation for Sim2Real Motion Capture with Flexible Sensors
- URL: http://arxiv.org/abs/2405.16152v1
- Date: Sat, 25 May 2024 09:43:33 GMT
- Title: SuDA: Support-based Domain Adaptation for Sim2Real Motion Capture with Flexible Sensors
- Authors: Jiawei Fang, Haishan Song, Chengxu Zuo, Xiaoxia Gao, Xiaowei Chen, Shihui Guo, Yipeng Qin
- Abstract summary: Existing flexible sensor-based MoCap methods rely on deep learning and necessitate large and diverse labeled datasets for training.
Thanks to the high linearity of flexible sensors, we propose a novel Sim2Real MoCap solution based on domain adaptation.
Our solution relies on a novel Support-based Domain Adaptation method, namely SuDA, which aligns the supports of the predictive functions.
- Score: 12.811669078348489
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Flexible sensors hold promise for human motion capture (MoCap), offering advantages such as wearability, privacy preservation, and minimal constraints on natural movement. However, existing flexible sensor-based MoCap methods rely on deep learning and necessitate large and diverse labeled datasets for training. These data typically need to be collected in MoCap studios with specialized equipment and substantial manual labor, making them difficult and expensive to obtain at scale. Thanks to the high linearity of flexible sensors, we address this challenge by proposing a novel Sim2Real MoCap solution based on domain adaptation, eliminating the need for labeled data yet achieving accuracy comparable to supervised learning. Our solution relies on a novel Support-based Domain Adaptation method, namely SuDA, which aligns the supports of the predictive functions rather than the instance-dependent distributions between the source and target domains. Extensive experimental results demonstrate the effectiveness of our method and its superiority over state-of-the-art distribution-based domain adaptation methods on our task.
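The abstract does not spell out the implementation, so the following is only a minimal sketch of what "aligning supports" could look like for highly linear flexible sensors: map the value range (support) of the unlabeled real-domain readings onto the range observed in simulation, then reuse a regressor trained purely on simulated data. The function names and the per-channel min-max mapping are illustrative assumptions, not the authors' code.

```python
import numpy as np

def align_support(sim_readings: np.ndarray, real_readings: np.ndarray) -> np.ndarray:
    """Map real sensor readings onto the support (value range) seen in simulation.

    Because flexible sensors respond almost linearly to joint motion, a per-channel
    affine rescaling is enough (in this sketch) to make the real-domain support
    coincide with the simulated one, and it needs no labeled real data.
    """
    sim_min, sim_max = sim_readings.min(axis=0), sim_readings.max(axis=0)
    real_min, real_max = real_readings.min(axis=0), real_readings.max(axis=0)
    scale = (sim_max - sim_min) / (real_max - real_min + 1e-8)
    return (real_readings - real_min) * scale + sim_min

# Illustrative usage: a pose regressor trained only on simulated sensor data is
# applied to support-aligned real readings.
# pose_estimate = regressor(align_support(sim_data, live_sensor_window))
```

The point this sketch illustrates is that only the ranges of the two domains need to be matched, which requires neither labels nor an estimate of the full data distribution; distribution-based methods, by contrast, must align instance-dependent statistics. The paper's actual construction may differ.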
Related papers
- FlexiMo: A Flexible Remote Sensing Foundation Model [33.027094254412056]
FlexiMo is a flexible remote sensing foundation model that endows the pre-trained model with the flexibility to adapt to arbitrary spatial resolutions.
Central to FlexiMo is a spatial resolution-aware module that employs a parameter-free alignment embedding mechanism.
Experiments on diverse multimodal, multi-resolution, and multi-scale datasets demonstrate that FlexiMo significantly enhances model generalization and robustness.
arXiv Detail & Related papers (2025-03-31T08:46:05Z) - Towards Practical Emotion Recognition: An Unsupervised Source-Free Approach for EEG Domain Adaptation [0.5755004576310334]
We propose a novel SF-UDA approach for EEG-based emotion classification across domains.
We introduce Dual-Loss Adaptive Regularization (DLAR) to minimize prediction discrepancies and align predictions with expected pseudo-labels.
Our approach significantly outperforms state-of-the-art methods, achieving 65.84% accuracy when trained on DEAP and tested on SEED.
arXiv Detail & Related papers (2025-03-26T14:29:20Z) - Propensity-driven Uncertainty Learning for Sample Exploration in Source-Free Active Domain Adaptation [19.620523416385346]
Source-free active domain adaptation (SFADA) addresses the challenge of adapting a pre-trained model to new domains without access to source data.
This scenario is particularly relevant in real-world applications where data privacy, storage limitations, or labeling costs are significant concerns.
We propose the Propensity-driven Uncertainty Learning (ProULearn) framework to effectively select more informative samples without frequently requesting human annotations.
arXiv Detail & Related papers (2025-01-23T10:05:25Z) - Efficient Unsupervised Domain Adaptation Regression for Spatial-Temporal Sensor Fusion [6.963971634605796]
Low-cost, distributed sensor networks in environmental and biomedical domains have enabled continuous, large-scale health monitoring.
These systems often face challenges related to degraded data quality caused by sensor drift, noise, and insufficient calibration.
Traditional machine learning methods for sensor fusion and calibration rely on extensive feature engineering.
We propose a novel unsupervised domain adaptation (UDA) method tailored for regression tasks.
arXiv Detail & Related papers (2024-11-11T12:20:57Z) - Learn from the Learnt: Source-Free Active Domain Adaptation via Contrastive Sampling and Visual Persistence [60.37934652213881]
Domain Adaptation (DA) facilitates knowledge transfer from a source domain to a related target domain.
This paper investigates a practical DA paradigm, namely Source data-Free Active Domain Adaptation (SFADA), where source data becomes inaccessible during adaptation.
We present learn from the learnt (LFTL), a novel paradigm for SFADA to leverage the learnt knowledge from the source pretrained model and actively iterated models without extra overhead.
arXiv Detail & Related papers (2024-07-26T17:51:58Z) - EarDA: Towards Accurate and Data-Efficient Earable Activity Sensing [3.3690293278790415]
Sensor readings from earable devices show significant changes in amplitude and pattern, especially in the presence of dynamic and unpredictable head movements.
We present EarDA, an adversarial domain adaptation system that extracts domain-independent features across different sensor locations (a generic sketch of this style of adversarial alignment appears after this list).
It achieves an accuracy of 88.8% on a human activity recognition task, demonstrating a significant 43% improvement over methods without domain adaptation.
arXiv Detail & Related papers (2024-06-18T12:13:43Z) - Sensor Data Augmentation from Skeleton Pose Sequences for Improving Human Activity Recognition [5.669438716143601]
Human Activity Recognition (HAR) has not fully capitalized on the proliferation of deep learning.
We propose a novel approach to improve wearable sensor-based HAR by introducing a pose-to-sensor network model.
Our contributions include the integration of simultaneous training, direct pose-to-sensor generation, and a comprehensive evaluation on the MM-Fit dataset.
arXiv Detail & Related papers (2024-04-25T10:13:18Z) - CODA: A COst-efficient Test-time Domain Adaptation Mechanism for HAR [25.606795179822885]
We propose CODA, a COst-efficient Domain Adaptation mechanism for mobile sensing.
CODA addresses real-time drifts from the data distribution perspective with active learning theory.
We demonstrate the feasibility and potential of online adaptation with CODA.
arXiv Detail & Related papers (2024-03-22T02:50:42Z) - Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
arXiv Detail & Related papers (2023-09-25T15:56:01Z) - Exploring Few-Shot Adaptation for Activity Recognition on Diverse Domains [46.26074225989355]
Domain adaptation is essential for activity recognition to ensure accurate and robust performance across diverse environments.
In this work, we focus on FewShot Domain Adaptation for Activity Recognition (FSDA-AR), which leverages a very small amount of labeled target videos.
We construct a new FSDA-AR benchmark using five established datasets, considering adaptation to more diverse and challenging domains.
arXiv Detail & Related papers (2023-05-15T08:01:05Z) - One-Shot Domain Adaptive and Generalizable Semantic Segmentation with Class-Aware Cross-Domain Transformers [96.51828911883456]
Unsupervised sim-to-real domain adaptation (UDA) for semantic segmentation aims to improve the real-world test performance of a model trained on simulated data.
Traditional UDA often assumes that there are abundant unlabeled real-world data samples available during training for the adaptation.
We explore the one-shot unsupervised sim-to-real domain adaptation (OSUDA) and generalization problem, where only one real-world data sample is available.
arXiv Detail & Related papers (2022-12-14T15:54:15Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z) - Continuous Domain Adaptation with Variational Domain-Agnostic Feature Replay [78.7472257594881]
Learning in non-stationary environments is one of the biggest challenges in machine learning.
Non-stationarity can be caused by either task drift or domain drift.
We propose variational domain-agnostic feature replay, an approach that is composed of three components.
arXiv Detail & Related papers (2020-03-09T19:50:24Z)
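For contrast with the support-based idea above, several of the related papers (e.g., EarDA and RoadDA) rely on adversarial, distribution-based alignment. Below is a generic, minimal DANN-style gradient-reversal sketch in PyTorch; it is not any specific paper's architecture, and all layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DANN(nn.Module):
    """Minimal DANN-style model: shared encoder, task head, and domain discriminator."""
    def __init__(self, in_dim=16, feat_dim=64, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.task_head = nn.Linear(feat_dim, n_classes)
        self.domain_head = nn.Linear(feat_dim, 2)  # source vs. target

    def forward(self, x, lambd=1.0):
        feat = self.encoder(x)
        task_logits = self.task_head(feat)
        # Domain classification through the gradient-reversal layer drives the
        # encoder toward domain-invariant features.
        domain_logits = self.domain_head(GradReverse.apply(feat, lambd))
        return task_logits, domain_logits
```

A discriminator trained through the reversed gradients pushes the encoder toward features whose distributions match across domains, which is exactly the instance-dependent alignment that SuDA's support-based formulation avoids.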
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.