Inter-Subject Variance Transfer Learning for EMG Pattern Classification Based on Bayesian Inference
- URL: http://arxiv.org/abs/2505.15381v1
- Date: Wed, 21 May 2025 11:18:39 GMT
- Title: Inter-Subject Variance Transfer Learning for EMG Pattern Classification Based on Bayesian Inference
- Authors: Seitaro Yoneda, Akira Furui
- Abstract summary: In electromyogram (EMG)-based motion recognition, a subject-specific classifier is typically trained with sufficient labeled data. This paper proposes an inter-subject variance transfer learning method based on a Bayesian approach.
- Score: 2.209921757303168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In electromyogram (EMG)-based motion recognition, a subject-specific classifier is typically trained with sufficient labeled data. However, this process demands extensive data collection over extended periods, burdening the subject. To address this, utilizing information from pre-training on multiple subjects for the training of the target subject could be beneficial. This paper proposes an inter-subject variance transfer learning method based on a Bayesian approach. This method is founded on the simple hypothesis that while the means of EMG features vary greatly across subjects, their variances may exhibit similar patterns. Our approach transfers variance information, acquired through pre-training on multiple source subjects, to a target subject within a Bayesian updating framework, thereby allowing accurate classification using limited target calibration data. A coefficient was also introduced to adjust the amount of information transferred for efficient transfer learning. Experimental evaluations using two EMG datasets demonstrated the effectiveness of our variance transfer strategy and its superiority compared to existing methods.
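The variance-transfer idea in the abstract can be illustrated with a minimal sketch: a diagonal-Gaussian classifier whose per-class variances are shrunk toward variances pooled from source subjects via a conjugate-style Bayesian update, with a coefficient scaling how strongly the prior is weighted. This is an assumption-laden illustration, not the paper's exact model; the names `transfer_coef` and `pseudo_count` are hypothetical, not the authors' notation.

```python
import numpy as np

class VarianceTransferGaussianClassifier:
    """Diagonal-Gaussian classifier with variance transfer (sketch).

    Class means are estimated from the (limited) target calibration
    data, while class variances are a Bayesian-style blend of the
    target sample variance and a variance transferred from source
    subjects. `transfer_coef` loosely plays the role of the paper's
    adjustment coefficient by scaling the prior's pseudo-count.
    """

    def __init__(self, source_var, transfer_coef=1.0, pseudo_count=10.0):
        # source_var: (n_classes, n_features) variances pooled from source subjects
        self.source_var = np.asarray(source_var, dtype=float)
        self.kappa = transfer_coef * pseudo_count  # effective prior strength

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        post_var = []
        for i, c in enumerate(self.classes_):
            Xc = X[y == c]
            ss = ((Xc - self.means_[i]) ** 2).sum(axis=0)  # sum of squares
            # Conjugate-style update: prior (transferred) variance weighted by
            # kappa pseudo-observations, combined with the target data.
            post_var.append((self.kappa * self.source_var[i] + ss)
                            / (self.kappa + len(Xc)))
        self.vars_ = np.stack(post_var)
        return self

    def predict(self, X):
        # Gaussian log-likelihood per class (equal class priors assumed).
        ll = -0.5 * (
            np.log(self.vars_).sum(axis=1)[None, :]
            + (((X[:, None, :] - self.means_[None]) ** 2)
               / self.vars_[None]).sum(axis=2)
        )
        return self.classes_[ll.argmax(axis=1)]
```

With `transfer_coef=0` the model falls back to purely target-estimated variances; larger values trust the transferred variances more, which is exactly the regime the abstract targets when target calibration data is scarce.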
Related papers
- Transductive Model Selection under Prior Probability Shift [49.56191463229252]
Transductive learning is a supervised machine learning task in which the unlabelled data that require labelling are a finite set and are available at training time. We propose a method, tailored to transductive classification contexts, for performing model selection when the data exhibit prior probability shift.
arXiv Detail & Related papers (2025-07-30T13:03:24Z) - Improving the Evaluation and Actionability of Explanation Methods for Multivariate Time Series Classification [4.588028371034407]
We focus on analyzing InterpretTime, a recent evaluation methodology for attribution methods applied to MTSC.
We showcase some significant weaknesses of the original methodology and propose ideas to improve its accuracy and efficiency.
We find that perturbation-based methods such as SHAP and Feature Ablation work well across a set of datasets.
arXiv Detail & Related papers (2024-06-18T11:18:46Z) - Task-customized Masked AutoEncoder via Mixture of Cluster-conditional Experts [104.9871176044644]
Masked Autoencoder (MAE) is a prevailing self-supervised learning method that achieves promising results in model pre-training.
We propose a novel MAE-based pre-training paradigm, Mixture of Cluster-conditional Experts (MoCE).
MoCE trains each expert only with semantically relevant images by using cluster-conditional gates.
arXiv Detail & Related papers (2024-02-08T03:46:32Z) - On the Trade-off of Intra-/Inter-class Diversity for Supervised Pre-training [72.8087629914444]
We study the impact of the trade-off between the intra-class diversity (the number of samples per class) and the inter-class diversity (the number of classes) of a supervised pre-training dataset.
With the size of the pre-training dataset fixed, the best downstream performance comes with a balance on the intra-/inter-class diversity.
arXiv Detail & Related papers (2023-05-20T16:23:50Z) - Adaptive Distribution Calibration for Few-Shot Learning with Hierarchical Optimal Transport [78.9167477093745]
We propose a novel distribution calibration method by learning the adaptive weight matrix between novel samples and base classes.
Experimental results on standard benchmarks demonstrate that our proposed plug-and-play model outperforms competing approaches.
arXiv Detail & Related papers (2022-10-09T02:32:57Z) - Adaptive Dimension Reduction and Variational Inference for Transductive Few-Shot Classification [2.922007656878633]
We propose a new clustering method based on Variational Bayesian inference, further improved by Adaptive Dimension Reduction.
Our proposed method significantly improves accuracy in the realistic unbalanced transductive setting on various Few-Shot benchmarks.
arXiv Detail & Related papers (2022-09-18T10:29:02Z) - A Variational Bayesian Approach to Learning Latent Variables for Acoustic Knowledge Transfer [55.20627066525205]
We propose a variational Bayesian (VB) approach to learning distributions of latent variables in deep neural network (DNN) models.
Our proposed VB approach can obtain good improvements on target devices, and consistently outperforms 13 state-of-the-art knowledge transfer algorithms.
arXiv Detail & Related papers (2021-10-16T15:54:01Z) - Learning Robust Variational Information Bottleneck with Reference [12.743882133781598]
We propose a new approach to train a variational information bottleneck (VIB) that improves its robustness to adversarial perturbations.
We refine the categorical class information in the training phase with soft labels which are obtained from a pre-trained reference neural network.
arXiv Detail & Related papers (2021-04-29T14:46:09Z) - Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm introducing the idea of Target-awareness REpresentation Disentanglement (TRED).
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer when fine-tuning the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z) - Bayesian Few-Shot Classification with One-vs-Each Pólya-Gamma Augmented Gaussian Processes [7.6146285961466]
Few-shot classification (FSC) is an important step on the path toward human-like machine learning.
We propose a novel combination of Pólya-Gamma augmentation and the one-vs-each softmax approximation that allows us to efficiently marginalize over functions rather than model parameters.
We demonstrate improved accuracy and uncertainty quantification on both standard few-shot classification benchmarks and few-shot domain transfer tasks.
arXiv Detail & Related papers (2020-07-20T19:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.