Transfer Learning on Electromyography (EMG) Tasks: Approaches and Beyond
- URL: http://arxiv.org/abs/2210.06295v2
- Date: Thu, 13 Oct 2022 03:41:11 GMT
- Title: Transfer Learning on Electromyography (EMG) Tasks: Approaches and Beyond
- Authors: Di Wu and Jie Yang and Mohamad Sawan
- Abstract summary: This survey aims to provide insight into the biological foundations of existing transfer learning methods for EMG-related analysis.
We first introduce the physiological structure of the muscles and the EMG generating mechanism.
We categorize existing research endeavors into data-based, model-based, training-scheme-based, and adversarial-based approaches.
- Score: 8.167024471353
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning on electromyography (EMG) has recently achieved remarkable
success on a variety of tasks, while such success relies heavily on the
assumption that the training and future data must be of the same data
distribution. However, this assumption may not hold in many real-world
applications. Model calibration is required via data re-collection and label
annotation, which is generally very expensive and time-consuming. To address
this problem, transfer learning (TL), which aims to improve target learners'
performance by transferring the knowledge from related source domains, is
emerging as a new paradigm to reduce the amount of calibration effort. In this
survey, we assess the eligibility of more than fifty published, peer-reviewed
representative transfer learning approaches for EMG applications. Unlike
previous surveys on transfer learning alone or on EMG-based machine learning,
this survey aims to provide insight into the biological foundations of
existing transfer learning methods for EMG-related analysis. Specifically, we
first introduce the physiological structure of the muscles, the EMG
generation mechanism, and the recording of EMG to provide the biological
insights behind existing transfer learning approaches. We then categorize
existing research endeavors into data-based, model-based, training-scheme-based,
and adversarial-based approaches. This survey systematically summarizes and
categorizes existing transfer learning approaches for EMG-related machine
learning applications. In addition, we discuss possible drawbacks of existing
works and point out future directions for better EMG transfer learning
algorithms to enhance their practicality in real-world applications.
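As an illustration of the model-based transfer learning setting the survey describes, the sketch below (not taken from any of the surveyed papers; all names and dimensions are hypothetical) freezes a feature extractor notionally "pretrained" on source-subject EMG data and fits only a small classification head on a new subject's limited calibration set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: W_src stands in for a feature extractor pretrained on
# EMG from source subjects; it stays frozen, and only a linear head is fit
# on the target subject's small calibration set (model-based TL sketch).
n_feat, n_hidden, n_cls = 16, 8, 3
W_src = rng.normal(size=(n_feat, n_hidden))      # frozen "pretrained" weights

X_tgt = rng.normal(size=(30, n_feat))            # tiny target calibration set
y_tgt = rng.integers(0, n_cls, size=30)          # gesture labels (toy data)

def features(X):
    return np.tanh(X @ W_src)                    # frozen feature extractor

def fit_head(X, y, lr=0.1, steps=200):
    """Softmax regression on the head only; the extractor is untouched."""
    H = features(X)
    W = np.zeros((n_hidden, n_cls))
    Y = np.eye(n_cls)[y]                         # one-hot labels
    for _ in range(steps):
        logits = H @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * H.T @ (p - Y) / len(y)         # gradient step on head only
    return W

W_head = fit_head(X_tgt, y_tgt)
acc = (np.argmax(features(X_tgt) @ W_head, axis=1) == y_tgt).mean()
```

Because only the head is re-estimated, the calibration burden scales with the head's parameter count rather than the full model, which is the practical point the abstract makes about reducing recalibration effort.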
Related papers
- Machine Learning Innovations in CPR: A Comprehensive Survey on Enhanced Resuscitation Techniques [52.71395121577439]
This survey paper explores the transformative role of Machine Learning (ML) and Artificial Intelligence (AI) in Cardiopulmonary Resuscitation (CPR).
It highlights the impact of predictive modeling, AI-enhanced devices, and real-time data analysis in improving resuscitation outcomes.
The paper provides a comprehensive overview, classification, and critical analysis of current applications, challenges, and future directions in this emerging field.
arXiv Detail & Related papers (2024-11-03T18:01:50Z) - Recent Advances on Machine Learning for Computational Fluid Dynamics: A Survey [51.87875066383221]
This paper introduces fundamental concepts, traditional methods, and benchmark datasets, then examines the various roles Machine Learning plays in improving CFD.
We highlight real-world applications of ML for CFD in critical scientific and engineering disciplines, including aerodynamics, combustion, atmosphere & ocean science, biological fluids, plasma, symbolic regression, and reduced-order modeling.
We draw the conclusion that ML is poised to significantly transform CFD research by enhancing simulation accuracy, reducing computational time, and enabling more complex analyses of fluid dynamics.
arXiv Detail & Related papers (2024-08-22T07:33:11Z) - Enhancing Generative Class Incremental Learning Performance with Model Forgetting Approach [50.36650300087987]
This study presents a novel approach to Generative Class Incremental Learning (GCIL) by introducing the forgetting mechanism.
We have found that integrating the forgetting mechanism significantly enhances the models' performance in acquiring new knowledge.
arXiv Detail & Related papers (2024-03-27T05:10:38Z) - Knowledge-guided EEG Representation Learning [27.8095014391814]
Self-supervised learning has produced impressive results in multimedia domains of audio, vision and speech.
We propose a self-supervised model for EEG, which provides robust performance and remarkable parameter efficiency.
We also propose a novel knowledge-guided pre-training objective that accounts for the idiosyncrasies of the EEG signal.
arXiv Detail & Related papers (2024-02-15T01:52:44Z) - Online Transfer Learning for RSV Case Detection [6.3076606245690385]
We introduce Multi-Source Adaptive Weighting (MSAW), an online multi-source transfer learning method.
MSAW integrates a dynamic weighting mechanism into an ensemble framework, enabling automatic adjustment of weights.
We demonstrate the effectiveness of MSAW by applying it to detect Respiratory Syncytial Virus cases within Emergency Department visits.
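The dynamic-weighting idea behind an online multi-source ensemble like MSAW can be sketched as follows (an illustrative multiplicative-weights toy, not the authors' code; the update rule and names are assumptions):

```python
import numpy as np

# Illustrative sketch: several source models vote on each incoming target
# sample; models that err are exponentially down-weighted, so the ensemble
# adapts its weights online as target data arrives.
def online_weighted_ensemble(preds, labels, eta=0.5):
    """preds: (n_models, n_steps) 0/1 predictions; labels: (n_steps,)."""
    n_models, n_steps = preds.shape
    w = np.ones(n_models) / n_models             # start with uniform weights
    out = []
    for t in range(n_steps):
        vote = (w @ preds[:, t]) >= 0.5          # weighted majority vote
        out.append(int(vote))
        losses = (preds[:, t] != labels[t])      # which models erred at t
        w *= np.exp(-eta * losses)               # exponential down-weighting
        w /= w.sum()                             # renormalize
    return np.array(out), w

# Toy run: model 0 is always right, model 2 always wrong.
preds = np.array([[1, 1, 1, 1], [0, 1, 0, 1], [0, 0, 0, 0]])
labels = np.array([1, 1, 1, 1])
out, w = online_weighted_ensemble(preds, labels)
```

After the run, the consistently correct source model carries the largest weight, which is the behavior the abstract describes as "automatic adjustment of weights."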
arXiv Detail & Related papers (2024-02-03T02:13:08Z) - EEGFormer: Towards Transferable and Interpretable Large-Scale EEG Foundation Model [39.363511340878624]
We present a novel EEG foundation model, namely EEGFormer, pretrained on large-scale compound EEG data.
To validate the effectiveness of our model, we extensively evaluate it on various downstream tasks and assess the performance under different transfer settings.
arXiv Detail & Related papers (2024-01-11T17:36:24Z) - A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods has been the use of a memory of exemplars, which overcomes the issue of catastrophic forgetting by saving a subset of past data into a memory bank and utilizing it to prevent forgetting when training future tasks.
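The exemplar-memory mechanism described above can be sketched minimally as follows (an illustrative toy, not the paper's Memory Transformer; the class structure and sample names are hypothetical):

```python
import random

# Minimal sketch of exemplar replay for class-incremental learning: keep a
# small per-class memory of past samples and mix them into each new-task
# batch so earlier classes are not catastrophically forgotten.
class ExemplarMemory:
    def __init__(self, per_class=2):
        self.per_class = per_class
        self.store = {}                          # class label -> kept samples

    def add(self, samples, label):
        # Real methods select exemplars carefully (e.g. herding); truncation
        # stands in for that selection step here.
        self.store[label] = samples[: self.per_class]

    def replay_batch(self, new_batch, k=2):
        old = [(x, c) for c, xs in self.store.items() for x in xs]
        return new_batch + random.sample(old, min(k, len(old)))

mem = ExemplarMemory(per_class=2)
mem.add(["emg_a1", "emg_a2", "emg_a3"], label="fist")
mem.add(["emg_b1"], label="pinch")
batch = mem.replay_batch([("emg_c1", "wave"), ("emg_c2", "wave")], k=2)
```

Training on such mixed batches is what "utilizing it to prevent forgetting when training future tasks" amounts to in practice.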
arXiv Detail & Related papers (2022-10-10T08:27:28Z) - Deep Transfer-Learning for patient specific model re-calibration: Application to sEMG-Classification [0.2676349883103404]
Machine-learning-based sEMG decoders are either trained on subject-specific data or at least recalibrated for each user individually.
Due to the limited availability of sEMG data, deep learning models are prone to overfitting.
Recently, transfer learning for domain adaptation improved generalization quality with reduced training time.
arXiv Detail & Related papers (2021-12-30T11:35:53Z) - Transformers for prompt-level EMA non-response prediction [62.41658786277712]
Ecological Momentary Assessments (EMAs) are an important psychological data source for measuring cognitive states, affect, behavior, and environmental factors.
Non-response, in which participants fail to respond to EMA prompts, is an endemic problem.
The ability to accurately predict non-response could be utilized to improve EMA delivery and develop compliance interventions.
arXiv Detail & Related papers (2021-11-01T18:38:47Z) - Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
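The zeroth-order idea underlying this kind of black-box reprogramming can be sketched as follows (a hedged toy: the quadratic "model" and all names are assumptions standing in for an opaque ML system queried only through its input-output responses):

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box_loss(x):
    """Stand-in for an opaque model's loss, observable only via queries."""
    return float(np.sum((x - 3.0) ** 2))

def zeroth_order_grad(f, x, mu=1e-3, n_dirs=20):
    """Random-direction gradient estimate from function evaluations only."""
    g = np.zeros_like(x)
    f0 = f(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)             # random probe direction
        g += (f(x + mu * u) - f0) / mu * u       # directional difference
    return g / n_dirs

# Optimize an input perturbation (the "program") with no access to model
# internals -- only input-output queries, as in black-box reprogramming.
x = np.zeros(4)
for _ in range(300):
    x -= 0.05 * zeroth_order_grad(black_box_loss, x)
```

The estimator trades query count for gradient accuracy, which is why such methods work under "scarce data and limited resources" but at the cost of many model queries.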
arXiv Detail & Related papers (2020-07-17T01:52:34Z) - On the application of transfer learning in prognostics and health management [0.0]
Data availability has encouraged researchers and industry practitioners to rely on data-based machine learning and deep learning models for fault diagnostics and prognostics more than ever.
These models provide unique advantages, however, their performance is heavily dependent on the training data and how well that data represents the test data.
Transfer learning is an approach that can remedy this issue by keeping portions of what is learned from previous training and transferring them to the new application.
arXiv Detail & Related papers (2020-07-03T23:35:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.