Exoskeleton-Based Multimodal Action and Movement Recognition:
Identifying and Developing the Optimal Boosted Learning Approach
- URL: http://arxiv.org/abs/2106.10331v1
- Date: Fri, 18 Jun 2021 19:43:54 GMT
- Title: Exoskeleton-Based Multimodal Action and Movement Recognition:
Identifying and Developing the Optimal Boosted Learning Approach
- Authors: Nirmalya Thakur and Chia Y. Han
- Abstract summary: This paper makes two scientific contributions to the field of exoskeleton-based action and movement recognition.
It presents a novel machine learning and pattern recognition-based framework that can detect a wide range of actions and movements.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper makes two scientific contributions to the field of
exoskeleton-based action and movement recognition. First, it presents a novel
machine learning and pattern recognition-based framework that can detect a wide
range of actions and movements - walking, walking upstairs, walking downstairs,
sitting, standing, lying, stand to sit, sit to stand, sit to lie, lie to sit,
stand to lie, and lie to stand, with an overall accuracy of 82.63%. Second, it
presents a comprehensive comparative study of different learning approaches -
Random Forest, Artificial Neural Network, Decision Tree, Multiway Decision
Tree, Support Vector Machine, k-NN, Gradient Boosted Trees, Decision Stump,
Auto MLP, Linear Regression, Vector Linear Regression, Random Tree, Naïve
Bayes, Naïve Bayes (Kernel), Linear Discriminant Analysis, Quadratic
Discriminant Analysis, and Deep Learning applied to this framework. The
performance of each of these learning approaches was boosted by using the
AdaBoost algorithm, and the Cross Validation approach was used for training and
testing. The results show that in boosted form, the k-NN classifier
outperforms all the other boosted learning approaches and is, therefore, the
optimal learning method for this purpose. The results presented and discussed
underscore the relevance of this work for augmenting the abilities of
exoskeleton-based assisted and independent living of the elderly in the future
of Internet of Things-based living environments, such as Smart Homes. As a
specific use case, we also discuss how the findings of our work are relevant
for augmenting the capabilities of the Hybrid Assistive Limb exoskeleton, a
highly functional lower-limb exoskeleton.
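The evaluation pattern the abstract describes (a base learner boosted with AdaBoost and scored via cross-validation) can be sketched with scikit-learn. This is a minimal, hypothetical illustration, not the authors' implementation: synthetic data stands in for the exoskeleton sensor features and the twelve activity classes, and the sketch uses AdaBoost's default decision-stump base learner (Decision Stump is among the compared approaches) because scikit-learn's AdaBoost requires base estimators that accept sample weights, which rules out plugging in plain k-NN directly.

```python
# Hypothetical sketch of the paper's evaluation pattern: an AdaBoost-boosted
# classifier scored with k-fold cross-validation. Synthetic data stands in
# for the exoskeleton sensor features and the twelve activity classes.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix: 600 samples, 24 features, 12 activity classes.
X, y = make_classification(
    n_samples=600, n_features=24, n_informative=12,
    n_classes=12, n_clusters_per_class=1, random_state=0,
)

# AdaBoost with its default base learner, a decision stump. Note that
# scikit-learn's AdaBoost needs sample-weight-aware base estimators, so
# a k-NN base learner (the paper's best performer) would need a custom
# weight-aware wrapper and is not shown here.
boosted = AdaBoostClassifier(n_estimators=50, random_state=0)

# 10-fold cross-validation for training and testing, mirroring the
# protocol described in the abstract.
scores = cross_val_score(boosted, X, y, cv=10)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```

The same scaffold extends to the other learners the paper compares: swap the `estimator` argument of `AdaBoostClassifier` for any weight-aware classifier and rerun `cross_val_score` to reproduce a comparative table.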
Related papers
- Skeleton2vec: A Self-supervised Learning Framework with Contextualized
Target Representations for Skeleton Sequence [56.092059713922744]
We show that using high-level contextualized features as prediction targets can achieve superior performance.
Specifically, we propose Skeleton2vec, a simple and efficient self-supervised 3D action representation learning framework.
Our proposed Skeleton2vec outperforms previous methods and achieves state-of-the-art results.
arXiv Detail & Related papers (2024-01-01T12:08:35Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal, and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- Video-based Contrastive Learning on Decision Trees: from Action Recognition to Autism Diagnosis [17.866016075963437]
We present a new contrastive learning-based framework for decision tree-based classification of actions.
The key idea is to translate the original multi-class action recognition into a series of binary classification tasks on a pre-constructed decision tree.
We have demonstrated the promising performance of video-based autism spectrum disorder diagnosis on the CalTech interview video database.
arXiv Detail & Related papers (2023-04-20T04:02:04Z)
- Skeleton-based Human Action Recognition via Convolutional Neural Networks (CNN) [4.598337780022892]
Most state-of-the-art contributions in skeleton-based action recognition incorporate a Graph Convolutional Network (GCN) architecture for representing the human body and extracting features.
Our research demonstrates that Convolutional Neural Networks (CNNs) can attain comparable results to GCNs, provided that proper training techniques and augmentations are applied.
arXiv Detail & Related papers (2023-01-31T01:26:17Z)
- Machine Learning Approach for Predicting Students Academic Performance and Study Strategies based on their Motivation [0.0]
This research aims to develop machine learning models for predicting students' academic performance and study strategies.
Key learning attributes (intrinsic, extrinsic, autonomy, relatedness, competence, and self-esteem) essential for students' learning process were used in building the models.
arXiv Detail & Related papers (2022-10-15T04:09:05Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- ROIAL: Region of Interest Active Learning for Characterizing Exoskeleton Gait Preference Landscapes [64.87637128500889]
The Region of Interest Active Learning (ROIAL) framework actively learns each user's underlying utility function over a region of interest.
ROIAL learns from ordinal and preference feedback, which are more reliable feedback mechanisms than absolute numerical scores.
Results demonstrate the feasibility of recovering gait utility landscapes from limited human trials.
arXiv Detail & Related papers (2020-11-09T22:45:58Z)
- MS$^2$L: Multi-Task Self-Supervised Learning for Skeleton Based Action Recognition [36.74293548921099]
We integrate motion prediction, jigsaw puzzle recognition, and contrastive learning to learn skeleton features from different aspects.
Our experiments on the NW-UCLA, NTU RGB+D, and PKUMMD datasets show remarkable performance for action recognition.
arXiv Detail & Related papers (2020-10-12T11:09:44Z)
- Automatic Gesture Recognition in Robot-assisted Surgery with Reinforcement Learning and Tree Search [63.07088785532908]
We propose a framework based on reinforcement learning and tree search for joint surgical gesture segmentation and classification.
Our framework consistently outperforms the existing methods on the suturing task of JIGSAWS dataset in terms of accuracy, edit score and F1 score.
arXiv Detail & Related papers (2020-02-20T13:12:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.