Feature selection for gesture recognition in Internet-of-Things for
healthcare
- URL: http://arxiv.org/abs/2005.11031v1
- Date: Fri, 22 May 2020 06:54:53 GMT
- Authors: Giulia Cisotto, Martina Capuzzo, Anna V. Guglielmi, Andrea Zanella
- Abstract summary: In the context of gesture recognition, EEG and EMG can be recorded simultaneously to identify the gesture being performed and the quality of its execution.
This paper proposes a new algorithm that aims (i) to robustly extract the most relevant features to classify different grasping tasks, and (ii) to retain the natural meaning of the selected features.
- Score: 10.155382321743181
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Internet of Things is rapidly spreading across several fields, including
healthcare, raising relevant questions about communication capabilities,
energy efficiency, and sensor unobtrusiveness. In particular, in the context of
gesture recognition, e.g., the grasping of different objects, brain and
muscular activity can be recorded simultaneously via EEG and EMG,
respectively, and analyzed to identify the gesture being performed
and the quality of its execution. This paper proposes a new algorithm that
aims (i) to robustly extract the most relevant features to classify different
grasping tasks, and (ii) to retain the natural meaning of the selected
features. This, in turn, makes it possible to simplify the recording setup,
minimize the data traffic over the communication network, including the
Internet, and provide physiologically significant features for medical
interpretation. The robustness of the algorithm is ensured both by consensus
clustering as a feature selection strategy and by a nested cross-validation
scheme to evaluate its classification performance.
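The two ideas named in the abstract can be illustrated together: feature selection is repeated across inner folds and only features chosen consistently are kept (a stability-based stand-in for the paper's consensus clustering), while an outer cross-validation loop evaluates classification so the feature choice never sees the test fold. The toy data, scoring function, and nearest-mean classifier below are illustrative assumptions, not the authors' implementation:

```python
import random

random.seed(0)

# Toy stand-in for EEG/EMG feature vectors: feature 0 separates the
# two classes, features 1-2 are pure noise. Not the paper's real data.
def make_sample(label):
    informative = label * 2.0 + random.gauss(0, 0.3)
    return ([informative, random.gauss(0, 1), random.gauss(0, 1)], label)

data = [make_sample(lbl) for lbl in (0, 1) * 20]  # 40 labeled samples

def k_folds(items, k):
    """Split items into k interleaved folds (items assumed shuffled)."""
    return [items[i::k] for i in range(k)]

def score_feature(train, f):
    """Absolute difference of the two class means: a crude relevance score."""
    by_class = {}
    for x, y in train:
        by_class.setdefault(y, []).append(x[f])
    m = [sum(v) / len(v) for v in by_class.values()]
    return abs(m[0] - m[1])

def select_consensus(train, k_inner=3, keep=1):
    """Keep features top-ranked in a majority of inner folds (stability)."""
    votes = [0] * len(train[0][0])
    folds = k_folds(train, k_inner)
    for i in range(k_inner):
        inner_train = [s for j, f in enumerate(folds) if j != i for s in f]
        ranked = sorted(range(len(votes)),
                        key=lambda f: -score_feature(inner_train, f))
        for f in ranked[:keep]:
            votes[f] += 1
    return [f for f, v in enumerate(votes) if v > k_inner // 2]

def nearest_mean_classify(train, test_x, feats):
    """Assign test_x to the class whose centroid (on feats) is closest."""
    rows_by_class = {}
    for x, y in train:
        rows_by_class.setdefault(y, []).append([x[f] for f in feats])
    best, best_d = None, float("inf")
    for y, rows in rows_by_class.items():
        c = [sum(col) / len(col) for col in zip(*rows)]
        d = sum((a - b) ** 2 for a, b in zip([test_x[f] for f in feats], c))
        if d < best_d:
            best, best_d = y, d
    return best

# Outer loop: selection happens inside each outer training fold only,
# so the reported accuracy is not biased by the feature choice.
random.shuffle(data)
outer = k_folds(data, 5)
correct = total = 0
for i in range(5):
    train = [s for j, f in enumerate(outer) if j != i for s in f]
    feats = select_consensus(train)
    for x, y in outer[i]:
        correct += (nearest_mean_classify(train, x, feats) == y)
        total += 1
print(f"selected features: {feats}, outer-CV accuracy: {correct / total:.2f}")
```

On this toy data the informative feature (index 0) wins the consensus vote in every inner fold, which is exactly the stability property the selection step is meant to provide.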
Related papers
- YOLO-MED : Multi-Task Interaction Network for Biomedical Images [18.535117490442953]
YOLO-Med is an efficient end-to-end multi-task network capable of concurrently performing object detection and semantic segmentation.
Our model exhibits promising results in balancing accuracy and speed when evaluated on the Kvasir-seg dataset and a private biomedical image dataset.
arXiv Detail & Related papers (2024-03-01T03:20:42Z)
- Graph Convolutional Network with Connectivity Uncertainty for EEG-based Emotion Recognition [20.655367200006076]
This study introduces the distribution-based uncertainty method to represent spatial dependencies and temporal-spectral relativeness in EEG signals.
The graph mixup technique is employed to enhance latent connected edges and mitigate noisy label issues.
We evaluate our approach on two widely used datasets, namely SEED and SEEDIV, for emotion recognition tasks.
arXiv Detail & Related papers (2023-10-22T03:47:11Z)
- Real-time landmark detection for precise endoscopic submucosal dissection via shape-aware relation network [51.44506007844284]
We propose a shape-aware relation network for accurate and real-time landmark detection in endoscopic submucosal dissection surgery.
We first devise an algorithm to automatically generate relation keypoint heatmaps, which intuitively represent the prior knowledge of spatial relations among landmarks.
We then develop two complementary regularization schemes to progressively incorporate the prior knowledge into the training process.
arXiv Detail & Related papers (2021-11-08T07:57:30Z)
- Human Haptic Gesture Interpretation for Robotic Systems [3.888848425698769]
Physical human-robot interactions (pHRI) are less efficient and communicative than human-human interactions.
A key reason is a lack of informative sense of touch in robotic systems.
This work presents four proposed touch gesture classes that cover the majority of the gesture characteristics identified in the literature.
arXiv Detail & Related papers (2020-12-03T14:33:57Z)
- Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online approach of multi-modal graph network (i.e., MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z)
- Towards Interaction Detection Using Topological Analysis on Neural Networks [55.74562391439507]
In neural networks, any interacting features must follow a strongly weighted connection to common hidden units.
We propose a new measure for quantifying interaction strength, based upon the well-received theory of persistent homology.
A Persistence Interaction detection(PID) algorithm is developed to efficiently detect interactions.
arXiv Detail & Related papers (2020-10-25T02:15:24Z)
- Ventral-Dorsal Neural Networks: Object Detection via Selective Attention [51.79577908317031]
We propose a new framework called Ventral-Dorsal Networks (VDNets).
Inspired by the structure of the human visual system, we propose the integration of a "Ventral Network" and a "Dorsal Network".
Our experimental results reveal that the proposed method outperforms state-of-the-art object detection approaches.
arXiv Detail & Related papers (2020-05-15T23:57:36Z)
- Few-Shot Relation Learning with Attention for EEG-based Motor Imagery Classification [11.873435088539459]
Brain-Computer Interfaces (BCI) based on Electroencephalography (EEG) signals have received a lot of attention.
Motor imagery (MI) data can be used to aid rehabilitation as well as in autonomous driving scenarios.
Classification of MI signals is vital for EEG-based BCI systems.
arXiv Detail & Related papers (2020-03-03T02:34:44Z)
- Cross-modality Person re-identification with Shared-Specific Feature Transfer [112.60513494602337]
Cross-modality person re-identification (cm-ReID) is a challenging but key technology for intelligent video analysis.
We propose a novel cross-modality shared-specific feature transfer algorithm (termed cm-SSFT) to explore the potential of both the modality-shared information and the modality-specific characteristics.
arXiv Detail & Related papers (2020-02-28T00:18:45Z)
- Panoptic Feature Fusion Net: A Novel Instance Segmentation Paradigm for Biomedical and Biological Images [91.41909587856104]
We present a Panoptic Feature Fusion Net (PFFNet) that unifies the semantic and instance features in this work.
Our proposed PFFNet contains a residual attention feature fusion mechanism to incorporate the instance prediction with the semantic features.
It outperforms several state-of-the-art methods on various biomedical and biological datasets.
arXiv Detail & Related papers (2020-02-15T09:19:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.