Jointly Modeling Spatio-Temporal Features of Tactile Signals for Action Classification
- URL: http://arxiv.org/abs/2404.15279v1
- Date: Sun, 21 Jan 2024 03:47:57 GMT
- Title: Jointly Modeling Spatio-Temporal Features of Tactile Signals for Action Classification
- Authors: Jimmy Lin, Junkai Li, Jiasi Gao, Weizhi Ma, Yang Liu
- Abstract summary: Tactile signals collected by wearable electronics are essential in modeling and understanding human behavior.
Existing action classification methods fail to capture the spatial and temporal features of tactile signals simultaneously.
We propose the Spatio-Temporal Aware tactility Transformer (STAT) to utilize continuous tactile signals for action classification.
- Score: 50.63919418371698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tactile signals collected by wearable electronics are essential in modeling and understanding human behavior. One of the main applications of tactile signals is action classification, especially in healthcare and robotics. However, existing tactile classification methods fail to capture the spatial and temporal features of tactile signals simultaneously, which results in sub-optimal performance. In this paper, we design the Spatio-Temporal Aware tactility Transformer (STAT) to utilize continuous tactile signals for action classification. We propose spatial and temporal embeddings along with a new temporal pretraining task in our model, which aims to enhance the transformer in modeling the spatio-temporal features of tactile signals. Specifically, the designed temporal pretraining task is to differentiate the time order of tubelet inputs to model the temporal properties explicitly. Experimental results on a public action classification dataset demonstrate that our model outperforms state-of-the-art methods in all metrics.
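The temporal pretraining task above, differentiating the time order of tubelet inputs, can be illustrated with a toy sketch: given time-ordered tubelet features, build input pairs labeled by whether their original order was preserved. This is a minimal illustration of the idea under that assumption, not the paper's implementation; `make_order_pairs` is a hypothetical helper.

```python
import numpy as np

def make_order_pairs(tubelets, rng):
    """Build binary temporal-order labels from a sequence of tubelet features.

    tubelets: array of shape (T, D), assumed ordered by time.
    Returns pairs of shape (N, 2, D) and labels: 1 if the first tubelet
    in a pair originally preceded the second, else 0.
    """
    T = len(tubelets)
    pairs, labels = [], []
    for i in range(T):
        for j in range(i + 1, T):
            if rng.random() < 0.5:
                pairs.append((tubelets[i], tubelets[j]))
                labels.append(1)  # original time order preserved
            else:
                pairs.append((tubelets[j], tubelets[i]))
                labels.append(0)  # time order swapped
    return np.stack(pairs), np.array(labels)

rng = np.random.default_rng(0)
tubelets = rng.normal(size=(4, 8))  # 4 tubelets with 8-dim features
pairs, labels = make_order_pairs(tubelets, rng)
```

A pretraining head would then be trained to predict these binary labels from the paired embeddings, forcing the encoder to represent temporal order explicitly.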
Related papers
- Enhancing Sequential Model Performance with Squared Sigmoid TanH (SST) Activation Under Data Constraints [0.0]
We propose squared Sigmoid TanH (SST) activation specifically tailored to enhance the learning capability of sequential models under data constraints.
SST applies mathematical squaring to amplify differences between strong and weak activations as signals propagate over time.
We evaluate SST-powered LSTMs and GRUs for diverse applications, such as sign language recognition, regression, and time-series classification tasks.
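The amplification effect of squaring described above can be seen with a small numeric sketch. The paper's exact gate formulation is not given in this summary, so `sst_gate` here simply squares a sigmoid output as an assumption; the point is only that squaring widens the gap between weak and strong activations.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sst_gate(x):
    # Squaring pushes weak activations toward 0 while strong
    # activations stay close to 1, sharpening the contrast.
    return sigmoid(x) ** 2

weak, strong = sigmoid(-1.0), sigmoid(3.0)
weak_sq, strong_sq = sst_gate(-1.0), sst_gate(3.0)

# The strong/weak ratio grows after squaring, i.e. the difference
# between strong and weak signals is amplified as they propagate.
```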
arXiv Detail & Related papers (2024-02-14T09:20:13Z) - Spatio-Temporal Branching for Motion Prediction using Motion Increments [55.68088298632865]
Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications.
Traditional methods rely on hand-crafted features and machine learning techniques.
We propose a novel spatio-temporal branching network using incremental information for HMP.
arXiv Detail & Related papers (2023-08-02T12:04:28Z) - STMT: A Spatial-Temporal Mesh Transformer for MoCap-Based Action Recognition [50.064502884594376]
We study the problem of human action recognition using motion capture (MoCap) sequences.
We propose a novel Spatial-Temporal Mesh Transformer (STMT) to directly model the mesh sequences.
The proposed method achieves state-of-the-art performance compared to skeleton-based and point-cloud-based models.
arXiv Detail & Related papers (2023-03-31T16:19:27Z) - DriPP: Driven Point Processes to Model Stimuli Induced Patterns in M/EEG Signals [62.997667081978825]
We develop a novel statistical point process model-called driven temporal point processes (DriPP)
We derive a fast and principled expectation-maximization (EM) algorithm to estimate the parameters of this model.
Results on standard MEG datasets demonstrate that our methodology reveals event-related neural responses.
arXiv Detail & Related papers (2021-12-08T13:07:21Z) - Sequence-to-Sequence Modeling for Action Identification at High Temporal Resolution [9.902223920743872]
We introduce a new action-recognition benchmark that includes subtle short-duration actions labeled at a high temporal resolution.
We show that current state-of-the-art models based on segmentation produce noisy predictions when applied to these data.
We propose a novel approach for high-resolution action identification, inspired by speech-recognition techniques.
arXiv Detail & Related papers (2021-11-03T21:06:36Z) - Learn to cycle: Time-consistent feature discovery for action recognition [83.43682368129072]
Generalizing over temporal variations is a prerequisite for effective action recognition in videos.
We introduce Squeeze and Recursion Temporal Gates (SRTG), an approach that favors temporal activations with potential variations.
We show consistent improvements when using SRTG blocks, with only a minimal increase in GFLOPs.
arXiv Detail & Related papers (2020-06-15T09:36:28Z) - Human Activity Recognition from Wearable Sensor Data Using Self-Attention [2.9023633922848586]
We present a self-attention based neural network model for activity recognition from body-worn sensor data.
We performed experiments on four popular publicly available HAR datasets: PAMAP2, Opportunity, Skoda and USC-HAD.
Our model achieves significant performance improvements over recent state-of-the-art models in both benchmark test-subject and leave-one-subject-out evaluations.
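As a rough illustration of applying self-attention to a window of body-worn sensor features, a single-head scaled dot-product attention can be sketched in NumPy as follows. This is a generic sketch, not the paper's architecture; the projection matrices and window size are arbitrary assumptions.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention (NumPy sketch).

    x: (T, D) window of sensor feature vectors; wq/wk/wv: (D, D) projections.
    Returns the attended features and the (T, T) attention weights.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(x.shape[1])
    # Softmax over the time axis so each step attends to every step.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(0)
T, D = 5, 8  # 5 time steps of 8-channel sensor features
x = rng.normal(size=(T, D))
out, attn = self_attention(x, *(rng.normal(size=(D, D)) for _ in range(3)))
```

Each row of `attn` is a probability distribution over time steps, which is what lets the model weigh distant sensor readings when classifying an activity.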
arXiv Detail & Related papers (2020-03-17T14:16:57Z) - On Feature Normalization and Data Augmentation [55.115583969831]
Moment Exchange encourages recognition models to also utilize the moment information of the learned features.
We replace the moments of the learned features of one training image by those of another, and also interpolate the target labels.
As our approach is fast, operates entirely in feature space, and mixes different signals than prior methods, one can effectively combine it with existing augmentation approaches.
arXiv Detail & Related papers (2020-02-25T18:59:05Z)
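The moment-replacement and label-interpolation steps described in the Moment Exchange summary above can be sketched as follows. This is a minimal NumPy illustration of the idea; `moment_exchange`, the `lam` interpolation weight, and the `eps` constant are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def moment_exchange(fa, fb, ya, yb, lam=0.9, eps=1e-5):
    """Replace the per-channel mean/std of features fa with those of fb,
    and interpolate the one-hot labels, mixup-style.

    fa, fb: (C, N) feature maps; ya, yb: one-hot label vectors.
    """
    mu_a, sd_a = fa.mean(axis=1, keepdims=True), fa.std(axis=1, keepdims=True) + eps
    mu_b, sd_b = fb.mean(axis=1, keepdims=True), fb.std(axis=1, keepdims=True) + eps
    mixed = (fa - mu_a) / sd_a * sd_b + mu_b  # fa's shape, fb's moments
    label = lam * ya + (1.0 - lam) * yb
    return mixed, label

rng = np.random.default_rng(0)
fa = rng.normal(size=(3, 16))
fb = rng.normal(1.0, 2.0, size=(3, 16))
ya, yb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mixed, label = moment_exchange(fa, fb, ya, yb)
```

Because the operation lives entirely in feature space, it composes naturally with pixel-space augmentations, which is the property the abstract highlights.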
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.