Simple Yet Surprisingly Effective Training Strategies for LSTMs in
Sensor-Based Human Activity Recognition
- URL: http://arxiv.org/abs/2212.13918v1
- Date: Fri, 23 Dec 2022 09:17:01 GMT
- Title: Simple Yet Surprisingly Effective Training Strategies for LSTMs in
Sensor-Based Human Activity Recognition
- Authors: Shuai Shao, Yu Guan, Xin Guan, Paolo Missier, Thomas Ploetz
- Abstract summary: This paper studies some LSTM training strategies for sporadic activity recognition.
We propose two simple yet effective LSTM variants, namely the delay model and the inverse model, for two SAR scenarios.
The promising results demonstrate the effectiveness of our approaches in HAR applications.
- Score: 14.95985947077388
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human Activity Recognition (HAR) is one of the core research areas in mobile
and wearable computing. With the application of deep learning (DL) techniques
such as CNNs, recognizing periodic or static activities (e.g., walking, lying,
cycling) has become a well-studied problem. What remains a major challenge is
the sporadic activity recognition (SAR) problem, where activities of interest
tend to be non-periodic and occur less frequently than the often large amount
of irrelevant background activities. Recent works suggested that sequential DL
models (such as LSTMs) have great potential for modeling non-periodic
behaviours, and in this paper we studied several LSTM training strategies for
SAR. Specifically, we proposed two simple yet effective LSTM variants, namely
the delay model and the inverse model, for two SAR scenarios (with and without
a time-critical requirement). For time-critical SAR, the delay model can
effectively exploit predefined delay intervals (within tolerance) in the form
of contextual information for improved performance. For the regular SAR task,
the second proposed variant, the inverse model, can learn patterns from the
time series in reverse order, which is complementary to the forward model
(i.e., a standard LSTM), and combining both can boost performance. These two
LSTM variants are very practical, and they can be treated as training
strategies that leave the LSTM fundamentals unaltered. We also studied some
additional LSTM training strategies that can further improve accuracy. We
evaluated our models on two SAR datasets and one non-SAR dataset, and the
promising results demonstrated the effectiveness of our approaches in HAR
applications.
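A minimal sketch of the delay-model idea described in the abstract, assuming a PyTorch LSTM classifier: the prediction for timestep t is emitted from the hidden state at t + delay, so a short window of "future" context (within the tolerated latency) is folded into the decision. The hyper-parameter name `delay` and this exact alignment are illustrative assumptions, not the paper's precise formulation.

```python
import torch
import torch.nn as nn

class DelayLSTM(nn.Module):
    """LSTM classifier whose per-timestep prediction is postponed by `delay` steps."""

    def __init__(self, in_dim, hidden_dim, n_classes, delay=5):
        super().__init__()
        self.delay = delay
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        # x: (batch, time, in_dim)
        h, _ = self.lstm(x)                 # (batch, time, hidden_dim)
        # Use the hidden state at t + delay to predict the label for timestep t,
        # i.e. the classifier has already seen `delay` extra frames of context.
        h_shifted = h[:, self.delay:, :]    # states aligned with labels 0..T-delay-1
        return self.head(h_shifted)
```

During training, the targets would correspondingly be truncated to `y[:, :T - delay]` so that each logit is matched with the label whose decision was postponed.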
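A minimal sketch of the inverse-model idea, assuming the forward LSTM and the time-reversed ("inverse") LSTM are trained jointly and their per-timestep logits are averaged; the averaging fusion is an illustrative assumption rather than the paper's exact recipe.

```python
import torch
import torch.nn as nn

class ForwardInverseLSTM(nn.Module):
    """Combines a forward LSTM with an LSTM run over the time-reversed sequence."""

    def __init__(self, in_dim, hidden_dim, n_classes):
        super().__init__()
        self.fwd = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.inv = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.head_fwd = nn.Linear(hidden_dim, n_classes)
        self.head_inv = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        # Forward pass over the sequence as-is.
        h_f, _ = self.fwd(x)
        # Inverse pass: reverse along time, run the LSTM, then reverse back so
        # the outputs stay aligned with the original timesteps.
        h_i, _ = self.inv(torch.flip(x, dims=[1]))
        h_i = torch.flip(h_i, dims=[1])
        # Fuse the complementary forward and inverse views by averaging logits.
        return 0.5 * (self.head_fwd(h_f) + self.head_inv(h_i))
```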
Related papers
- CSPLADE: Learned Sparse Retrieval with Causal Language Models [12.930248566238243]
We identify two challenges in training large language models (LLMs) for learned sparse retrieval (LSR).
We propose two corresponding techniques: (1) a lightweight adaptation training phase to eliminate training instability; (2) two model variants to enable bidirectional information flow.
With these techniques, we are able to train LSR models with 8B scale LLM, and achieve competitive retrieval performance with reduced index size.
arXiv Detail & Related papers (2025-04-15T02:31:34Z) - Do We Truly Need So Many Samples? Multi-LLM Repeated Sampling Efficiently Scales Test-Time Compute [55.330813919992465]
This paper presents a simple, effective, and cost-efficient strategy to improve LLM performance by scaling test-time compute.
Our strategy builds upon the repeated-sampling-then-voting framework, with a novel twist: incorporating multiple models, even weaker ones, to leverage their complementary strengths.
arXiv Detail & Related papers (2025-04-01T13:13:43Z) - If an LLM Were a Character, Would It Know Its Own Story? Evaluating Lifelong Learning in LLMs [55.8331366739144]
We introduce LIFESTATE-BENCH, a benchmark designed to assess lifelong learning in large language models (LLMs).
Our fact checking evaluation probes models' self-awareness, episodic memory retrieval, and relationship tracking, across both parametric and non-parametric approaches.
arXiv Detail & Related papers (2025-03-30T16:50:57Z) - Transfer Learning with Foundational Models for Time Series Forecasting using Low-Rank Adaptations [0.0]
This study proposes LLIAM, a methodology that is a straightforward adaptation of one kind of FM, Large Language Models, to the Time Series Forecasting task.
A comparison was made between the performance of LLIAM and different state-of-the-art DL algorithms, including Recurrent Neural Networks and Temporal Convolutional Networks, as well as an LLM-based method, TimeLLM.
The outcomes of this investigation demonstrate the efficacy of LLIAM, highlighting that this straightforward and general approach can attain competent results without the necessity for applying complex modifications.
arXiv Detail & Related papers (2024-10-15T12:14:01Z) - Unlocking the Power of LSTM for Long Term Time Series Forecasting [27.245021350821638]
We propose a simple yet efficient algorithm named P-sLSTM built upon sLSTM by incorporating patching and channel independence.
These modifications substantially enhance sLSTM's performance in TSF, achieving state-of-the-art results.
arXiv Detail & Related papers (2024-08-19T13:59:26Z) - Enhancing Sequential Model Performance with Squared Sigmoid TanH (SST)
Activation Under Data Constraints [0.0]
We propose squared Sigmoid TanH (SST) activation specifically tailored to enhance the learning capability of sequential models under data constraints.
SST applies mathematical squaring to amplify differences between strong and weak activations as signals propagate over time.
We evaluate SST-powered LSTMs and GRUs for diverse applications, such as sign language recognition, regression, and time-series classification tasks.
arXiv Detail & Related papers (2024-02-14T09:20:13Z) - Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that the proposed method, BAdam, achieves state-of-the-art performance among prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z) - Learning Objective-Specific Active Learning Strategies with Attentive
Neural Processes [72.75421975804132]
Learning Active Learning (LAL) suggests learning the active learning strategy itself, allowing it to adapt to the given setting.
We propose a novel LAL method for classification that exploits symmetry and independence properties of the active learning problem.
Our approach is based on learning from a myopic oracle, which gives our model the ability to adapt to non-standard objectives.
arXiv Detail & Related papers (2023-09-11T14:16:37Z) - Can recurrent neural networks learn process model structure? [0.2580765958706854]
We introduce an evaluation framework that combines variant-based resampling and custom metrics for fitness, precision and generalization.
We confirm that LSTMs can struggle to learn process model structure, even with simplistic process data.
We also found that decreasing the amount of information seen by the LSTM during training causes a sharp drop in generalization and precision scores.
arXiv Detail & Related papers (2022-12-13T08:40:01Z) - Learning Mixtures of Linear Dynamical Systems [94.49754087817931]
We develop a two-stage meta-algorithm to efficiently recover each ground-truth LDS model up to error $\tilde{O}(\sqrt{d/T})$.
We validate our theoretical studies with numerical experiments, confirming the efficacy of the proposed algorithm.
arXiv Detail & Related papers (2022-01-26T22:26:01Z) - A journey in ESN and LSTM visualisations on a language task [77.34726150561087]
We trained ESNs and LSTMs on a Cross-Situational Learning (CSL) task.
The results are of three kinds: performance comparison, internal dynamics analyses and visualization of latent space.
arXiv Detail & Related papers (2020-12-03T08:32:01Z) - Object Tracking through Residual and Dense LSTMs [67.98948222599849]
Deep learning-based trackers built on LSTM (Long Short-Term Memory) recurrent neural networks have emerged as a powerful alternative.
DenseLSTMs outperform Residual and regular LSTMs, and offer higher resilience to nuisances.
Our case study supports the adoption of residual-based RNNs for enhancing the robustness of other trackers.
arXiv Detail & Related papers (2020-06-22T08:20:17Z) - Achieving Online Regression Performance of LSTMs with Simple RNNs [0.0]
We introduce a first-order training algorithm with a linear time complexity in the number of parameters.
We show that when SRNNs are trained with our algorithm, they provide regression performance very similar to LSTMs in two to three times shorter training time.
arXiv Detail & Related papers (2020-05-16T11:41:13Z) - Sentiment Analysis Using Simplified Long Short-term Memory Recurrent
Neural Networks [1.5146765382501612]
We perform sentiment analysis on a GOP Debate Twitter dataset.
To speed up training and reduce computational cost and time, six different parameter-reduced slim versions of the LSTM model are proposed.
arXiv Detail & Related papers (2020-05-08T12:50:10Z) - Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)