LSTMSPLIT: Effective SPLIT Learning based LSTM on Sequential Time-Series
Data
- URL: http://arxiv.org/abs/2203.04305v1
- Date: Tue, 8 Mar 2022 11:44:12 GMT
- Title: LSTMSPLIT: Effective SPLIT Learning based LSTM on Sequential Time-Series
Data
- Authors: Lianlian Jiang, Yuexuan Wang, Wenyi Zheng, Chao Jin, Zengxiang Li, Sin
G. Teo
- Abstract summary: We propose a new approach, LSTMSPLIT, that uses SL architecture with an LSTM network to classify time-series data with multiple clients.
The proposed method, LSTMSPLIT, has achieved better or reasonable accuracy compared to the Split-1DCNN method using the electrocardiogram dataset and the human activity recognition dataset.
- Score: 3.9011223632827385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) and split learning (SL) are two popular
distributed machine learning (ML) approaches that provide some data privacy
protection mechanisms. In the time-series classification problem, many
researchers typically use 1D convolutional neural networks (1DCNNs) based on
the SL approach with a single client to reduce the computational overhead at
the client-side while still preserving data privacy. Another method, recurrent
neural network (RNN), is utilized on sequentially partitioned data where
segments of multiple-segment sequential data are distributed across various
clients. However, to the best of our knowledge, little work has been done on
SL with long short-term memory (LSTM) networks, even though the LSTM network is
practically effective in processing time-series data. In this work, we propose
a new approach, LSTMSPLIT, that uses an SL architecture with an LSTM network to
classify time-series data with multiple clients. Differential privacy (DP) is
applied to mitigate data privacy leakage. The proposed method, LSTMSPLIT,
has achieved better or reasonable accuracy compared to the Split-1DCNN method
using the electrocardiogram dataset and the human activity recognition dataset.
Furthermore, the proposed method, LSTMSPLIT, can also achieve good accuracy
after applying differential privacy to preserve user privacy at the cut layer
of LSTMSPLIT.
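To make the setup concrete, the following is a minimal sketch of such a split, assuming PyTorch; the layer sizes, the five-class head, and the Gaussian noise mechanism are illustrative choices, not the authors' implementation:

```python
import torch
import torch.nn as nn

class ClientLSTM(nn.Module):
    """Client-side half of the split: runs an LSTM locally on the raw
    time series and ships only noised cut-layer activations."""
    def __init__(self, input_size=1, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)

    def forward(self, x, dp_sigma=0.1):
        _, (h_n, _) = self.lstm(x)        # h_n: (num_layers, batch, hidden)
        cut = h_n[-1]                     # cut-layer activation per sample
        # Additive Gaussian noise at the cut layer; a real DP deployment
        # would also clip activation norms and calibrate dp_sigma.
        return cut + dp_sigma * torch.randn_like(cut)

class ServerHead(nn.Module):
    """Server-side half: classifies the received cut-layer activations."""
    def __init__(self, hidden_size=64, num_classes=5):
        super().__init__()
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, cut):
        return self.fc(cut)

client, server = ClientLSTM(), ServerHead()
x = torch.randn(32, 187, 1)               # 32 windows of 187 time steps each
logits = server(client(x))                # (32, 5) class scores
```

During training, the server back-propagates gradients only as far as the cut layer, so the raw sequences stay on the client side and only the noised activations cross the boundary.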
Related papers
- Multi-Scale Convolutional LSTM with Transfer Learning for Anomaly Detection in Cellular Networks [1.1432909951914676]
This study introduces a novel approach, Multi-Scale Convolutional LSTM with Transfer Learning (TL), to detect anomalies in cellular networks.
The model is initially trained from scratch using a publicly available dataset to learn typical network behavior.
We compare the performance of the model trained from scratch with that of the fine-tuned model using TL.
arXiv Detail & Related papers (2024-09-30T17:51:54Z) - PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
We propose a Federated Anomaly Detection framework named PeFAD in light of increasing privacy concerns.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - On Pretraining Data Diversity for Self-Supervised Learning [57.91495006862553]
We explore the impact of training with more diverse datasets on the performance of self-supervised learning (SSL) under a fixed computational budget.
Our findings consistently demonstrate that increasing pretraining data diversity enhances SSL performance, albeit only when the distribution distance to the downstream data is minimal.
arXiv Detail & Related papers (2024-03-20T17:59:58Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling converges and achieves linear speedup with respect to the number of clients.
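For reference, a single local AMSGrad step with a client-specific learning rate can be sketched as follows (plain NumPy; FedLALR's actual auto-tuning schedule is defined in the paper and is not reproduced here):

```python
import numpy as np

def local_amsgrad_step(theta, grad, state, client_lr,
                       beta1=0.9, beta2=0.99, eps=1e-8):
    """One AMSGrad update; client_lr is this client's own learning rate."""
    m, v, v_max = state
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    v_max = np.maximum(v_max, v)                # AMSGrad: non-decreasing
    theta = theta - client_lr * m / (np.sqrt(v_max) + eps)
    return theta, (m, v, v_max)
```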
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Mitigating Catastrophic Forgetting in Long Short-Term Memory Networks [7.291687946822538]
Continual learning on sequential data is critical for many machine learning (ML) deployments.
LSTM networks suffer from catastrophic forgetting and are limited in their ability to learn multiple tasks continually.
We discover that catastrophic forgetting in LSTM networks can be overcome in two novel and readily-implementable ways.
arXiv Detail & Related papers (2023-05-26T20:17:18Z) - Distributed LSTM-Learning from Differentially Private Label Proportions [0.9281671380673306]
We propose two efficient models which use Differential Privacy and decentralized LSTM-Learning.
The evaluation will show the tradeoff between performance and data privacy.
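The two models themselves are not described in the summary, but the standard way to privatize label proportions is the Laplace mechanism over per-class counts; a minimal NumPy sketch (the sensitivity-1 setting assumes add/remove-one-record adjacency):

```python
import numpy as np

def dp_label_proportions(counts, epsilon=1.0):
    """Laplace mechanism on per-class label counts (L1 sensitivity 1),
    followed by clipping and renormalization to proportions."""
    noisy = counts + np.random.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 0.0, None)      # counts cannot be negative
    return noisy / max(noisy.sum(), 1e-12)

print(dp_label_proportions(np.array([120.0, 30.0, 50.0]), epsilon=0.5))
```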
arXiv Detail & Related papers (2023-01-15T22:11:07Z) - Deep Cellular Recurrent Network for Efficient Analysis of Time-Series
Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z) - FedSL: Federated Split Learning on Distributed Sequential Data in
Recurrent Neural Networks [4.706263507340607]
Federated Learning (FL) and Split Learning (SL) are privacy-preserving Machine-Learning (ML) techniques.
Existing FL and SL approaches work on horizontally or vertically partitioned data.
We propose a novel federated split learning framework, FedSL, to train models on distributed sequential data.
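The core idea, sequences whose segments live on different clients, can be pictured as a hidden-state handoff; the sketch below is illustrative only (FedSL's actual protocol trains per-client sub-networks and protects what crosses the wire):

```python
import torch
import torch.nn as nn

# Each client holds one segment of a multi-segment sequence; the recurrent
# hidden state is the only thing handed from client k to client k+1.
cell = nn.GRU(input_size=8, hidden_size=32, batch_first=True)
segments = [torch.randn(4, 25, 8) for _ in range(3)]  # 3 clients, 25 steps each

h = None
for segment in segments:        # client k receives h from client k-1
    _, h = cell(segment, h)     # processes only its local segment
final_state = h[-1]             # last client feeds this to a classifier
```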
arXiv Detail & Related papers (2020-11-06T04:00:39Z) - Automatic Remaining Useful Life Estimation Framework with Embedded
Convolutional LSTM as the Backbone [5.927250637620123]
We propose a new LSTM variant called embedded convolutional LSTM (ECLSTM).
In ECLSTM, a group of different 1D convolutions is embedded into the LSTM structure. Through this, the temporal information is preserved between and within windows.
We show the superiority of our proposed ECLSTM approach over state-of-the-art approaches on several widely used benchmark data sets for RUL estimation.
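One way to read "convolutions embedded into the LSTM structure" is to let the gate pre-activations come from 1D convolutions over each input window; the cell below is such a reading, an illustrative sketch rather than the paper's ECLSTM:

```python
import torch
import torch.nn as nn

class ConvGateLSTMCell(nn.Module):
    """LSTM cell whose four gates are computed by a 1D convolution over
    the window [x_t ; h_{t-1}] (hyperparameters are illustrative)."""
    def __init__(self, channels=8, hidden=16, k=3):
        super().__init__()
        self.gates = nn.Conv1d(channels + hidden, 4 * hidden, k, padding=k // 2)
        self.pool = nn.AdaptiveAvgPool1d(1)   # window -> one value per gate

    def forward(self, x_t, h, c):
        # x_t: (batch, channels, window); h, c: (batch, hidden)
        h_map = h.unsqueeze(-1).expand(-1, -1, x_t.size(-1))
        z = torch.cat([x_t, h_map], dim=1)
        i, f, g, o = self.pool(self.gates(z)).squeeze(-1).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

cell = ConvGateLSTMCell()
h = c = torch.zeros(4, 16)
h, c = cell(torch.randn(4, 8, 10), h, c)   # one step over a 10-wide window
```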
arXiv Detail & Related papers (2020-08-10T08:34:20Z) - Continual Learning in Recurrent Neural Networks [67.05499844830231]
We evaluate the effectiveness of continual learning methods for processing sequential data with recurrent neural networks (RNNs).
We shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs.
We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements.
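For context, the elastic weight consolidation penalty mentioned here is a Fisher-weighted quadratic anchor on parameters that mattered for earlier tasks; in generic PyTorch form (not the paper's code):

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1.0):
    """Fisher-weighted squared drift from the previous task's parameters;
    added to the new task's loss as a regularizer."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty
```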
arXiv Detail & Related papers (2020-06-22T10:05:12Z) - CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
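As background, a sum-product network is evaluated bottom-up: leaves give input likelihoods, product nodes multiply their children, and sum nodes mix them. A plaintext toy evaluator follows (CryptoSPN performs this same computation under secure multi-party computation, which is not shown here):

```python
def spn_eval(node, x):
    """Bottom-up evaluation of a tiny SPN; x holds leaf likelihoods."""
    if node["type"] == "leaf":
        return x[node["var"]]
    vals = [spn_eval(child, x) for child in node["children"]]
    if node["type"] == "product":
        result = 1.0
        for v in vals:
            result *= v
        return result
    # sum node: convex combination whose weights sum to 1
    return sum(w * v for w, v in zip(node["weights"], vals))

leaf0, leaf1 = {"type": "leaf", "var": 0}, {"type": "leaf", "var": 1}
spn = {"type": "sum", "weights": [0.3, 0.7],
       "children": [{"type": "product", "children": [leaf0, leaf1]}, leaf0]}
print(spn_eval(spn, [0.9, 0.2]))   # 0.3 * (0.9 * 0.2) + 0.7 * 0.9
```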
arXiv Detail & Related papers (2020-02-03T14:49:18Z)