FedST: Secure Federated Shapelet Transformation for Time Series
Classification
- URL: http://arxiv.org/abs/2302.10631v3
- Date: Wed, 31 May 2023 13:35:17 GMT
- Authors: Zhiyu Liang, Hongzhi Wang
- Abstract summary: This paper explores how to customize time series classification (TSC) methods with the help of external data in a privacy-preserving federated learning (FL) scenario.
We propose FedST, a novel FL-enabled TSC framework based on a shapelet transformation method.
We conduct extensive experiments using both synthetic and real-world datasets.
- Score: 5.249017312277057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores how to customize time series classification (TSC) methods
with the help of external data in a privacy-preserving federated learning (FL)
scenario. To the best of our knowledge, we are the first to study this
essential topic. Achieving this goal requires us to seamlessly integrate
techniques from multiple fields, including Data Mining, Machine Learning, and
Security. In this paper, we systematically investigate existing TSC solutions
for the centralized scenario and propose FedST, a novel FL-enabled TSC
framework based on a shapelet transformation method. We recognize the federated
shapelet search step as the kernel of FedST. Thus, we design a basic protocol
for the FedST kernel that we prove to be secure and accurate. However, we
identify that the basic protocol suffers from efficiency bottlenecks, and
that centralized acceleration techniques lose their efficacy due to security
constraints. To speed up the federated protocol while preserving its security
guarantees, we propose
several optimizations tailored for the FL setting. Our theoretical analysis
shows that the proposed methods are secure and more efficient. We conduct
extensive experiments using both synthetic and real-world datasets. Empirical
results show that our FedST solution is effective in terms of TSC accuracy, and
the proposed optimizations can achieve a speedup of three orders of magnitude.
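The shapelet transformation underlying FedST can be illustrated in the centralized setting: each series is mapped to a vector of its minimum distances to a set of candidate shapelets, and an off-the-shelf classifier is trained on the resulting features. A minimal sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between a shapelet and every
    equal-length subsequence of a time series."""
    m = len(shapelet)
    return min(
        np.linalg.norm(series[i:i + m] - shapelet)
        for i in range(len(series) - m + 1)
    )

def shapelet_transform(dataset, shapelets):
    """Map each series to its vector of distances to the shapelets;
    any standard classifier can then be trained on the result."""
    return np.array(
        [[shapelet_distance(s, sh) for sh in shapelets] for s in dataset]
    )
```

In FedST, the expensive part is searching for discriminative shapelets across parties without revealing raw series, which is why the paper treats federated shapelet search as the kernel step.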
Related papers
- Enhancing Security in Federated Learning through Adaptive Consensus-Based Model Update Validation [2.28438857884398]
This paper introduces an advanced approach for fortifying Federated Learning (FL) systems against label-flipping attacks.
We propose a consensus-based verification process integrated with an adaptive thresholding mechanism.
Our results indicate a significant mitigation of label-flipping attacks, bolstering the FL system's resilience.
arXiv Detail & Related papers (2024-03-05T20:54:56Z)
- An Empirical Study of Efficiency and Privacy of Federated Learning Algorithms [2.994794762377111]
In today's world, the rapid expansion of IoT networks and the proliferation of smart devices have resulted in the generation of substantial amounts of heterogeneous data.
To handle this data effectively, advanced data processing technologies are necessary to guarantee the preservation of both privacy and efficiency.
Federated learning has emerged as a distributed learning method that trains models locally and aggregates them on a server to preserve data privacy.
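The local-training-plus-server-aggregation pattern described above is commonly realized as federated averaging (FedAvg); a minimal sketch of the server-side aggregation step (names are illustrative, not from this paper):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side FedAvg step: weighted average of locally trained
    parameter vectors, proportional to each client's data size."""
    total = float(sum(client_sizes))
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```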
arXiv Detail & Related papers (2023-12-24T00:13:41Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [63.18590014127461]
This paper introduces a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We study the feasibility of the resulting robust safety-critical controller.
We then use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing violation in policy tasks in safe reinforcement learning.
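A log-barrier approximation of the kind LBSGD applies can be sketched as follows: a constrained objective f(x) subject to g_i(x) <= 0 is replaced by f(x) - eta * sum_i log(-g_i(x)), whose gradient grows as the iterate approaches a constraint boundary, discouraging unsafe steps. This is a generic sketch, not the paper's exact algorithm:

```python
def log_barrier_grad(x, grad_f, gs, dgs, eta=0.1):
    """Gradient of the barrier objective f(x) - eta * sum_i log(-g_i(x)),
    valid while every constraint satisfies g_i(x) < 0."""
    total = grad_f(x)
    for g, dg in zip(gs, dgs):
        assert g(x) < 0, "iterate must stay strictly feasible"
        total += eta * dg(x) / (-g(x))  # grows near the constraint boundary
    return total

def lbsgd_step(x, grad_f, gs, dgs, eta=0.1, step=0.01):
    """One (deterministic) descent step on the barrier objective."""
    return x - step * log_barrier_grad(x, grad_f, gs, dgs, eta)
```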
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
- Byzantine-Robust Federated Learning with Optimal Statistical Rates and Privacy Guarantees [123.0401978870009]
We propose Byzantine-robust federated learning protocols with nearly optimal statistical rates.
We benchmark against competing protocols and show the empirical superiority of the proposed protocols.
Our protocols with bucketing can be naturally combined with privacy-guaranteeing procedures to introduce security against a semi-honest server.
arXiv Detail & Related papers (2022-05-24T04:03:07Z)
- Efficient Few-Shot Object Detection via Knowledge Inheritance [62.36414544915032]
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline with no computational increment.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights.
arXiv Detail & Related papers (2022-03-23T06:24:31Z)
- OLIVE: Oblivious Federated Learning on Trusted Execution Environment against the risk of sparsification [22.579050671255846]
This study focuses on the analysis of the vulnerabilities of server-side TEEs in Federated Learning and the defense.
First, we theoretically analyze the leakage of memory access patterns, revealing the risk of sparsified gradients.
Second, we devise an inference attack to link memory access patterns to sensitive information in the training dataset.
arXiv Detail & Related papers (2022-02-15T03:23:57Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Efficient Sparse Secure Aggregation for Federated Learning [0.20052993723676896]
We adapt compression-based federated techniques to additive secret sharing, leading to an efficient secure aggregation protocol.
We prove its privacy against malicious adversaries and its correctness in the semi-honest setting.
Compared to prior works on secure aggregation, our protocol has lower and more adaptable communication costs for similar accuracy.
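Additive secret sharing of the kind this protocol builds on can be sketched as follows: each client splits its (quantized) update into random shares that sum to the true value modulo a public modulus, so the server can only ever reconstruct the aggregate, never an individual update. A generic illustration, not the paper's protocol:

```python
import secrets

Q = 2**61 - 1  # public modulus; updates are assumed quantized to integers

def make_shares(value, n_parties):
    """Split an integer into n_parties additive shares modulo Q; any
    subset of fewer than n_parties shares reveals nothing about value."""
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

def secure_sum(client_values):
    """Each client shares its value among all parties; each party sums
    the shares it received; the server adds the partial sums, so only
    the aggregate is reconstructed."""
    n = len(client_values)
    all_shares = [make_shares(v, n) for v in client_values]
    partials = [sum(all_shares[c][p] for c in range(n)) % Q for p in range(n)]
    return sum(partials) % Q
```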
arXiv Detail & Related papers (2020-07-29T14:28:30Z)
- Data-driven Optimal Power Flow: A Physics-Informed Machine Learning Approach [6.5382276424254995]
This paper proposes a data-driven approach for optimal power flow (OPF) based on the stacked extreme learning machine (SELM) framework.
A data-driven OPF regression framework is developed that decomposes the OPF model features into three stages.
Numerical results carried out on IEEE and Polish benchmark systems demonstrate that the proposed method outperforms other alternatives.
arXiv Detail & Related papers (2020-05-31T15:41:24Z)
- Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems [81.7983463275447]
Learning-based control algorithms require data collection with abundant supervision for training.
We present a new approach for optimal motion planning with safe exploration that integrates chance-constrained optimal control with dynamics learning and feedback control.
arXiv Detail & Related papers (2020-05-09T05:57:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.