PMP: Privacy-Aware Matrix Profile against Sensitive Pattern Inference for Time Series
- URL: http://arxiv.org/abs/2301.01838v1
- Date: Wed, 4 Jan 2023 22:11:38 GMT
- Title: PMP: Privacy-Aware Matrix Profile against Sensitive Pattern Inference for Time Series
- Authors: Li Zhang, Jiahao Ding, Yifeng Gao, Jessica Lin
- Abstract summary: We propose a new privacy-preserving problem: preventing malicious inference on long shape-based patterns.
We find that while Matrix Profile (MP) can prevent concrete shape leakage, the canonical correlation in the MP index can still reveal the location of sensitive long patterns.
We propose a Privacy-Aware Matrix Profile (PMP) that perturbs the local correlation and breaks the canonical correlation in the MP index vector.
- Score: 12.855499575586753
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent rapid development of sensor technology has allowed massive
fine-grained time series (TS) data to be collected and set the foundation for
the development of data-driven services and applications. During the process,
data sharing is often involved to allow third-party modelers to perform
specific time series data mining (TSDM) tasks based on the needs of the data owner.
The high resolution of TS brings new challenges in protecting privacy. While
meaningful information in high-resolution TS shifts from concrete point values
to local shape-based segments, numerous studies have found that long
shape-based patterns could contain more sensitive information and may
potentially be extracted and misused by a malicious third party. However, the
privacy issue for TS patterns has surprisingly seldom been explored in the
privacy-preserving literature. In this work, we consider a new
privacy-preserving problem: preventing malicious inference on long shape-based
patterns while preserving short segment information for the utility task
performance. To address this challenge, we investigate an alternative approach
by sharing Matrix Profile (MP), which is a non-linear transformation of
original data and a versatile data structure that supports many data mining
tasks. We find that while MP can prevent concrete shape leakage, the canonical
correlation in the MP index can still reveal the location of sensitive long
patterns. Based on this observation, we design two attacks, named Location Attack
and Entropy Attack to extract the pattern location from MP. To further protect
MP from these two attacks, we propose a Privacy-Aware Matrix Profile (PMP) via
perturbing the local correlation and breaking the canonical correlation in the
MP index vector. We evaluate our proposed PMP against baseline noise-adding
methods through quantitative analysis and real-world case studies to show the
effectiveness of the proposed method.
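To make the data structures the abstract refers to concrete, the sketch below computes a naive self-join Matrix Profile and MP index for a toy series with one long pattern embedded twice, then measures how concentrated the MP index is. This is a minimal illustration under stated assumptions (naive O(n^2) computation, synthetic data, histogram entropy as the concentration measure); it is not the authors' implementation of PMP or of the Location/Entropy Attacks, only the intuition that a repeated long pattern makes the MP index point heavily at its other occurrence.

```python
import numpy as np


def matrix_profile(ts, m):
    """Naive self-join Matrix Profile: for every length-m subsequence, the
    z-normalized Euclidean distance to its nearest non-trivial match (MP)
    and the position of that match (MP index)."""
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)], dtype=float)
    # z-normalize each subsequence so only shape matters
    subs = (subs - subs.mean(axis=1, keepdims=True)) / subs.std(axis=1, keepdims=True)
    mp = np.empty(n)
    mp_index = np.empty(n, dtype=int)
    excl = m // 2  # exclusion zone so a subsequence cannot trivially match itself
    for i in range(n):
        d = np.linalg.norm(subs - subs[i], axis=1)
        d[max(0, i - excl):i + excl + 1] = np.inf
        mp_index[i] = int(np.argmin(d))
        mp[i] = d[mp_index[i]]
    return mp, mp_index


# Toy series with a "sensitive" long pattern repeated at two locations.
rng = np.random.default_rng(0)
ts = rng.normal(size=600)
pattern = 3.0 * np.sin(np.linspace(0, 4 * np.pi, 100))
ts[50:150] += pattern
ts[400:500] += pattern

mp, mp_index = matrix_profile(ts, m=100)

# Subsequences inside one occurrence of the pattern all point to the other
# occurrence, so the MP index becomes highly concentrated (low histogram
# entropy), hinting at where the long pattern lives even though the raw
# shape itself is never shared.
hist, _ = np.histogram(mp_index, bins=20, range=(0, len(mp_index)))
p = hist / hist.sum()
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print("candidate pattern start (argmin of MP):", int(np.argmin(mp)))
print("entropy of MP-index histogram:", round(float(entropy), 3))
```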
Related papers
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z) - PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
With increasing privacy concerns, we propose a Parameter-Efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - A Trajectory K-Anonymity Model Based on Point Density and Partition [0.0]
This paper develops a trajectory K-anonymity model based on Point Density and Partition (K PDP).
It successfully resists re-identification attacks and reduces the data utility loss of the k-anonymized dataset.
arXiv Detail & Related papers (2023-07-31T17:10:56Z) - Calculating the matrix profile from noisy data [3.236217153362305]
The matrix profile (MP) is a data structure computed from a time series which encodes the data required to locate motifs and discords.
We measure the similarities between the MP of the original time series and MPs generated from the same data with noise added.
Results suggest that MP generation is resilient to a small amount of noise introduced into the data, but as the amount of noise increases this resilience disappears.
arXiv Detail & Related papers (2023-06-16T19:41:07Z) - SEAM: Searching Transferable Mixed-Precision Quantization Policy through
Large Margin Regularization [50.04951511146338]
Mixed-precision quantization (MPQ) suffers from the time-consuming process of searching the optimal bit-width allocation for each layer.
This paper proposes a novel method for efficiently searching for effective MPQ policies using a small proxy dataset.
arXiv Detail & Related papers (2023-02-14T05:47:45Z) - MAPS: A Noise-Robust Progressive Learning Approach for Source-Free
Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z) - DP2-Pub: Differentially Private High-Dimensional Data Publication with
Invariant Post Randomization [58.155151571362914]
We propose a differentially private high-dimensional data publication mechanism (DP2-Pub) that runs in two phases.
Splitting attributes into several low-dimensional clusters with high intra-cluster cohesion and low inter-cluster coupling helps obtain a reasonable privacy budget.
We also extend our DP2-Pub mechanism to the scenario with a semi-honest server which satisfies local differential privacy.
arXiv Detail & Related papers (2022-08-24T17:52:43Z) - Deep Directed Information-Based Learning for Privacy-Preserving Smart
Meter Data Release [30.409342804445306]
We study the problem in the context of time series data and smart meters (SMs) power consumption measurements.
We introduce the Directed Information (DI) as a more meaningful measure of privacy in the considered setting.
Our empirical studies on real-world data sets of SM measurements in the worst-case scenario show the existing trade-offs between privacy and utility.
arXiv Detail & Related papers (2020-11-20T13:41:11Z) - Hide-and-Seek Privacy Challenge [88.49671206936259]
The NeurIPS 2020 Hide-and-Seek Privacy Challenge is a novel two-tracked competition to accelerate progress in tackling both problems.
In our head-to-head format, participants in the synthetic data generation track (i.e. "hiders") and the patient re-identification track (i.e. "seekers") are directly pitted against each other by way of a new, high-quality intensive care time-series dataset.
arXiv Detail & Related papers (2020-07-23T15:50:59Z) - Privacy-Aware Time-Series Data Sharing with Deep Reinforcement Learning [33.42328078385098]
We study the privacy-utility trade-off (PUT) in time-series data sharing.
Methods that preserve privacy at the current time may leak a significant amount of information at the trace level.
We consider sharing the distorted version of a user's true data sequence with an untrusted third party.
arXiv Detail & Related papers (2020-03-04T18:47:25Z)