You Are How You Walk: Quantifying Privacy Risks in Step Count Data
- URL: http://arxiv.org/abs/2308.04933v1
- Date: Wed, 9 Aug 2023 13:06:13 GMT
- Title: You Are How You Walk: Quantifying Privacy Risks in Step Count Data
- Authors: Bartlomiej Surma, Tahleen Rahman, Monique Breteler, Michael Backes, and Yang Zhang
- Abstract summary: We perform the first systematic study on quantifying privacy risks stemming from step count data.
We propose two attacks: attribute inference of gender, age, and education, and temporal linkability.
We believe our results can serve as a stepping stone toward a privacy-preserving ecosystem for wearable devices in the future.
- Score: 17.157398766367265
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Wearable devices have gained huge popularity in today's world. These
devices collect large-scale health data from their users, such as heart rate
and step count data, that is privacy sensitive; however, it has not yet
received the necessary attention in academia. In this paper, we perform the
first systematic study quantifying the privacy risks stemming from step count
data. In particular, we propose two attacks: attribute inference of gender,
age, and education, and temporal linkability. We demonstrate the severity of
these attacks through extensive evaluation on a real-life dataset and derive
key insights. We believe our results can serve as a stepping stone toward a
privacy-preserving ecosystem for wearable devices in the future.
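The listing does not include the authors' code. As a purely illustrative sketch of what an attribute inference attack on step count data could look like, the snippet below derives a few daily-aggregate features per user and trains an off-the-shelf classifier to predict gender; the feature set, the random-forest choice, and the toy data are assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch of an attribute inference attack on step count data.
# The features and classifier are illustrative assumptions, NOT the paper's
# pipeline. The toy labels are random, so accuracy here stays near chance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def daily_features(steps_per_day: np.ndarray) -> np.ndarray:
    """Reduce one user's day-by-day step counts to a small feature vector."""
    return np.array([
        steps_per_day.mean(),           # average activity level
        steps_per_day.std(),            # day-to-day variability
        np.median(steps_per_day),       # robust central tendency
        steps_per_day.max(),            # most active day
        (steps_per_day < 1000).mean(),  # fraction of near-sedentary days
    ])

# Toy data: 200 users x 60 days of step counts, with binary gender labels.
rng = np.random.default_rng(0)
step_counts = rng.poisson(lam=7000, size=(200, 60)).astype(float)
gender = rng.integers(0, 2, size=200)

X = np.stack([daily_features(u) for u in step_counts])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, gender, cv=5)
print(f"attribute inference accuracy: {scores.mean():.2f}")
```

Temporal linkability would analogously compare such feature vectors extracted from two disjoint time windows and link each user in one window to the most similar profile in the other.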
Related papers
- Where you go is who you are -- A study on machine learning based
semantic privacy attacks [3.259843027596329]
We present a systematic analysis of two attack scenarios, namely location categorization and user profiling.
Experiments on the Foursquare dataset and tracking data demonstrate the potential for abuse of high-quality spatial information.
Our findings point out the risks of ever-growing databases of tracking data and spatial context data.
arXiv Detail & Related papers (2023-10-26T17:56:50Z) - A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
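For context on what a "sanitized form of the data" means under differential privacy, here is a minimal, textbook-style Laplace mechanism for releasing a noisy count; it is a generic illustration, not code from the paper, and the `dp_count` helper and the epsilon value are invented for the example.

```python
# Minimal Laplace mechanism: a standard way to release a count with
# epsilon-differential privacy. Illustrative only; not from the paper.
import numpy as np

def dp_count(values, predicate, epsilon: float, rng=np.random.default_rng()):
    """Release how many items satisfy `predicate`, with Laplace noise added."""
    true_count = sum(1 for v in values if predicate(v))
    sensitivity = 1.0  # adding/removing one person changes the count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

steps = [4200, 11000, 8300, 950, 15000]
print(dp_count(steps, lambda s: s > 10000, epsilon=0.5))
```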
arXiv Detail & Related papers (2023-09-27T14:38:16Z) - A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and
Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data. However, training GNNs on sensitive graph data raises privacy concerns.
To address this issue, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z) - Slice it up: Unmasking User Identities in Smartwatch Health Data [1.4797368693230672]
We introduce a novel framework for similarity-based Dynamic Time Warping (DTW) re-identification attacks on time series health data.
Our attack is independent of training data and computes similarity rankings in about 2 minutes for 10,000 subjects on a single CPU core.
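As a rough, unoptimized illustration of the idea (not the authors' framework, which is far faster than this naive O(nm) DTW), a similarity-ranking re-identification attack can be sketched as follows; all names and the toy data are assumptions.

```python
# Toy DTW re-identification: rank enrolled time series by DTW distance to a
# probe series. Unoptimized O(n*m) DTW; purely illustrative.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic-programming DTW distance between two 1-D series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def rank_candidates(probe: np.ndarray, enrolled: dict) -> list:
    """Return subject ids sorted from most to least similar to the probe."""
    dists = {sid: dtw_distance(probe, series) for sid, series in enrolled.items()}
    return sorted(dists, key=dists.get)

rng = np.random.default_rng(1)
enrolled = {f"subject_{i}": rng.normal(70, 10, 50) for i in range(5)}
probe = enrolled["subject_3"] + rng.normal(0, 1, 50)  # noisy re-observation
print(rank_candidates(probe, enrolled)[:3])            # true subject should rank first
```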
arXiv Detail & Related papers (2023-08-16T12:14:50Z) - Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
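The summary stays at a high level; one plausible, purely hypothetical reading is a binary classifier over outgoing-request features that decides whether a request looks like privacy-infringing data collection and should be blocked. The features and toy traffic below are invented for illustration and are not the paper's design.

```python
# Illustrative only: a tiny supervised "block this request?" classifier over
# hand-crafted request features. Feature choices are assumptions.
from sklearn.linear_model import LogisticRegression

def featurize(request: dict) -> list:
    url = request["url"]
    return [
        int(request["third_party"]),              # destination is third-party?
        int("track" in url or "pixel" in url),    # tracker-like keywords in URL
        int(request["sets_cookie"]),              # tries to set a cookie?
        len(url) / 100.0,                         # long, parameter-heavy URLs
    ]

# Toy labeled traffic: 1 = privacy-infringing collection, 0 = benign.
requests = [
    {"url": "https://cdn.example/lib.js", "third_party": True, "sets_cookie": False},
    {"url": "https://ads.example/pixel?uid=42&ref=abc", "third_party": True, "sets_cookie": True},
    {"url": "https://site.example/api/cart", "third_party": False, "sets_cookie": True},
    {"url": "https://track.example/collect?e=pageview", "third_party": True, "sets_cookie": True},
]
labels = [0, 1, 0, 1]

clf = LogisticRegression().fit([featurize(r) for r in requests], labels)
new = {"url": "https://metrics.example/track?uid=7", "third_party": True, "sets_cookie": True}
print("block" if clf.predict([featurize(new)])[0] else "allow")
```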
arXiv Detail & Related papers (2023-04-06T05:20:16Z) - Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z) - Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We looked into how explainability might help to tackle the problem of limited end-user trust.
We created privacy explanations that aim to help to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z) - The Privacy Onion Effect: Memorization is Relative [76.46529413546725]
We show an Onion Effect of memorization: removing the "layer" of outlier points that are most vulnerable exposes a new layer of previously-safe points to the same attack.
This effect suggests that privacy-enhancing technologies such as machine unlearning could actually harm the privacy of other users.
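The experimental loop behind this observation can be sketched roughly as follows: score each training point's exposure with some membership inference proxy, remove the most vulnerable "layer", retrain, and score again. The loss-based proxy and toy data below are simplifications assumed for illustration; the paper uses much stronger attacks.

```python
# Illustrative sketch of the "onion effect" loop: peel off the most vulnerable
# training points, retrain, and re-score. The confidence-based vulnerability
# proxy and toy data are assumptions, not the paper's attack.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)

def vulnerability(X_train, y_train):
    """Proxy: low model confidence on a training point ~ outlier ~ more exposed."""
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    probs = model.predict_proba(X_train)[np.arange(len(y_train)), y_train]
    return -probs  # higher value = more vulnerable

remaining = np.arange(len(y))
for layer in range(3):
    scores = vulnerability(X[remaining], y[remaining])
    worst = np.argsort(scores)[-40:]            # the current most-vulnerable layer
    print(f"layer {layer}: peeled-layer mean score {scores[worst].mean():.3f}")
    remaining = np.delete(remaining, worst)     # remove it and repeat
```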
arXiv Detail & Related papers (2022-06-21T15:25:56Z) - Evaluating Privacy-Preserving Machine Learning in Critical
Infrastructures: A Case Study on Time-Series Classification [5.607917328636864]
It is pivotal to ensure that neither the model nor the data can be used to extract sensitive information.
Various safety-critical use cases (mostly relying on time-series data) are currently underrepresented in privacy-related considerations.
By evaluating several privacy-preserving methods for their applicability to time-series data, we validated the inefficacy of encryption for deep learning.
arXiv Detail & Related papers (2021-11-29T12:28:22Z) - Deep Directed Information-Based Learning for Privacy-Preserving Smart
Meter Data Release [30.409342804445306]
We study the problem in the context of time series data and smart meters (SMs) power consumption measurements.
We introduce the Directed Information (DI) as a more meaningful measure of privacy in the considered setting.
Our empirical studies on real-world data sets of SM measurements show the trade-offs between privacy and utility under a worst-case scenario.
arXiv Detail & Related papers (2020-11-20T13:41:11Z) - Security and Privacy Preserving Deep Learning [2.322461721824713]
Massive data collection required for deep learning presents obvious privacy issues.
Users' personal, highly sensitive data, such as photos and voice recordings, is kept indefinitely by the companies that collect it.
Deep neural networks are susceptible to various inference attacks as they remember information about their training data.
arXiv Detail & Related papers (2020-06-23T01:53:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.