Privacy-Preserving Learning of Human Activity Predictors in Smart
Environments
- URL: http://arxiv.org/abs/2101.06564v1
- Date: Sun, 17 Jan 2021 01:04:53 GMT
- Title: Privacy-Preserving Learning of Human Activity Predictors in Smart
Environments
- Authors: Sharare Zehtabian, Siavash Khodadadeh, Ladislau Bölöni and Damla
Turgut
- Abstract summary: We use state-of-the-art deep neural network-based techniques to learn predictive human activity models.
A novel aspect of our work is that we carefully track the temporal evolution of the data available to the learner and the data shared by the user.
- Score: 5.981641988736108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The daily activities performed by a disabled or elderly person can be
monitored by a smart environment, and the acquired data can be used to learn a
predictive model of user behavior. To speed up the learning, several
researchers designed collaborative learning systems that use data from multiple
users. However, disclosing the daily activities of an elderly or disabled user
raises privacy concerns. In this paper, we use state-of-the-art deep neural
network-based techniques to learn predictive human activity models in the
local, centralized, and federated learning settings. A novel aspect of our work
is that we carefully track the temporal evolution of the data available to the
learner and the data shared by the user. In contrast to previous work, in which
users shared all their data with the centralized learner, we consider users who
aim to preserve their privacy and therefore choose among these approaches to
meet their accuracy goals while minimizing the amount of data they share. To
help users make decisions before disclosing any data, we use machine
learning to predict the degree to which a user would benefit from collaborative
learning. We validate our approaches on real-world data.
Related papers
- Privacy-Preserving Graph Machine Learning from Data to Computation: A
Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques of graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z)
- Free Lunch for Privacy Preserving Distributed Graph Learning [1.8292714902548342]
We present a novel privacy-respecting framework for distributed graph learning and graph-based machine learning.
The framework learns feature representations and distances without access to the actual features, while preserving the structural properties of the raw data.
arXiv Detail & Related papers (2023-05-18T10:41:21Z)
- Reinforcement Learning from Passive Data via Latent Intentions [86.4969514480008]
We show that passive data can still be used to learn features that accelerate downstream RL.
Our approach learns from passive data by modeling intentions.
Our experiments demonstrate the ability to learn from many forms of passive data, including cross-embodiment video data and YouTube videos.
arXiv Detail & Related papers (2023-04-10T17:59:05Z)
- Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
arXiv Detail & Related papers (2023-04-06T05:20:16Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the lightweight active learner, which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- Opportunistic Federated Learning: An Exploration of Egocentric Collaboration for Pervasive Computing Applications [20.61034787249924]
We define a new approach, opportunistic federated learning, in which individual devices belonging to different users seek to learn robust models.
In this paper, we explore the feasibility and limits of such an approach, culminating in a framework that supports encounter-based pairwise collaborative learning.
arXiv Detail & Related papers (2021-03-24T15:30:21Z)
- Privacy Enhancing Machine Learning via Removal of Unwanted Dependencies [21.97951347784442]
This paper studies new variants of supervised and adversarial learning methods, which remove the sensitive information in the data before they are sent out for a particular application.
The explored methods optimize privacy preserving feature mappings and predictive models simultaneously in an end-to-end fashion.
Experimental results on mobile sensing and face datasets demonstrate that our models can successfully maintain the utility performances of predictive models while causing sensitive predictions to perform poorly.
arXiv Detail & Related papers (2020-07-30T19:55:10Z)
- TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations [49.20701800683092]
We present TIPRDC, a task-independent privacy-respecting data crowdsourcing framework with anonymized intermediate representation.
The goal of this framework is to learn a feature extractor that hides private information in the intermediate representations, while maximally retaining the original information embedded in the raw data so that the data collector can accomplish unknown learning tasks.
arXiv Detail & Related papers (2020-05-23T06:21:26Z)
- A Review of Privacy-preserving Federated Learning for the Internet-of-Things [3.3517146652431378]
This work reviews federated learning as an approach for performing machine learning on distributed data.
We aim to protect the privacy of user-generated data as well as to reduce the communication costs associated with data transfer.
We identify the strengths and weaknesses of different methods applied to federated learning.
arXiv Detail & Related papers (2020-04-24T15:27:23Z)
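Several of the entries above (notably the dependency-removal and TIPRDC papers) share one idea: transform the data so that sensitive information is removed while task-relevant information survives. A simplified linear analogue of that idea on hypothetical data (the papers themselves learn nonlinear extractors, typically with adversarial training):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensor features X, sensitive attribute s, task label y.
n = 500
s = rng.normal(size=(n, 1))                            # sensitive attribute
X = rng.normal(size=(n, 3)) + s * np.array([1.0, 0.0, 0.5])  # features leak s
y = X[:, 1] + 0.1 * rng.normal(size=n)                 # task uses X[:, 1] only

# Linear "dependency removal": residualize X against s (with an intercept)
# so a linear adversary can no longer recover s from the transformed features.
A = np.hstack([s, np.ones((n, 1))])
B, *_ = np.linalg.lstsq(A, X, rcond=None)              # regression of X on s
X_priv = X - A @ B                                     # drop the s-dependent part

# Correlation between each transformed feature and s is now ~0,
# while the task-relevant column X_priv[:, 1] is essentially unchanged.
print(np.round(np.corrcoef(X_priv.T, s.T)[-1, :-1], 2))
```

Least-squares residuals are exactly orthogonal to the regressors, so this kills all *linear* dependence on `s`; the adversarial methods surveyed above generalize this to nonlinear dependencies by training the extractor against a learned adversary.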
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.