Reinforcement learning for the privacy preservation and manipulation of eye tracking data
- URL: http://arxiv.org/abs/2002.06806v2
- Date: Fri, 2 Oct 2020 06:41:49 GMT
- Authors: Wolfgang Fuhl, Efe Bozkir, Enkelejda Kasneci
- Abstract summary: We present an approach based on reinforcement learning for eye tracking data manipulation.
We show that our approach is successfully applicable to preserve the privacy of the subjects.
- Score: 12.486057928762898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present an approach based on reinforcement learning for eye
tracking data manipulation. It is based on two opposing agents, where one tries
to classify the data correctly and the second agent looks for patterns in the
data, which get manipulated to hide specific information. We show that our
approach is successfully applicable to preserve the privacy of the subjects.
For this purpose, we evaluate our approach iteratively to showcase the behavior
of the reinforcement learning based approach. In addition, we evaluate the
importance of temporal, as well as spatial, information of eye tracking data
for specific classification goals. In the last part of our evaluation, we apply
the procedure to further public data sets without re-training the autoencoder
or the data manipulator. The results show that the learned manipulation is
generalized and applicable to unseen data as well.
Related papers
- ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP).
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z)
- Reinforcement Learning from Passive Data via Latent Intentions [86.4969514480008]
We show that passive data can still be used to learn features that accelerate downstream RL.
Our approach learns from passive data by modeling intentions.
Our experiments demonstrate the ability to learn from many forms of passive data, including cross-embodiment video data and YouTube videos.
arXiv Detail & Related papers (2023-04-10T17:59:05Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Hiding Visual Information via Obfuscating Adversarial Perturbations [47.315523613407244]
We propose an adversarial visual information hiding method to protect the visual privacy of data.
Specifically, the method generates obfuscating adversarial perturbations to obscure the visual information of the data.
Experimental results on the recognition and classification tasks demonstrate that the proposed method can effectively hide visual information.
arXiv Detail & Related papers (2022-09-30T08:23:26Z)
- PARTICUL: Part Identification with Confidence measure using Unsupervised Learning [0.0]
PARTICUL is a novel algorithm for unsupervised learning of part detectors from datasets used in fine-grained recognition.
It exploits the macro-similarities of all images in the training set in order to mine for recurring patterns in the feature space of a pre-trained convolutional neural network.
We show that our detectors can consistently highlight parts of the object while providing a good measure of the confidence in their prediction.
arXiv Detail & Related papers (2022-06-27T13:44:49Z)
- TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z)
- SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy, and homomorphic encryption, the proposed approach advances the state of the art.
arXiv Detail & Related papers (2020-06-16T19:31:52Z)
- TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations [49.20701800683092]
We present TIPRDC, a task-independent privacy-respecting data crowdsourcing framework with anonymized intermediate representation.
The goal of this framework is to learn a feature extractor that can hide the privacy information from the intermediate representations; while maximally retaining the original information embedded in the raw data for the data collector to accomplish unknown learning tasks.
arXiv Detail & Related papers (2020-05-23T06:21:26Z)
- How Useful is Self-Supervised Pretraining for Visual Tasks? [133.1984299177874]
We evaluate various self-supervised algorithms across a comprehensive array of synthetic datasets and downstream tasks.
Our experiments offer insights into how the utility of self-supervision changes as the number of available labels grows.
arXiv Detail & Related papers (2020-03-31T16:03:22Z)
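Several entries above (the main paper, Hiding Visual Information, TIPRDC) share the same underlying trade-off: suppress a private attribute while retaining task-relevant content. One generic way to formalize it — our notation, not the exact objective of any listed paper — is an adversarial representation-learning game:

```latex
\min_{f}\; \Big[\,
  \underbrace{\mathbb{E}\,\ell\big(h(f(x)),\, y\big)}_{\text{utility: task label } y}
  \;-\;
  \lambda\,
  \underbrace{\min_{g}\; \mathbb{E}\,\ell\big(g(f(x)),\, u\big)}_{\text{privacy: best adversary } g \text{ recovers attribute } u}
\,\Big]
```

Here $x$ is the raw data, $f$ the feature extractor (or manipulator), $h$ a task head, $g$ the strongest privacy adversary, $\ell$ a classification loss such as cross-entropy, and $\lambda$ the privacy–utility trade-off weight: $f$ keeps the task loss low while maximizing the loss of even the best-trained adversary.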
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.