Privacy-Preserving Eye-tracking Using Deep Learning
- URL: http://arxiv.org/abs/2106.09621v1
- Date: Thu, 17 Jun 2021 15:58:01 GMT
- Title: Privacy-Preserving Eye-tracking Using Deep Learning
- Authors: Salman Seyedi, Zifan Jiang, Allan Levey, Gari D. Clifford
- Abstract summary: In this study, we focus on the case of a deep network model trained on images of individual faces.
It is shown that this model preserves the integrity of the training data with reasonable confidence.
- Score: 1.5484595752241124
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The expanding usage of complex machine learning methods like deep learning
has led to an explosion in human activity recognition, particularly applied to
health. In particular, as part of a larger body sensor network system, face and
full-body analysis is becoming increasingly common for evaluating health
status. However, complex models that handle private and sometimes protected
data raise concerns about the potential leakage of identifiable data. In this
work, we focus on the case of a deep network model trained on images of
individual faces. Full-face video recordings taken from 493 individuals
undergoing an eye-tracking based evaluation of neurological function were used.
Outputs, gradients, intermediate layer outputs, loss, and labels were used as
inputs for a deep network with an added support vector machine emission layer
to recognize membership in the training data. The inference attack method and
associated mathematical analysis indicate that there is a low likelihood of
unintended memorization of facial features in the deep learning model. This
study shows that the model preserves the integrity of the training data with
reasonable confidence. The same process can be applied under similar
conditions to other models.
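To make the attack setup concrete, here is a minimal, hedged sketch of a membership inference classifier in the spirit of the one described above: per-sample signals queried from the target model (output probabilities, loss, gradient norm, and the label) are used as features for an SVM that predicts whether a sample belonged to the training set. The target model, the data, and the exact feature set below are illustrative assumptions rather than the authors' implementation; the paper's attack additionally feeds intermediate layer outputs and gradients through a deep network before the SVM emission layer. Attack accuracy near chance (0.5) is the kind of evidence used to argue that little identifiable information has been memorized.

```python
# Hedged sketch (not the authors' code): a membership inference attack in the
# spirit described above. Per-sample signals queried from the target model
# (softmax outputs, loss, gradient norm, label) become features for an SVM
# that predicts whether a sample was part of the training set.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def attack_features(model, x, y):
    """Extract per-sample attack features from the target model."""
    model.zero_grad()
    logits = model(x.unsqueeze(0))                  # forward pass for one sample
    loss = F.cross_entropy(logits, y.unsqueeze(0))  # per-sample loss
    loss.backward()                                 # gradients w.r.t. parameters
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum()
                               for p in model.parameters() if p.grad is not None))
    probs = F.softmax(logits, dim=1).squeeze(0)
    return torch.cat([probs.detach(),
                      loss.detach().view(1),
                      grad_norm.detach().view(1),
                      y.float().view(1)]).numpy()

# Illustrative target model and data; in practice "members" are real samples the
# trained target model saw during training and "non-members" are held-out samples.
torch.manual_seed(0)
target_model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                                   torch.nn.Linear(32, 2))
members = [(torch.randn(16), torch.randint(0, 2, (1,)).squeeze()) for _ in range(64)]
non_members = [(torch.randn(16), torch.randint(0, 2, (1,)).squeeze()) for _ in range(64)]

# Attack dataset: label 1 = training member, 0 = non-member.
X = np.stack([attack_features(target_model, x, y) for x, y in members + non_members])
m = np.array([1] * len(members) + [0] * len(non_members))

# Train and evaluate the SVM "emission layer" on disjoint halves of the attack data.
svm = SVC(kernel="rbf").fit(X[::2], m[::2])
acc = accuracy_score(m[1::2], svm.predict(X[1::2]))
print(f"membership inference accuracy: {acc:.2f} (near 0.5 suggests little memorization)")
```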
Related papers
- Distribution-Level Feature Distancing for Machine Unlearning: Towards a Better Trade-off Between Model Utility and Forgetting [4.220336689294245]
Recent studies have presented various machine unlearning algorithms to make a trained model unlearn the data to be forgotten.
We propose Distribution-Level Feature Distancing (DLFD), a novel method that efficiently forgets instances while preventing correlation collapse.
Our method synthesizes data samples so that the generated data distribution is far from the distribution of samples being forgotten in the feature space.
arXiv Detail & Related papers (2024-09-23T06:51:10Z)
- Deep Variational Privacy Funnel: General Modeling with Applications in Face Recognition [3.351714665243138]
We develop a method for privacy-preserving representation learning using an end-to-end training framework.
We apply our model to state-of-the-art face recognition systems.
arXiv Detail & Related papers (2024-01-26T11:32:53Z)
- Guiding Visual Attention in Deep Convolutional Neural Networks Based on Human Eye Movements [0.0]
Deep Convolutional Neural Networks (DCNNs) were originally inspired by principles of biological vision.
Recent advances in deep learning seem to decrease this similarity.
We investigate a purely data-driven approach to obtain useful models.
arXiv Detail & Related papers (2022-06-21T17:59:23Z)
- Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, deep learning models can be very difficult to debug when they do not work as expected.
arXiv Detail & Related papers (2022-03-28T20:29:50Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Beyond Tracking: Using Deep Learning to Discover Novel Interactions in Biological Swarms [3.441021278275805]
We propose training deep network models to predict system-level states directly from generic graphical features from the entire view.
Because the resulting predictive models are not based on human-understood predictors, we use explanatory modules.
This represents an example of augmented intelligence in behavioral ecology -- knowledge co-creation in a human-AI team.
arXiv Detail & Related papers (2021-08-20T22:50:41Z)
- Progressive Spatio-Temporal Bilinear Network with Monte Carlo Dropout for Landmark-based Facial Expression Recognition with Uncertainty Estimation [93.73198973454944]
The performance of our method is evaluated on three widely used datasets.
It is comparable to that of video-based state-of-the-art methods, while having much lower complexity.
arXiv Detail & Related papers (2021-06-08T13:40:30Z)
- Membership Inference Attacks on Deep Regression Models for Neuroimaging [15.591129844038269]
We show realistic Membership Inference attacks on deep learning models trained for 3D neuroimaging tasks in a centralized as well as decentralized setup.
We correctly identified whether an MRI scan was used in model training with a 60% to over 80% success rate depending on model complexity and security assumptions.
arXiv Detail & Related papers (2021-05-06T17:51:06Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short term-memory units, and inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- The FaceChannel: A Fast & Furious Deep Neural Network for Facial Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a lightweight neural network with far fewer parameters than common deep neural networks.
We demonstrate how our model achieves a comparable, if not better, performance to the current state-of-the-art in FER.
arXiv Detail & Related papers (2020-09-15T09:25:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.