PrivatEyes: Appearance-based Gaze Estimation Using Federated Secure
Multi-Party Computation
- URL: http://arxiv.org/abs/2402.18970v1
- Date: Thu, 29 Feb 2024 09:19:06 GMT
- Title: PrivatEyes: Appearance-based Gaze Estimation Using Federated Secure
Multi-Party Computation
- Authors: Mayar Elfares, Pascal Reisert, Zhiming Hu, Wenwu Tang, Ralf Küsters,
Andreas Bulling
- Abstract summary: PrivatEyes is a privacy-enhancing approach for appearance-based gaze estimation based on federated learning (FL) and secure multi-party computation (MPC).
PrivatEyes enables training gaze estimators on multiple local datasets across different users and server-based secure aggregation of the individual estimators' updates.
A new data leakage attack, DualView, shows that PrivatEyes limits the leakage of private training data more effectively than previous approaches.
- Score: 10.50795947657397
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Latest gaze estimation methods require large-scale training data but their
collection and exchange pose significant privacy risks. We propose PrivatEyes -
the first privacy-enhancing training approach for appearance-based gaze
estimation based on federated learning (FL) and secure multi-party computation
(MPC). PrivatEyes enables training gaze estimators on multiple local datasets
across different users and server-based secure aggregation of the individual
estimators' updates. PrivatEyes guarantees that individual gaze data remains
private even if a majority of the aggregating servers is malicious. We also
introduce a new data leakage attack DualView that shows that PrivatEyes limits
the leakage of private training data more effectively than previous approaches.
Evaluations on the MPIIGaze, MPIIFaceGaze, GazeCapture, and NVGaze datasets
further show that the improved privacy does not lead to a lower gaze estimation
accuracy or substantially higher computational costs - both of which are on par
with its non-secure counterparts.
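For intuition, the following is a minimal, illustrative Python sketch of MPC-style secure aggregation via additive secret sharing: each client splits its model update into shares sent to different aggregation servers, each server only ever sums shares, and only the recombined total reveals the averaged update. All names and parameters are illustrative assumptions; real protocols (including the one PrivatEyes builds on) work over finite rings and add further checks against malicious servers.

```python
# Illustrative sketch of secure aggregation of federated model updates via
# additive secret sharing. Real MPC protocols use finite-field arithmetic and
# malicious-security checks; real-valued masks are used here only for readability.
import numpy as np

NUM_SERVERS = 3  # hypothetical number of aggregation servers


def share_update(update: np.ndarray, num_servers: int = NUM_SERVERS) -> list[np.ndarray]:
    """Split a client's model update into additive secret shares."""
    masks = [np.random.normal(0.0, 1.0, update.shape) for _ in range(num_servers - 1)]
    last = update - sum(masks)  # all shares sum back to the original update
    return masks + [last]


def secure_aggregate(client_updates: list[np.ndarray]) -> np.ndarray:
    """Average updates so that no single server sees any individual update."""
    server_inputs = [[] for _ in range(NUM_SERVERS)]
    for update in client_updates:
        for server_id, share in enumerate(share_update(update)):
            server_inputs[server_id].append(share)  # one share per client per server
    partial_sums = [sum(shares) for shares in server_inputs]  # local server-side sums
    return sum(partial_sums) / len(client_updates)  # recombining yields the average


# Example: three clients with toy gradient vectors.
updates = [np.array([0.1, -0.2]), np.array([0.3, 0.0]), np.array([-0.1, 0.4])]
print(secure_aggregate(updates))  # approximately the plain average of the updates
```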
Related papers
- Gaze3P: Gaze-Based Prediction of User-Perceived Privacy [13.5969071961191]
We introduce Gaze3P -- the first dataset specifically designed to facilitate investigations into user-perceived privacy. Our dataset comprises gaze data from 100 participants and 1,000 stimuli, encompassing a range of private and safe attributes. With Gaze3P, we train a machine learning model to implicitly and dynamically predict perceived privacy from human eye gaze.
arXiv Detail & Related papers (2025-07-01T09:26:38Z) - QualitEye: Public and Privacy-preserving Gaze Data Quality Verification [13.5969071961191]
QualitEye is the first method for verifying image-based gaze data quality. It employs a new semantic representation of eye images that contains the information required for verification. We evaluate QualitEye on the MPIIFaceGaze and GazeCapture datasets and achieve a high verification performance.
arXiv Detail & Related papers (2025-06-06T09:27:04Z) - Privacy-Preserving Federated Embedding Learning for Localized Retrieval-Augmented Generation [60.81109086640437]
We propose a novel framework called Federated Retrieval-Augmented Generation (FedE4RAG).
FedE4RAG facilitates collaborative training of client-side RAG retrieval models.
We apply homomorphic encryption within federated learning to safeguard model parameters.
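As a hedged illustration of the general idea (not FedE4RAG's actual scheme or parameters), the sketch below aggregates encrypted parameter updates with the additively homomorphic Paillier scheme via the python-paillier (phe) library; all values are toy examples.

```python
# Sketch of additively homomorphic aggregation of model parameters in federated
# learning. Uses python-paillier ("phe") purely for illustration.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each client encrypts its (scalar, toy) parameter update before upload.
client_updates = [0.12, -0.05, 0.30]
encrypted = [public_key.encrypt(u) for u in client_updates]

# The server adds ciphertexts without ever decrypting individual updates.
encrypted_sum = encrypted[0]
for ciphertext in encrypted[1:]:
    encrypted_sum = encrypted_sum + ciphertext

# Only the key holder decrypts the aggregate, so individual contributions
# stay hidden from the aggregating server.
average_update = private_key.decrypt(encrypted_sum) / len(client_updates)
print(average_update)
```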
arXiv Detail & Related papers (2025-04-27T04:26:02Z) - Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy, which allows for controlling the sensitive regions where differential privacy (DP) is applied.
Our method operates selectively on the data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - PrivateGaze: Preserving User Privacy in Black-box Mobile Gaze Tracking Services [32.15755914718391]
PrivateGaze is the first approach that can effectively preserve users' privacy in black-box gaze tracking services.
We propose a novel framework to train a privacy preserver that converts full-face images into obfuscated counterparts.
We show that the obfuscated image can protect users' private information, such as identity and gender, against unauthorized classification.
arXiv Detail & Related papers (2024-08-01T23:11:03Z) - Privacy Preserving Large Language Models: ChatGPT Case Study Based Vision and Framework [6.828884629694705]
This article proposes a conceptual model called PrivChatGPT, a privacy-preserving generative model for LLMs.
PrivChatGPT consists of two main components: preserving user privacy (including private context) during data curation/pre-processing, and a private training process for large-scale data.
arXiv Detail & Related papers (2023-10-19T06:55:13Z) - Unlocking Accuracy and Fairness in Differentially Private Image
Classification [43.53494043189235]
Differential privacy (DP) is considered the gold standard framework for privacy-preserving training.
We show that pre-trained foundation models fine-tuned with DP can achieve similar accuracy to non-private classifiers.
arXiv Detail & Related papers (2023-08-21T17:42:33Z) - A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
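A minimal sketch of the estimate-verify-release control flow is shown below; the estimator, verifier, and mechanism are placeholders rather than the paper's actual constructions.

```python
# Illustrative EVR pipeline: estimate a privacy parameter, verify the estimate,
# and only then release the query output. All callables are placeholders.
from typing import Any, Callable


def evr_release(mechanism: Callable[[Any], Any],
                data: Any,
                estimate_epsilon: Callable[[Callable], float],
                verify: Callable[[Callable, float], bool],
                epsilon_budget: float):
    """Release mechanism(data) only if the estimated privacy level is verified."""
    eps_hat = estimate_epsilon(mechanism)      # 1. estimate the privacy parameter
    if eps_hat > epsilon_budget:
        raise ValueError("estimated epsilon exceeds the privacy budget")
    if not verify(mechanism, eps_hat):         # 2. verify the estimate holds
        raise RuntimeError("privacy estimate could not be verified; abort release")
    return mechanism(data)                     # 3. release the output


# Toy usage with placeholder estimator/verifier that accept a fixed epsilon.
result = evr_release(
    mechanism=lambda d: sum(d) / len(d),
    data=[1.0, 2.0, 3.0],
    estimate_epsilon=lambda m: 0.5,
    verify=lambda m, eps: True,
    epsilon_budget=1.0,
)
print(result)
```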
arXiv Detail & Related papers (2023-04-17T00:38:01Z) - Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms are tight, but they only give tight estimates under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z) - Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z) - Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z) - Production of Categorical Data Verifying Differential Privacy:
Conception and Applications to Machine Learning [0.0]
Differential privacy is a formal definition that allows quantifying the privacy-utility trade-off.
With the local DP (LDP) model, users can sanitize their data locally before transmitting it to the server.
In all cases, we concluded that differentially private ML models achieve nearly the same utility metrics as non-private ones.
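For illustration, a standard LDP mechanism for categorical data is generalized randomized response (k-RR); the sketch below shows local sanitization before transmission and is not necessarily the protocol evaluated in the paper.

```python
# Minimal sketch of local differential privacy for categorical data using
# generalized randomized response (k-RR): each user perturbs their value
# locally before sending it to the server.
import math
import random


def k_rr(value: str, domain: list[str], epsilon: float) -> str:
    """Report the true value with probability p, otherwise a uniform other value."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])


# Example: a user sanitizes a categorical attribute locally before transmission.
domain = ["A", "B", "C", "D"]
print(k_rr("B", domain, epsilon=1.0))
```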
arXiv Detail & Related papers (2022-04-02T12:50:14Z) - Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
arXiv Detail & Related papers (2022-03-22T06:15:43Z) - FedCG: Leverage Conditional GAN for Protecting Privacy and Maintaining Competitive Performance in Federated Learning [11.852346300577494]
Federated learning (FL) aims to protect data privacy by enabling clients to build machine learning models collaboratively without sharing their private data.
Recent works demonstrate that information exchanged during FL is subject to gradient-based privacy attacks.
We propose FedCG, a novel federated learning method that leverages conditional generative adversarial networks to achieve high-level privacy protection.
arXiv Detail & Related papers (2021-11-16T03:20:37Z) - PRICURE: Privacy-Preserving Collaborative Inference in a Multi-Party
Setting [3.822543555265593]
This paper presents PRICURE, a system that combines complementary strengths of secure multi-party computation and differential privacy.
PRICURE enables privacy-preserving collaborative prediction among multiple model owners.
We evaluate PRICURE on neural networks across four datasets including benchmark medical image classification datasets.
arXiv Detail & Related papers (2021-02-19T05:55:53Z)