QualitEye: Public and Privacy-preserving Gaze Data Quality Verification
- URL: http://arxiv.org/abs/2506.05908v1
- Date: Fri, 06 Jun 2025 09:27:04 GMT
- Title: QualitEye: Public and Privacy-preserving Gaze Data Quality Verification
- Authors: Mayar Elfares, Pascal Reisert, Ralf Küsters, Andreas Bulling
- Abstract summary: QualitEye is the first method for verifying image-based gaze data quality. It employs a new semantic representation of eye images that contains the information required for verification. We evaluate QualitEye on the MPIIFaceGaze and GazeCapture datasets and achieve a high verification performance.
- Score: 13.5969071961191
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Gaze-based applications are increasingly advancing with the availability of large datasets, but ensuring data quality presents a substantial challenge when collecting data at scale. It further requires different parties to collaborate; therefore, privacy concerns arise. We propose QualitEye--the first method for verifying image-based gaze data quality. QualitEye employs a new semantic representation of eye images that contains the information required for verification while excluding irrelevant information for better domain adaptation. QualitEye covers a public setting, where parties can freely exchange data, and a privacy-preserving setting, where parties can neither reveal their raw data nor derive the gaze features/labels of others, using adapted private set intersection protocols. We evaluate QualitEye on the MPIIFaceGaze and GazeCapture datasets and achieve high verification performance (with a small runtime overhead for the privacy-preserving versions). Hence, QualitEye paves the way for new gaze analysis methods at the intersection of machine learning, human-computer interaction, and cryptography.
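The privacy-preserving setting hinges on private set intersection: two parties learn how much their quality labels agree without exchanging the labels themselves. The sketch below is a toy, hash-based illustration of that idea, not QualitEye's actual protocol; the shared key, bucketed labels, and sample identifiers are assumptions made only for the example.

```python
# Toy illustration of quality verification via (naive) private set intersection.
# NOTE: this is NOT the QualitEye protocol; real PSI protocols use OPRFs or
# similar so that no pre-shared key is needed. All names here are hypothetical.
import hmac
import hashlib

def commit(items, key: bytes):
    """HMAC each (sample_id, label_bucket) pair so raw labels are never exchanged."""
    return {
        hmac.new(key, f"{sid}|{bucket}".encode(), hashlib.sha256).hexdigest()
        for sid, bucket in items
    }

def agreement_rate(party_a_items, party_b_items, key: bytes) -> float:
    """Fraction of samples on which both parties' quality labels agree."""
    a, b = commit(party_a_items, key), commit(party_b_items, key)
    return len(a & b) / max(len(a), 1)

# Hypothetical usage: both parties bucket their gaze-quality labels identically.
shared_key = b"pre-shared-demo-key"          # assumption: agreed out of band
alice = [("img_001", "ok"), ("img_002", "blurry"), ("img_003", "ok")]
bob   = [("img_001", "ok"), ("img_002", "ok"),     ("img_003", "ok")]
print(agreement_rate(alice, bob, shared_key))  # -> 0.666...
```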
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on centralized training over raw data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
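As a rough sketch of the federated training pattern this entry refers to (clients train locally, the server only averages their updates and never sees raw videos), assuming nothing about the paper's personalized representation:

```python
# Minimal FedAvg-style round: only weight updates leave the clients.
# Purely illustrative; the paper's personalized representation is not modeled.
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """One step of toy local training: move weights toward the local data mean."""
    grad = global_weights - local_data.mean(axis=0)
    return global_weights - lr * grad

def federated_round(global_weights, client_datasets):
    client_weights = [local_update(global_weights, d) for d in client_datasets]
    return np.mean(client_weights, axis=0)   # server averages, never sees data

rng = np.random.default_rng(0)
clients = [rng.normal(loc=i, size=(32, 4)) for i in range(3)]
w = np.zeros(4)
for _ in range(5):
    w = federated_round(w, clients)
print(w)   # converges toward the mean of the client means
```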
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z) - SHAN: Object-Level Privacy Detection via Inference on Scene Heterogeneous Graph [5.050631286347773]
Privacy object detection aims to accurately locate private objects in images.
Existing methods suffer from serious deficiencies in accuracy, generalization, and interpretability.
We propose SHAN, a Scene Heterogeneous graph Attention Network: a model that constructs a scene heterogeneous graph from an image.
arXiv Detail & Related papers (2024-03-14T08:32:14Z) - PrivatEyes: Appearance-based Gaze Estimation Using Federated Secure
Multi-Party Computation [10.50795947657397]
PrivatEyes is a privacy-enhancing approach for appearance-based gaze estimation based on federated learning (FL) and secure multi-party computation (MPC).
PrivatEyes enables training gaze estimators on multiple local datasets across different users and server-based secure aggregation of the individual estimators' updates.
A new data leakage attack, DualView, shows that PrivatEyes limits the leakage of private training data more effectively than previous approaches.
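A toy illustration of the secure-aggregation idea underlying FL+MPC approaches like this one: clients add pairwise masks that cancel in the sum, so the server only ever learns the aggregate update. Real protocols handle dropouts, finite-field arithmetic, and key agreement, all of which are omitted here.

```python
# Toy pairwise-masking secure aggregation; not PrivatEyes' actual protocol.
import numpy as np

rng = np.random.default_rng(0)

def mask_updates(updates):
    n, dim = len(updates), updates[0].shape[0]
    masked = [u.copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=dim)
            masked[i] += m      # client i adds the pairwise mask
            masked[j] -= m      # client j subtracts it, so it cancels in the sum
    return masked

clients = [np.array([0.1, 0.2]), np.array([0.3, -0.1]), np.array([-0.2, 0.4])]
masked = mask_updates(clients)
server_sum = sum(masked)                      # server only ever sees masked updates
print(np.allclose(server_sum, sum(clients)))  # True: the aggregate is preserved
```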
arXiv Detail & Related papers (2024-02-29T09:19:06Z) - A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
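For context, a minimal sketch of the basic DP release primitive that such sanitized publishing builds on: the Laplace mechanism applied to a count query. The epsilon value and data below are placeholders, not taken from the paper.

```python
# Laplace mechanism for an epsilon-DP count query (sensitivity of a count is 1).
import numpy as np

def dp_count(values, predicate, epsilon: float, rng=np.random.default_rng()):
    """Release a noisy count of items satisfying the predicate."""
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 31, 45, 52, 29, 61]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # noisy count of people >= 40
```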
arXiv Detail & Related papers (2023-09-27T14:38:16Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - Privacy-preserving Object Detection [52.77024349608834]
We show that for object detection on COCO, both anonymizing the dataset by blurring faces and swapping faces in a balanced manner along the gender and skin tone dimensions can retain object detection performance while preserving privacy and partially balancing bias.
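A rough sketch of the face-blurring style of anonymization evaluated in the paper, using OpenCV's stock Haar cascade; the detection parameters and blur kernel are illustrative choices, and the paper's balanced face swapping is not shown.

```python
# Blur detected faces before using images for object detection training.
# Cascade, kernel size, and scale factors are illustrative, not the paper's settings.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_faces(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    out = image_bgr.copy()
    for (x, y, w, h) in faces:
        out[y:y + h, x:x + w] = cv2.GaussianBlur(out[y:y + h, x:x + w], (51, 51), 0)
    return out

# Hypothetical usage:
# img = cv2.imread("coco_example.jpg")
# cv2.imwrite("coco_example_anon.jpg", blur_faces(img))
```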
arXiv Detail & Related papers (2021-03-11T10:34:54Z) - Towards End-to-end Video-based Eye-Tracking [50.0630362419371]
Estimating eye-gaze from images alone is a challenging task due to unobservable person-specific factors.
We propose a novel dataset and accompanying method which aims to explicitly learn these semantic and temporal relationships.
We demonstrate that fusing information from visual stimuli as well as eye images can lead to performance similar to figures reported in the literature.
arXiv Detail & Related papers (2020-07-26T12:39:15Z) - Differential Privacy for Eye Tracking with Temporal Correlations [30.44437258959343]
New-generation head-mounted displays, such as VR and AR glasses, are coming onto the market with eye tracking already integrated.
Since eye movement properties contain biometric information, privacy concerns have to be handled properly.
We propose a novel transform-coding based differential privacy mechanism to further adapt it to the statistics of eye movement feature data.
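A minimal sketch of the transform-then-perturb idea (not the paper's calibrated mechanism): decorrelate a gaze-feature time series with a DCT, add Laplace noise to the retained coefficients, and invert. Epsilon, sensitivity, and the number of coefficients kept are placeholder values.

```python
# Transform-coding style DP sketch for a 1-D eye-movement feature series.
import numpy as np
from scipy.fft import dct, idct

def dp_transform_release(signal, epsilon=1.0, keep=16, sensitivity=1.0,
                         rng=np.random.default_rng()):
    coeffs = dct(signal, norm="ortho")
    coeffs[keep:] = 0.0                                 # drop high-frequency detail
    coeffs[:keep] += rng.laplace(scale=sensitivity / epsilon, size=keep)
    return idct(coeffs, norm="ortho")                   # reconstruct the noisy series

pupil_diameter = np.sin(np.linspace(0, 8 * np.pi, 256)) + 3.0   # synthetic series
private_series = dp_transform_release(pupil_diameter)
print(private_series.shape)  # (256,)
```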
arXiv Detail & Related papers (2020-02-20T19:01:34Z) - Privacy-Preserving Image Classification in the Local Setting [17.375582978294105]
Local Differential Privacy (LDP) offers a promising solution, allowing data owners to randomly perturb their input to provide plausible deniability for the data before release.
In this paper, we consider a two-party image classification problem, in which data owners hold the image and the untrustworthy data user would like to fit a machine learning model with these images as input.
We propose a supervised image feature extractor, DCAConv, which produces an image representation with scalable domain size.
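A toy sketch of the local perturbation step (not DCAConv itself): each data owner clips and noises a bounded feature vector before sending it to the untrusted data user. The clipping bound and epsilon are illustrative.

```python
# Local DP perturbation of an image feature vector before release.
import numpy as np

def ldp_perturb(features, epsilon=2.0, bound=1.0, rng=np.random.default_rng()):
    clipped = np.clip(features, -bound, bound)          # enforce bounded sensitivity
    sensitivity = 2.0 * bound * features.size           # L1 sensitivity of the vector
    noise = rng.laplace(scale=sensitivity / epsilon, size=features.shape)
    return clipped + noise

local_feature = np.random.default_rng(1).uniform(-1, 1, size=64)
print(ldp_perturb(local_feature)[:4])   # noisy features safe to share
```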
arXiv Detail & Related papers (2020-02-09T01:25:52Z)