Vulnerability of Appearance-based Gaze Estimation
- URL: http://arxiv.org/abs/2103.13134v1
- Date: Wed, 24 Mar 2021 12:19:59 GMT
- Title: Vulnerability of Appearance-based Gaze Estimation
- Authors: Mingjie Xu, Haofei Wang, Yunfei Liu, Feng Lu
- Abstract summary: Appearance-based gaze estimation has achieved significant improvement by using deep learning.
In this paper, we investigate the vulnerability of appearance-based gaze estimation.
We show that CA-Net is the most robust under attack among four popular appearance-based gaze estimation networks.
- Score: 16.0559382707645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Appearance-based gaze estimation has achieved significant improvement by
using deep learning. However, many deep learning-based methods are vulnerable:
perturbing the raw image with carefully crafted noise confuses the
gaze estimation models. Although the perturbed image visually looks similar to
the original image, the gaze estimation models output the wrong gaze direction.
In this paper, we investigate the vulnerability of appearance-based gaze
estimation. To our knowledge, this is the first time the vulnerability of
gaze estimation has been studied. We systematically characterize the
vulnerability from multiple aspects: pixel-based adversarial attacks,
patch-based adversarial attacks, and defense strategies. Our experimental
results demonstrate that CA-Net shows superior robustness under attack among
the four popular appearance-based gaze estimation networks: Full-Face,
Gaze-Net, CA-Net and RT-GENE. This study draws the attention of the
appearance-based gaze estimation community to defending against adversarial
attacks.
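The pixel-based attack discussed in the abstract can be illustrated with a minimal FGSM-style sketch. This is a generic, hypothetical illustration, not the paper's actual attack code: a toy linear "gaze network" stands in for a real model, and the input is nudged by a small epsilon along the sign of the loss gradient so the perturbed image stays visually close to the original while the predicted gaze drifts away from the ground truth.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.03):
    # One-step FGSM: shift every pixel by epsilon along the sign of the
    # loss gradient, then clip back to the valid pixel range [0, 1].
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 64)) / 8.0   # toy linear "gaze network": (yaw, pitch) = W @ x
x = rng.uniform(size=64)             # flattened 8x8 eye patch, pixels in [0, 1]
y_true = np.array([0.1, -0.2])       # hypothetical ground-truth gaze angles (radians)

def gaze_loss(x):
    err = W @ x - y_true
    return 0.5 * float(err @ err)    # squared-error surrogate for angular error

grad = W.T @ (W @ x - y_true)        # analytic gradient of the loss w.r.t. the input
x_adv = fgsm_perturb(x, grad)

# The perturbation is bounded (L_inf <= epsilon), yet the loss increases.
print(gaze_loss(x), gaze_loss(x_adv))
```

In a real attack, the gradient would come from backpropagation through the trained network rather than a closed form; a patch-based attack differs only in restricting the perturbation to a small pixel region instead of the whole image.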
Related papers
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- Gazing Into Missteps: Leveraging Eye-Gaze for Unsupervised Mistake Detection in Egocentric Videos of Skilled Human Activities [25.049754180292034]
We address the challenge of unsupervised mistake detection in egocentric video through the analysis of gaze signals.
Based on the observation that eye movements closely follow object manipulation activities, we assess to what extent eye-gaze signals can support mistake detection.
Inconsistencies between predicted and observed gaze trajectories act as an indicator to identify mistakes.
arXiv Detail & Related papers (2024-06-12T16:29:45Z)
- Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- Semi-supervised Contrastive Regression for Estimation of Eye Gaze [0.609170287691728]
This paper develops a semi-supervised contrastive learning framework for estimation of gaze direction.
With a small labeled gaze dataset, the framework is able to find a generalized solution even for unseen face images.
Our contrastive regression framework performs well compared to several state-of-the-art contrastive learning techniques used for gaze estimation.
arXiv Detail & Related papers (2023-08-05T04:11:38Z)
- Contrastive Weighted Learning for Near-Infrared Gaze Estimation [0.228438857884398]
This paper proposes GazeCWL, a novel framework for gaze estimation with near-infrared images using contrastive learning.
Our model outperforms previous domain generalization models in infrared image based gaze estimation.
arXiv Detail & Related papers (2022-11-06T10:03:23Z)
- Jitter Does Matter: Adapting Gaze Estimation to New Domains [12.482427155726413]
We propose to utilize gaze jitter to analyze and optimize gaze domain adaptation task.
We find that the high-frequency component (HFC) is an important factor that leads to jitter.
We employ contrastive learning to encourage the model to obtain similar representations between original and perturbed data.
arXiv Detail & Related papers (2022-10-05T08:20:41Z)
- LatentGaze: Cross-Domain Gaze Estimation through Gaze-Aware Analytic Latent Code Manipulation [0.0]
We propose a gaze-aware analytic manipulation method, based on a data-driven approach with generative adversarial network inversion's disentanglement characteristics.
By utilizing a GAN-based encoder-generator process, we shift the input image from the target domain to the source domain, with which the gaze estimator is sufficiently familiar.
arXiv Detail & Related papers (2022-09-21T08:05:53Z)
- Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis [61.68061613161187]
Z-Mask is a robust and effective strategy to improve the robustness of convolutional networks against adversarial attacks.
The presented defense relies on specific Z-score analysis performed on the internal network features to detect and mask the pixels corresponding to adversarial objects in the input image.
Additional experiments showed that Z-Mask is also robust against possible defense-aware attacks.
arXiv Detail & Related papers (2022-03-14T17:41:46Z)
- Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks [104.8737334237993]
We present comprehensive investigations into the vulnerability of deep image-to-image models to adversarial attacks.
For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints.
We show that unlike in image classification tasks, the performance degradation on image-to-image tasks can largely differ depending on various factors.
arXiv Detail & Related papers (2021-04-30T14:20:33Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find a correlation between perturbations and prediction confidence, which guides us to detect few-perturbation attacks through prediction confidence.
We propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
- Face Anti-Spoofing Via Disentangled Representation Learning [90.90512800361742]
Face anti-spoofing is crucial to security of face recognition systems.
We propose a novel perspective of face anti-spoofing that disentangles the liveness features and content features from images.
arXiv Detail & Related papers (2020-08-19T03:54:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.