Discriminative Viewer Identification using Generative Models of Eye Gaze
- URL: http://arxiv.org/abs/2003.11399v1
- Date: Wed, 25 Mar 2020 13:33:18 GMT
- Title: Discriminative Viewer Identification using Generative Models of Eye Gaze
- Authors: Silvia Makowski, Lena A. Jäger, Lisa Schwetlick, Hans Trukenbrod, Ralf Engbert, Tobias Scheffer
- Abstract summary: We study the problem of identifying viewers of arbitrary images based on their eye gaze.
We derive Fisher kernels from different generative models of eye gaze.
Using an SVM with Fisher kernel improves the classification performance over the underlying generative model.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We study the problem of identifying viewers of arbitrary images based on
their eye gaze. Psychological research has derived generative stochastic models
of eye movements. In order to exploit this background knowledge within a
discriminatively trained classification model, we derive Fisher kernels from
different generative models of eye gaze. Experimentally, we find that the
performance of the classifier strongly depends on the underlying generative
model. Using an SVM with Fisher kernel improves the classification performance
over the underlying generative model.
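The Fisher-kernel construction described in the abstract can be sketched as follows. This is a minimal illustration, assuming a diagonal-Gaussian generative model over synthetic two-dimensional gaze features; the paper itself derives kernels from richer psychological models of eye movements. Each sample is mapped to its Fisher score (the gradient of the log-likelihood with respect to the model parameters), and a linear kernel on these score vectors, here with the Fisher information approximated by the identity, is used in an SVM.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy gaze features (e.g. fixation duration, saccade amplitude)
# for two hypothetical viewers; rows are scanpath summaries.
X = np.vstack([rng.normal(0.25, 0.05, (20, 2)),   # viewer A
               rng.normal(0.40, 0.05, (20, 2))])  # viewer B
y = np.array([0] * 20 + [1] * 20)

# Fit the generative model (maximum-likelihood diagonal Gaussian).
mu, var = X.mean(axis=0), X.var(axis=0)

def fisher_score(x):
    """Gradient of log N(x | mu, var) w.r.t. mu and var."""
    d_mu = (x - mu) / var
    d_var = ((x - mu) ** 2 - var) / (2 * var ** 2)
    return np.concatenate([d_mu, d_var])

# Linear kernel on Fisher scores = (unnormalised) Fisher kernel.
U = np.array([fisher_score(x) for x in X])
clf = SVC(kernel="linear").fit(U, y)
print(clf.score(U, y))
```

In the full construction the kernel is K(x, y) = U_x^T F^{-1} U_y with F the Fisher information matrix; dropping F (as above) is a common simplification.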
Related papers
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z)
- GazeFusion: Saliency-Guided Image Generation [50.37783903347613]
Diffusion models offer unprecedented image generation power given just a text prompt.
We present a saliency-guided framework to incorporate the data priors of human visual attention mechanisms into the generation process.
arXiv Detail & Related papers (2024-03-16T21:01:35Z)
- Bridging Generative and Discriminative Models for Unified Visual Perception with Diffusion Priors [56.82596340418697]
We propose a simple yet effective framework comprising a pre-trained Stable Diffusion (SD) model containing rich generative priors, a unified head (U-head) capable of integrating hierarchical representations, and an adapted expert providing discriminative priors.
Comprehensive investigations unveil potential characteristics of Vermouth, such as varying granularity of perception concealed in latent variables at distinct time steps and various U-net stages.
The promising results demonstrate the potential of diffusion models as formidable learners, establishing their significance in furnishing informative and robust visual representations.
arXiv Detail & Related papers (2024-01-29T10:36:57Z)
- Intriguing properties of generative classifiers [14.57861413242093]
We build on advances in generative modeling that turn text-to-image models into classifiers.
These generative classifiers show a record-breaking human-like shape bias (99% for Imagen), near human-level out-of-distribution accuracy, and state-of-the-art alignment with human classification errors.
Our results indicate that while the current dominant paradigm for modeling human object recognition is discriminative inference, zero-shot generative models approximate human object recognition data surprisingly well.
arXiv Detail & Related papers (2023-09-28T18:19:40Z)
- Diffusion Models Beat GANs on Image Classification [37.70821298392606]
Diffusion models have risen to prominence as a state-of-the-art method for image generation, denoising, inpainting, super-resolution, manipulation, etc.
We present our findings that these embeddings are useful beyond the noise prediction task, as they contain discriminative information and can also be leveraged for classification.
We find that with careful feature selection and pooling, diffusion models outperform comparable generative-discriminative methods for classification tasks.
arXiv Detail & Related papers (2023-07-17T17:59:40Z)
- Futuristic Variations and Analysis in Fundus Images Corresponding to Biological Traits [5.0329748402255365]
This study uses cutting-edge deep learning algorithms to estimate biological traits, namely age and gender, and to associate these traits with retinal features.
For the trait association, our study embeds age as label information into the proposed DL model to learn which retinal regions are affected by aging.
Our study analyzes fundus images and their association with biological traits, and predicts the possible spread of ocular disease in fundus images, given age as a condition to the generative model.
arXiv Detail & Related papers (2023-02-08T02:17:22Z)
- Eye Gaze Estimation Model Analysis [2.4366811507669124]
We discuss various model types for eye gaze estimation and present the results from predicting gaze direction using eye landmarks in unconstrained settings.
In unconstrained real-world settings, feature-based and model-based methods are outperformed by recent appearance-based methods due to factors like illumination changes and other visual artifacts.
arXiv Detail & Related papers (2022-07-28T20:40:03Z)
- Learning Robust Representations Of Generative Models Using Set-Based Artificial Fingerprints [14.191129493685212]
Existing methods approximate the distance between the models via their sample distributions.
We consider unique traces (a.k.a. "artificial fingerprints") as representations of generative models.
We propose a new learning method based on set-encoding and contrastive training.
arXiv Detail & Related papers (2022-06-04T23:20:07Z)
- Towards Creativity Characterization of Generative Models via Group-based Subset Scanning [64.6217849133164]
We propose group-based subset scanning to identify, quantify, and characterize creative processes.
We find that creative samples generate larger subsets of anomalies than normal or non-creative samples across datasets.
arXiv Detail & Related papers (2022-03-01T15:07:14Z)
- Bayesian Eye Tracking [63.21413628808946]
Model-based eye tracking is susceptible to eye feature detection errors.
We propose a Bayesian framework for model-based eye tracking.
Compared to state-of-the-art model-based and learning-based methods, the proposed framework demonstrates significant improvement in generalization capability.
arXiv Detail & Related papers (2021-06-25T02:08:03Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
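The two-stage pattern this summary describes can be sketched in a few lines. This is a hedged illustration on synthetic data: PCA stands in for the paper's self-supervised representation learning, and a one-class SVM serves as the downstream one-class classifier.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
inliers = rng.normal(0.0, 1.0, (200, 8))    # one-class training data
outliers = rng.normal(6.0, 1.0, (20, 8))    # unseen anomalies

# Stage 1: learn a representation from one-class data only
# (PCA is a stand-in for the paper's self-supervised features).
rep = PCA(n_components=4).fit(inliers)

# Stage 2: build a one-class classifier on the learned representation.
ocsvm = OneClassSVM(nu=0.1).fit(rep.transform(inliers))

pred_in = ocsvm.predict(rep.transform(inliers))    # +1 = inlier
pred_out = ocsvm.predict(rep.transform(outliers))  # -1 = anomaly
print((pred_in == 1).mean(), (pred_out == -1).mean())
```

The `nu` parameter upper-bounds the fraction of training points treated as margin errors, so roughly 90% of the inliers are accepted here.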
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- Open Set Recognition with Conditional Probabilistic Generative Models [51.40872765917125]
We propose Conditional Probabilistic Generative Models (CPGM) for open set recognition.
CPGM can not only detect unknown samples but also classify known classes by forcing different latent features to approximate conditional Gaussian distributions.
Experiment results on multiple benchmark datasets reveal that the proposed method significantly outperforms the baselines.
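The conditional-Gaussian idea behind this kind of open-set recognition can be sketched as follows. This is a toy illustration, not CPGM itself: hand-set class means and unit-variance Gaussians stand in for the latent distributions the model would learn, and the likelihood threshold is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed per-class latent Gaussian means (unit variance).
means = np.array([[0.0, 0.0], [5.0, 5.0]])

def log_gauss(z, mu):
    # Unit-variance log-density, up to an additive constant.
    return -0.5 * np.sum((z - mu) ** 2)

def classify(z, threshold=-8.0):
    """Return the best-matching known class, or -1 for 'unknown'."""
    scores = [log_gauss(z, mu) for mu in means]
    best = int(np.argmax(scores))
    return best if scores[best] > threshold else -1

print(classify(np.array([0.2, -0.1])))    # near class 0
print(classify(np.array([10.0, -10.0])))  # far from every class
```

A sample far from every class-conditional Gaussian falls below the threshold under all classes and is rejected as unknown, while in-distribution samples are assigned to the nearest class.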
arXiv Detail & Related papers (2020-08-12T06:23:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.