Learning User Embeddings from Human Gaze for Personalised Saliency Prediction
- URL: http://arxiv.org/abs/2403.13653v2
- Date: Tue, 26 Mar 2024 08:45:09 GMT
- Title: Learning User Embeddings from Human Gaze for Personalised Saliency Prediction
- Authors: Florian Strohm, Mihai Bâce, Andreas Bulling
- Abstract summary: We present a novel method to extract user embeddings from pairs of natural images and corresponding saliency maps.
At the core of our method is a Siamese convolutional neural encoder that learns the user embeddings by contrasting the image and personal saliency map pairs of different users.
- Score: 12.361829928359136
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Reusable embeddings of user behaviour have shown significant performance improvements for the personalised saliency prediction task. However, prior works require explicit user characteristics and preferences as input, which are often difficult to obtain. We present a novel method to extract user embeddings from pairs of natural images and corresponding saliency maps generated from a small amount of user-specific eye tracking data. At the core of our method is a Siamese convolutional neural encoder that learns the user embeddings by contrasting the image and personal saliency map pairs of different users. Evaluations on two public saliency datasets show that the generated embeddings have high discriminative power, are effective at refining universal saliency maps to the individual users, and generalise well across users and images. Finally, based on our model's ability to encode individual user characteristics, our work points towards other applications that can benefit from reusable embeddings of gaze behaviour.
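To make the core idea concrete, here is a minimal PyTorch sketch of a Siamese contrastive setup in this spirit. It is not the authors' implementation; the architecture, input resolution, embedding size, and margin are all illustrative assumptions.

```python
# Hedged sketch (not the paper's code): a Siamese convolutional encoder maps an
# (image, personal saliency map) pair to a user embedding; a contrastive loss
# pulls together pairs from the same user and pushes apart different users.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UserEncoder(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        # 4 input channels: RGB image + 1-channel personal saliency map.
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, embed_dim)

    def forward(self, image, saliency):
        x = torch.cat([image, saliency], dim=1)
        return F.normalize(self.fc(self.conv(x).flatten(1)), dim=-1)

def contrastive_loss(z_a, z_b, same_user, margin=0.5):
    # same_user is 1 if both pairs come from the same user, else 0.
    d = (z_a - z_b).norm(dim=-1)
    return (same_user * d.pow(2) +
            (1 - same_user) * F.relu(margin - d).pow(2)).mean()

# Toy usage: both branches share the same weights (Siamese).
enc = UserEncoder()
img_a, sal_a = torch.rand(8, 3, 64, 64), torch.rand(8, 1, 64, 64)
img_b, sal_b = torch.rand(8, 3, 64, 64), torch.rand(8, 1, 64, 64)
same = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(enc(img_a, sal_a), enc(img_b, sal_b), same)
loss.backward()
```

Weight sharing across the two branches is what makes the encoder Siamese: contrasting same-user against different-user pairs is what gives the embeddings the discriminative power the abstract reports.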
Related papers
- ViPer: Visual Personalization of Generative Models via Individual Preference Learning [11.909247529297678]
We propose to personalize the image generation process by capturing the user's generic preferences in a one-time process in which they comment on a small selection of images. Based on these comments, we infer the user's structured liked and disliked visual attributes.
These attributes are used to guide a text-to-image model toward producing images that are tuned towards the individual user's visual preference.
arXiv Detail & Related papers (2024-07-24T15:42:34Z)
- Cross-domain Transfer of Valence Preferences via a Meta-optimization Approach [17.545983294377958]
CVPM formalizes cross-domain interest transfer as a hybrid architecture of meta-learning and self-supervised learning.
Using deep insights into user preferences, we employ differentiated encoders to learn their distributions.
In particular, we treat each user's mapping as two parts: a common transformation and a personalized bias, where the network that generates the personalized bias is produced by a meta-learner.
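The following PyTorch sketch illustrates that decomposition under our own simplifying assumptions; the dimensions, the linear common transformation, and a meta-learner that directly emits the parameters of a tiny per-user bias network are illustrative, not the CVPM implementation.

```python
# Hedged sketch: map a source-domain user embedding to the target domain with a
# shared ("common") transformation plus a personalized bias, where the bias
# network's parameters are generated by a meta-learner from the user's
# characteristic (preference-distribution) vector.
import torch
import torch.nn as nn

class PreferenceMapping(nn.Module):
    def __init__(self, dim=64, char_dim=32):
        super().__init__()
        self.common = nn.Linear(dim, dim)   # shared across all users
        # Meta-learner: emits the weights of a tiny per-user bias network.
        self.meta = nn.Linear(char_dim, dim * char_dim + dim)
        self.dim, self.char_dim = dim, char_dim

    def forward(self, user_emb, user_char):
        params = self.meta(user_char)       # per-user parameters
        W = params[: self.dim * self.char_dim].view(self.dim, self.char_dim)
        b = params[self.dim * self.char_dim:]
        personal_bias = W @ user_char + b   # per-user bias network
        return self.common(user_emb) + personal_bias

mapper = PreferenceMapping()
src_emb, char = torch.rand(64), torch.rand(32)
tgt_emb = mapper(src_emb, char)             # transferred preference
```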
arXiv Detail & Related papers (2024-06-24T10:02:24Z)
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on their interaction history on the platform.
Most sequential recommenders, however, lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z)
- Personalizing Intervened Network for Long-tailed Sequential User Behavior Modeling [66.02953670238647]
Tail users suffer from significantly lower-quality recommendations than head users after joint training.
A model trained separately on tail users still achieves inferior results due to limited data.
We propose a novel approach that significantly improves the recommendation performance of the tail users.
arXiv Detail & Related papers (2022-08-19T02:50:19Z)
- Show Me What I Like: Detecting User-Specific Video Highlights Using Content-Based Multi-Head Attention [58.44096082508686]
We propose a method to detect individualized highlights for users on given target videos based on their preferred highlight clips marked on previous videos they have watched.
Our method explicitly leverages the contents of both the preferred clips and the target videos using pre-trained features for the objects and the human activities.
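A minimal sketch of such a content-based attention scorer follows; it is our illustration, not the paper's model, and the feature dimension, number of heads, and scoring head are assumptions.

```python
# Hedged sketch: score clips of a target video by attending from each clip's
# pre-trained features to the features of the user's previously liked clips.
import torch
import torch.nn as nn

class HighlightScorer(nn.Module):
    def __init__(self, feat_dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, heads, batch_first=True)
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, target_clips, preferred_clips):
        # target_clips: (B, T, D) clip features of the target video
        # preferred_clips: (B, P, D) features of the user's liked clips
        attended, _ = self.attn(query=target_clips,
                                key=preferred_clips,
                                value=preferred_clips)
        return self.score(attended).squeeze(-1)   # (B, T) highlight scores

scorer = HighlightScorer()
target = torch.rand(2, 20, 512)   # e.g. object/activity features per clip
liked = torch.rand(2, 5, 512)
scores = scorer(target, liked)    # higher = more likely a personal highlight
```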
arXiv Detail & Related papers (2022-07-18T02:32:48Z)
- FaIRCoP: Facial Image Retrieval using Contrastive Personalization [43.293482565385055]
Retrieving facial images from attributes plays a vital role in various systems such as face recognition and suspect identification.
Existing methods do so by comparing specific characteristics from the user's mental image against the suggested images.
We propose a method that uses the user's feedback to label images as either similar or dissimilar to the target image.
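One plausible reading of this contrastive feedback step is sketched below; the loss form and the session-embedding formulation are our assumptions, not the FaIRCoP implementation.

```python
# Hedged sketch: images the user marks "similar" to their mental target are
# pulled toward a session embedding, "dissimilar" ones are pushed away.
import torch
import torch.nn.functional as F

def feedback_loss(session_emb, img_embs, labels, margin=0.5):
    # session_emb: (D,) running estimate of the user's target
    # img_embs: (N, D) candidates; labels: (N,) 1 = similar, 0 = dissimilar
    d = (img_embs - session_emb).norm(dim=-1)
    return (labels * d.pow(2) + (1 - labels) * F.relu(margin - d).pow(2)).mean()

session = torch.zeros(128, requires_grad=True)
imgs = F.normalize(torch.rand(16, 128), dim=-1)
labels = torch.randint(0, 2, (16,)).float()
feedback_loss(session, imgs, labels).backward()   # refines the target estimate
```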
arXiv Detail & Related papers (2022-05-28T09:52:09Z)
- UserIdentifier: Implicit User Representations for Simple and Effective Personalized Sentiment Analysis [36.162520010250056]
We propose UserIdentifier, a novel scheme for training a single shared model for all users.
Our approach produces personalized responses by adding fixed, non-trainable user identifiers to the input data.
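A minimal sketch of this scheme follows; the identifier format shown is an assumption for illustration.

```python
# Hedged sketch: each user gets a fixed, non-trainable identifier string that
# is simply prepended to their inputs, so one shared sentiment model can make
# personalized predictions without any per-user parameters.
import random
import string

def make_user_identifier(user_seed, length=10):
    # Fixed random token sequence per user; never updated during training.
    rng = random.Random(user_seed)
    return " ".join(rng.choice(string.ascii_lowercase) for _ in range(length))

def personalize_input(user_id, text):
    return f"{make_user_identifier(user_id)} {text}"

# The same shared classifier sees user-specific inputs:
print(personalize_input(42, "The battery life is amazing!"))
print(personalize_input(7,  "The battery life is amazing!"))
```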
arXiv Detail & Related papers (2021-10-01T00:21:33Z)
- Personalized Visualization Recommendation [40.838444709402694]
We introduce the problem of personalized visualization recommendation and present a generic learning framework for solving it.
In particular, we focus on recommending visualizations personalized for each individual user based on their past visualization interactions.
We release our user-centric visualization corpus consisting of 17.4k users exploring 94k datasets with 2.3 million attributes and 32k user-generated visualizations.
arXiv Detail & Related papers (2021-02-12T04:06:34Z)
- Privacy-Preserving Image Features via Adversarial Affine Subspace Embeddings [72.68801373979943]
Many computer vision systems require users to upload image features to the cloud for processing and storage.
We propose a new privacy-preserving feature representation.
Compared to the original features, our approach makes it significantly more difficult for an adversary to recover private information.
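The summary leaves the construction implicit; the sketch below shows one heavily simplified reading of an adversarial affine subspace embedding (our illustration, not the paper's exact method): the true feature is hidden inside an affine subspace that also passes through decoy features.

```python
# Hedged sketch: instead of uploading a raw feature vector, lift it to an
# affine subspace spanned by directions to decoy features, so the true point
# cannot be singled out; matching uses point-to-subspace distances.
import numpy as np

def lift_to_subspace(feature, decoys):
    # Affine subspace through `feature`, spanned by directions to the decoys.
    directions = decoys - feature                 # (k, D)
    basis, _ = np.linalg.qr(directions.T)         # orthonormal basis (D, k)
    # Shift the stored anchor within the subspace to hide the true point.
    anchor = feature + basis @ np.random.randn(basis.shape[1])
    return anchor, basis

def point_to_subspace_dist(query, anchor, basis):
    diff = query - anchor
    return np.linalg.norm(diff - basis @ (basis.T @ diff))

feat = np.random.randn(128)
decoys = np.random.randn(2, 128)                  # e.g. features of other images
anchor, basis = lift_to_subspace(feat, decoys)
print(point_to_subspace_dist(feat, anchor, basis))  # ~0: true feature lies on it
```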
arXiv Detail & Related papers (2020-06-11T17:29:48Z)
- Adversarial Learning for Personalized Tag Recommendation [61.76193196463919]
We propose an end-to-end deep network which can be trained on large-scale datasets.
A joint training of user-preference and visual encoding allows the network to efficiently integrate the visual preference with tagging behavior.
We demonstrate the effectiveness of the proposed model on two different large-scale and publicly available datasets.
arXiv Detail & Related papers (2020-04-01T20:41:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.