Personalized Visualization Recommendation
- URL: http://arxiv.org/abs/2102.06343v1
- Date: Fri, 12 Feb 2021 04:06:34 GMT
- Title: Personalized Visualization Recommendation
- Authors: Xin Qian, Ryan A. Rossi, Fan Du, Sungchul Kim, Eunyee Koh, Sana Malik,
Tak Yeon Lee, Nesreen K. Ahmed
- Abstract summary: We introduce the problem of personalized visualization recommendation and present a generic learning framework for solving it.
In particular, we focus on recommending visualizations personalized for each individual user based on their past visualization interactions.
We release our user-centric visualization corpus consisting of 17.4k users exploring 94k datasets with 2.3 million attributes and 32k user-generated visualizations.
- Score: 40.838444709402694
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visualization recommendation work has focused solely on scoring
visualizations based on the underlying dataset and not the actual user and
their past visualization feedback. These systems recommend the same
visualizations for every user, even though users' underlying interests, intent,
and visualization preferences are likely to be fundamentally different, yet
vitally important. In this work, we formally introduce the problem of
personalized visualization recommendation and present a generic learning
framework for solving it. In particular, we focus on recommending
visualizations personalized for each individual user based on their past
visualization interactions (e.g., viewed, clicked, manually created) along with
the data from those visualizations. More importantly, the framework can learn
from visualizations relevant to other users, even if the visualizations are
generated from completely different datasets. Experiments demonstrate the
effectiveness of the approach as it leads to higher quality visualization
recommendations tailored to the specific user intent and preferences. To
support research on this new problem, we release our user-centric visualization
corpus consisting of 17.4k users exploring 94k datasets with 2.3 million
attributes and 32k user-generated visualizations.
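No reference implementation accompanies the abstract; the sketch below illustrates one plausible form of such a personalized scorer, combining a learned per-user embedding with dataset-attribute and chart-configuration features. All class names, feature dimensions, and encodings here are illustrative assumptions, not the paper's model.
```python
# Hypothetical sketch, not the paper's actual model.
import torch
import torch.nn as nn

class PersonalizedVisScorer(nn.Module):
    def __init__(self, num_users: int, dim: int = 64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)    # one vector per user
        self.attr_proj = nn.Linear(8, dim)     # dataset-attribute meta-features
        self.config_proj = nn.Linear(16, dim)  # chart-configuration features

    def forward(self, user_id, attr_feats, config_feats):
        # Attributes enter only through shared meta-features, so the scorer
        # can rank visualizations built from datasets the user never touched.
        u = self.user_emb(user_id)
        v = self.attr_proj(attr_feats) + self.config_proj(config_feats)
        return (u * v).sum(-1)   # higher score = more relevant to this user

model = PersonalizedVisScorer(num_users=17_400)
score = model(torch.tensor([3]), torch.randn(1, 8), torch.randn(1, 16))
```
Because the dataset side is described by shared meta-features rather than dataset-specific IDs, a scorer of this shape can in principle learn from visualizations of other users on entirely different datasets, in the spirit of the framework described above.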
Related papers
- VisualLens: Personalization through Visual History [32.938501645752126]
We propose a novel approach, VisualLens, that extracts, filters, and refines image representations, and leverages these signals for personalization.
Our approach paves the way for personalized recommendations in scenarios where traditional methods fail.
arXiv Detail & Related papers (2024-11-25T01:45:42Z)
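The VisualLens abstract names an extract/filter/refine pipeline without giving details; below is a minimal, purely illustrative sketch of one history-to-profile-to-rerank flow. The filtering heuristic and all function names are assumptions, not the VisualLens method.
```python
# Hypothetical sketch of an extract -> filter -> aggregate -> rerank flow.
import numpy as np

def build_user_profile(history_embs: np.ndarray, keep: int = 50) -> np.ndarray:
    """Keep the history-image embeddings closest to the mean (a crude
    relevance filter) and average them into one normalized profile."""
    dists = np.linalg.norm(history_embs - history_embs.mean(axis=0), axis=1)
    profile = history_embs[np.argsort(dists)[:keep]].mean(axis=0)
    return profile / np.linalg.norm(profile)

def rerank(candidates: np.ndarray, profile: np.ndarray) -> np.ndarray:
    """Order candidate-item embeddings by cosine similarity to the profile."""
    sims = candidates @ profile / np.linalg.norm(candidates, axis=1)
    return np.argsort(-sims)

order = rerank(np.random.randn(10, 128),
               build_user_profile(np.random.randn(200, 128)))
```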
- Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text [59.68239795065175]
We conduct a user study where users are shown a question and asked what they would prefer to see.
We use the data to establish that a user's personal traits do influence the data outputs that they prefer.
arXiv Detail & Related papers (2024-11-12T00:24:31Z)
- Freeview Sketching: View-Aware Fine-Grained Sketch-Based Image Retrieval [85.73149096516543]
We address the choice of viewpoint during sketch creation in Fine-Grained Sketch-Based Image Retrieval (FG-SBIR).
A pilot study highlights the system's struggle when query-sketches differ in viewpoint from target instances.
To reconcile this, we advocate for a view-aware system, seamlessly accommodating both view-agnostic and view-specific tasks.
arXiv Detail & Related papers (2024-07-01T21:20:44Z)
- Learning User Embeddings from Human Gaze for Personalised Saliency Prediction [12.361829928359136]
We present a novel method to extract user embeddings from pairs of natural images and corresponding saliency maps.
At the core of our method is a Siamese convolutional neural encoder that learns the user embeddings by contrasting the image and personal saliency map pairs of different users.
arXiv Detail & Related papers (2024-03-20T14:58:40Z)
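A minimal sketch of the Siamese setup described above, contrasting (image, personal saliency map) pairs across users. Channel counts, layer sizes, and the margin loss are assumptions rather than the paper's exact design.
```python
# Hypothetical sketch; dimensions and loss form are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyUserEncoder(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        # The RGB image and its 1-channel saliency map are stacked as input.
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, dim)

    def forward(self, img, sal):
        x = torch.cat([img, sal], dim=1)               # (B, 3+1, H, W)
        return F.normalize(self.head(self.conv(x).flatten(1)), dim=-1)

def contrastive_loss(z_a, z_b, same_user, margin=0.5):
    # Pull embeddings of the same user together, push different users apart.
    d = (z_a - z_b).norm(dim=-1)
    return torch.where(same_user, d**2, F.relu(margin - d)**2).mean()

enc = SaliencyUserEncoder()
z = enc(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
```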
- Semantic Snapping for Guided Multi-View Visualization Design [6.8323414329956265]
We present semantic snapping, an approach to help non-expert users design effective multi-view visualizations.
Our method uses an on-the-fly procedure to detect and suggest resolutions for conflicting, misleading, or ambiguous designs.
arXiv Detail & Related papers (2021-09-17T07:40:56Z)
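As one illustration of the kind of conflict such a system might detect, the toy check below flags two views that encode the same field with different color scales and suggests snapping to the first one. This is a stand-in, not the paper's detection procedure.
```python
# Hypothetical conflict check; view spec format is an assumption.
def find_color_conflicts(views):
    """views: list of dicts like {'field': 'sales', 'color_scale': 'viridis'}."""
    first_scale = {}                     # field -> color scale first used for it
    suggestions = []
    for i, view in enumerate(views):
        field, scale = view["field"], view["color_scale"]
        if field in first_scale and first_scale[field] != scale:
            # Suggested resolution: snap this view to the scale already in use.
            suggestions.append((i, {"color_scale": first_scale[field]}))
        first_scale.setdefault(field, scale)
    return suggestions

views = [{"field": "sales", "color_scale": "viridis"},
         {"field": "sales", "color_scale": "plasma"}]
print(find_color_conflicts(views))       # [(1, {'color_scale': 'viridis'})]
```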
- Insight-centric Visualization Recommendation [47.690901962177996]
We introduce a novel class of visualization recommendation systems that automatically rank and recommend both groups of related insights as well as the most important insights within each group.
A key advantage is that this approach generalizes to a wide variety of attribute types such as categorical, numerical, and temporal, as well as complex non-trivial combinations of these different attribute types.
We conducted a user study with 12 participants and two datasets which showed that users are able to quickly understand and find relevant insights in unfamiliar data.
arXiv Detail & Related papers (2021-03-21T03:30:22Z)
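A toy sketch of the group-then-rank structure described above: score insights, group related ones, rank groups by their strongest member, and rank insights within each group. The scoring and grouping functions here are placeholders, not the paper's.
```python
# Hypothetical two-level ranking; insight format is an assumption.
from collections import defaultdict

def recommend_insights(insights, score, group_key, top_groups=3, per_group=2):
    groups = defaultdict(list)
    for ins in insights:
        groups[group_key(ins)].append(ins)
    for members in groups.values():
        members.sort(key=score, reverse=True)        # rank within each group
    ranked = sorted(groups.items(),                  # rank groups by best member
                    key=lambda kv: score(kv[1][0]), reverse=True)
    return [(k, members[:per_group]) for k, members in ranked[:top_groups]]

insights = [{"attr": "price", "kind": "outlier", "strength": 0.9},
            {"attr": "price", "kind": "trend", "strength": 0.4},
            {"attr": "date", "kind": "seasonality", "strength": 0.7}]
print(recommend_insights(insights, score=lambda i: i["strength"],
                         group_key=lambda i: i["attr"]))
```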
- ML-based Visualization Recommendation: Learning to Recommend Visualizations from Data [44.90479301447387]
Visualization recommendation seeks to automatically generate, score, and recommend useful visualizations to users.
We propose the first end-to-end ML-based visualization recommendation system that takes as input a large corpus of datasets and visualizations.
We show that our end-to-end ML-based system recommends more effective and useful visualizations compared to existing state-of-the-art rule-based systems.
arXiv Detail & Related papers (2020-09-25T16:13:29Z)
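The abstract does not show the model; the sketch below illustrates one common way such a scorer could be trained, ranking user-created visualizations above sampled negative configurations with a BPR-style pairwise loss. The architecture and feature sizes are assumptions.
```python
# Hypothetical training sketch, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

scorer = nn.Sequential(      # input: dataset meta-features ++ config features
    nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 1),
)

def bpr_loss(data_feats, pos_cfg, neg_cfg):
    # Score the observed (positive) config and a sampled (negative) config
    # for the same dataset, and push the positive score above the negative.
    s_pos = scorer(torch.cat([data_feats, pos_cfg], dim=-1))
    s_neg = scorer(torch.cat([data_feats, neg_cfg], dim=-1))
    return -F.logsigmoid(s_pos - s_neg).mean()

loss = bpr_loss(torch.randn(32, 8), torch.randn(32, 16), torch.randn(32, 16))
loss.backward()
```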
- Towards End-to-end Video-based Eye-Tracking [50.0630362419371]
Estimating eye-gaze from images alone is a challenging task due to unobservable person-specific factors.
We propose a novel dataset and accompanying method which aims to explicitly learn these semantic and temporal relationships.
We demonstrate that fusing information from visual stimuli and eye images can achieve performance similar to figures reported in the literature.
arXiv Detail & Related papers (2020-07-26T12:39:15Z)
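As a rough illustration of the stimulus/eye-image fusion mentioned above, the sketch below concatenates two convolutional branches before regressing a 2D gaze point. The architecture is an assumption, not the paper's network.
```python
# Hypothetical two-stream fusion sketch; sizes are assumptions.
import torch
import torch.nn as nn

class GazeFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.eye = branch(1)            # grayscale eye patch
        self.scene = branch(3)          # RGB stimulus frame
        self.head = nn.Linear(64, 2)    # 2D gaze point on the screen

    def forward(self, eye_img, scene_img):
        fused = torch.cat([self.eye(eye_img), self.scene(scene_img)], dim=-1)
        return self.head(fused)

net = GazeFusionNet()
gaze = net(torch.randn(1, 1, 36, 60), torch.randn(1, 3, 128, 128))
```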