Semantic Snapping for Guided Multi-View Visualization Design
- URL: http://arxiv.org/abs/2109.08384v1
- Date: Fri, 17 Sep 2021 07:40:56 GMT
- Title: Semantic Snapping for Guided Multi-View Visualization Design
- Authors: Yngve S. Kristiansen, Laura Garrison and Stefan Bruckner
- Abstract summary: We present semantic snapping, an approach to help non-expert users design effective multi-view visualizations.
Our method uses an on-the-fly procedure to detect and suggest resolutions for conflicting, misleading, or ambiguous designs.
- Score: 6.8323414329956265
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Visual information displays are typically composed of multiple visualizations
that are used to facilitate an understanding of the underlying data. A common
example is the dashboard, which is frequently used in domains such as finance,
process monitoring and business intelligence. However, users may not be aware
of existing guidelines and lack expert design knowledge when composing such
multi-view visualizations. In this paper, we present semantic snapping, an
approach to help non-expert users design effective multi-view visualizations
from sets of pre-existing views. When a particular view is placed on a canvas,
it is "aligned" with the remaining views -- not with respect to its geometric
layout, but based on aspects of the visual encoding itself, such as how data
dimensions are mapped to channels. Our method uses an on-the-fly procedure to
detect and suggest resolutions for conflicting, misleading, or ambiguous
designs, as well as to provide suggestions for alternative presentations. With
this approach, users can be guided to avoid common pitfalls encountered when
composing visualizations. Our provided examples and case studies demonstrate
the usefulness and validity of our approach.
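To make the core idea more concrete, below is a minimal sketch (illustrative only, not the authors' implementation) of how the visual encodings of views placed on a shared canvas might be compared. Each view is assumed to be described, Vega-Lite-style, as a mapping from channels (x, y, color) to data dimensions; the conflict rules and the "snap" suggestion are simplified assumptions.

```python
# Minimal sketch (illustrative only, not the authors' implementation) of
# encoding-level comparison between views on a shared canvas. Views are assumed
# to be simple channel -> data-dimension mappings; the rules below are assumptions.

from typing import Dict, List

ViewSpec = Dict[str, str]  # e.g. {"x": "date", "y": "price", "color": "region"}


def find_conflicts(new_view: ViewSpec, existing: List[ViewSpec]) -> List[str]:
    """Flag encoding mismatches between a newly placed view and existing views."""
    messages = []
    for i, view in enumerate(existing):
        for channel, field in new_view.items():
            if channel in view and view[channel] != field:
                # Same channel carries different data dimensions -> ambiguous reading.
                messages.append(
                    f"view {i}: channel '{channel}' shows '{view[channel]}', "
                    f"but the new view uses it for '{field}'"
                )
            elif field in view.values() and view.get(channel) != field:
                # Same data dimension sits on a different channel -> inconsistent.
                messages.append(
                    f"view {i}: dimension '{field}' is encoded on another channel"
                )
    return messages


def suggest_snap(new_view: ViewSpec, reference: ViewSpec) -> ViewSpec:
    """Suggest re-mapping shared dimensions onto the channels used by the reference view."""
    ref_channel = {field: channel for channel, field in reference.items()}
    return {ref_channel.get(field, channel): field for channel, field in new_view.items()}


if __name__ == "__main__":
    dashboard = [{"x": "date", "y": "price", "color": "region"}]
    candidate = {"y": "date", "x": "price"}  # axes swapped relative to the dashboard
    for msg in find_conflicts(candidate, dashboard):
        print("conflict:", msg)
    print("suggested snap:", suggest_snap(candidate, dashboard[0]))
```

In the paper itself, the detection and resolution procedure runs on the fly as views are placed on the canvas; the sketch above only illustrates the kind of encoding-level comparison that such an approach operates on.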
Related papers
- Visualizing Extensions of Argumentation Frameworks as Layered Graphs [15.793271603711014]
We introduce a new visualization technique that draws an AF, together with an extension, as a 3-layer graph layout.
Our technique helps the user more easily explore the visualized AF, better understand extensions, and verify algorithms for computing semantics.
arXiv Detail & Related papers (2024-09-09T09:29:53Z) - Beyond Mask: Rethinking Guidance Types in Few-shot Segmentation [67.35274834837064]
We develop a universal vision-language framework (UniFSS) to integrate prompts from text, mask, box, and image.
UniFSS significantly outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2024-07-16T08:41:01Z) - Freeview Sketching: View-Aware Fine-Grained Sketch-Based Image Retrieval [85.73149096516543]
We address the choice of viewpoint during sketch creation in Fine-Grained Sketch-Based Image Retrieval (FG-SBIR).
A pilot study highlights the system's struggle when query-sketches differ in viewpoint from target instances.
To reconcile this, we advocate for a view-aware system, seamlessly accommodating both view-agnostic and view-specific tasks.
arXiv Detail & Related papers (2024-07-01T21:20:44Z) - VERA: Generating Visual Explanations of Two-Dimensional Embeddings via Region Annotation [0.0]
Visual Explanations via Region Annotation (VERA) is an automatic embedding-annotation approach that generates visual explanations for any two-dimensional embedding.
VERA produces informative explanations that characterize distinct regions in the embedding space, allowing users to gain an overview of the embedding landscape at a glance.
We illustrate the usage of VERA on a real-world data set and validate the utility of our approach with a comparative user study.
arXiv Detail & Related papers (2024-06-07T10:23:03Z) - POV: Prompt-Oriented View-Agnostic Learning for Egocentric Hand-Object Interaction in the Multi-View World [59.545114016224254]
Humans are good at translating third-person observations of hand-object interactions into an egocentric view.
We propose a Prompt-Oriented View-agnostic learning framework, which enables this view adaptation with few egocentric videos.
arXiv Detail & Related papers (2024-03-09T09:54:44Z) - Visual Concept-driven Image Generation with Text-to-Image Diffusion Model [65.96212844602866]
Text-to-image (TTI) models have demonstrated impressive results in generating high-resolution images of complex scenes.
Recent approaches have extended these methods with personalization techniques that allow them to integrate user-illustrated concepts.
However, the ability to generate images with multiple interacting concepts, such as human subjects, as well as concepts that may be entangled within one or across multiple image illustrations, remains elusive.
We propose a concept-driven TTI personalization framework that addresses these core challenges.
arXiv Detail & Related papers (2024-02-18T07:28:37Z) - Leveraging Open-Vocabulary Diffusion to Camouflaged Instance Segmentation [59.78520153338878]
Text-to-image diffusion techniques have shown exceptional capability of producing high-quality images from text descriptions.
We propose a method built upon a state-of-the-art diffusion model, empowered by open-vocabulary to learn multi-scale textual-visual features for camouflaged object representations.
arXiv Detail & Related papers (2023-12-29T07:59:07Z) - Multispectral Contrastive Learning with Viewmaker Networks [8.635434871127512]
We focus on applying contrastive learning approaches to a variety of remote sensing datasets.
We show that Viewmaker networks are promising for producing views in this setting without requiring extensive domain knowledge and trial and error.
arXiv Detail & Related papers (2023-02-11T18:44:12Z) - Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z) - Generalized Multi-view Shared Subspace Learning using View Bootstrapping [43.027427742165095]
A key objective in multi-view learning is to model the information common to multiple parallel views of a class of objects/events in order to improve downstream learning tasks.
We present a neural method based on multi-view correlation to capture the information shared across a large number of views by subsampling them in a view-agnostic manner during training.
Experiments on spoken word recognition, 3D object classification and pose-invariant face recognition demonstrate the robustness of view bootstrapping to model a large number of views.
arXiv Detail & Related papers (2020-05-12T20:35:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.