Collection Space Navigator: An Interactive Visualization Interface for
Multidimensional Datasets
- URL: http://arxiv.org/abs/2305.06809v1
- Date: Thu, 11 May 2023 14:03:26 GMT
- Authors: Tillmann Ohm, Mar Canet Solà, Andres Karjus, Maximilian Schich
- Abstract summary: Collection Space Navigator (CSN) is a browser-based visualization tool to explore, research, and curate large collections of visual digital artifacts.
CSN provides a customizable interface that combines two-dimensional projections with a set of multidimensional filters.
Users can reconfigure the interface to fit their own data and research needs, including projections and filter controls.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We introduce the Collection Space Navigator (CSN), a browser-based
visualization tool to explore, research, and curate large collections of visual
digital artifacts that are associated with multidimensional data, such as
vector embeddings or tables of metadata. Media objects such as images are often
encoded as numerical vectors, e.g. based on metadata or by using machine
learning to embed image information. Yet, while such procedures are widespread
for a range of applications, it remains a challenge to explore, analyze, and
understand the resulting multidimensional spaces in a comprehensive manner.
Dimensionality reduction techniques such as t-SNE or UMAP often serve to
project high-dimensional data into low-dimensional visualizations, yet these
require interpretation themselves, as the remaining dimensions are typically
abstract. Here, the Collection Space Navigator provides a customizable
interface that combines two-dimensional projections with a set of configurable
multidimensional filters. As a result, users can view and investigate
collections by zooming and scaling, by transitioning between projections, by
filtering dimensions via range sliders, and by applying advanced text filters.
Insights gained during the interaction can be fed back into the original data
via ad hoc exports of filtered metadata and projections. This paper comes with
a functional showcase demo using a large digitized collection of classical
Western art. The Collection Space Navigator is open source. Users can
reconfigure the interface to fit their own data and research needs, including
projections and filter controls. The CSN is ready to serve a broad community.
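For readers preparing their own collections, the pipeline the abstract describes (embed, project, filter, export) can be sketched in a few lines of Python. The file names and column names below are hypothetical placeholders, and the snippet uses the umap-learn and pandas libraries rather than the CSN's own ingestion scripts:

```python
import numpy as np
import pandas as pd
import umap  # pip install umap-learn

# Load precomputed image embeddings and their metadata
# (file names here are hypothetical placeholders).
embeddings = np.load("embeddings.npy")   # shape: (n_items, n_dims)
metadata = pd.read_csv("metadata.csv")   # one row per item

# Project the high-dimensional vectors to 2D, as the abstract
# describes for t-SNE or UMAP.
projection = umap.UMAP(n_components=2, random_state=42).fit_transform(embeddings)
metadata["x"], metadata["y"] = projection[:, 0], projection[:, 1]

# Analogue of the interface's range-slider filtering: keep only
# items whose first embedding dimension falls within a range.
lo, hi = -1.0, 1.0
mask = (embeddings[:, 0] >= lo) & (embeddings[:, 0] <= hi)

# Feed insights back into the data via an ad hoc export of the
# filtered metadata together with the projection coordinates.
metadata[mask].to_csv("filtered_export.csv", index=False)
```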
Related papers
- VERA: Generating Visual Explanations of Two-Dimensional Embeddings via Region Annotation [0.0]
Visual Explanations via Region Annotation (VERA) is an automatic embedding-annotation approach that generates visual explanations for any two-dimensional embedding.
VERA produces informative explanations that characterize distinct regions in the embedding space, allowing users to gain an overview of the embedding landscape at a glance.
We illustrate the usage of VERA on a real-world data set and validate the utility of our approach with a comparative user study.
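As a rough illustration of what embedding-region annotation involves (a simplified stand-in, not VERA's actual algorithm), one can partition the 2D embedding into regions and describe each region by its dominant metadata label:

```python
import numpy as np
from sklearn.cluster import KMeans

def annotate_regions(points_2d, labels, n_regions=5):
    """Cluster 2D embedding points and name each region by the most
    frequent label among its members (illustrative only)."""
    km = KMeans(n_clusters=n_regions, n_init=10).fit(points_2d)
    annotations = {}
    for r in range(n_regions):
        members = np.asarray(labels)[km.labels_ == r]
        values, counts = np.unique(members, return_counts=True)
        annotations[r] = (km.cluster_centers_[r], values[counts.argmax()])
    return annotations
```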
arXiv Detail & Related papers (2024-06-07T10:23:03Z)
- Open-Vocabulary Camouflaged Object Segmentation [66.94945066779988]
We introduce a new task, open-vocabulary camouflaged object segmentation (OVCOS)
We construct a large-scale complex scene dataset (OVCamo) containing 11,483 hand-selected images with fine annotations and corresponding object classes.
By integrating the guidance of class semantic knowledge and the supplement of visual structure cues from the edge and depth information, the proposed method can efficiently capture camouflaged objects.
arXiv Detail & Related papers (2023-11-19T06:00:39Z)
- Multiview Transformer: Rethinking Spatial Information in Hyperspectral Image Classification [43.17196501332728]
Identifying the land cover category for each pixel in a hyperspectral image relies on spectral and spatial information.
In this article, we observe that scene-specific but non-essential correlations may be recorded in an HSI cuboid.
We propose a multiview transformer for HSI classification, which consists of multiview principal component analysis (MPCA), a spectral encoder-decoder (SED), and a spatial-pooling tokenization transformer (SPTT).
arXiv Detail & Related papers (2023-10-11T04:25:24Z)
- MMRDN: Consistent Representation for Multi-View Manipulation Relationship Detection in Object-Stacked Scenes [62.20046129613934]
We propose a novel multi-view fusion framework, the multi-view MRD network (MMRDN).
We project the 2D data from different views into a common hidden space and fit the embeddings with a set of von Mises-Fisher distributions.
We select a set of $K$ Maximum Vertical Neighbors (KMVN) points from the point cloud of each object pair, which encodes the relative position of these two objects.
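For context on the von Mises-Fisher step, a standard way to fit a vMF distribution to unit vectors uses the mean direction together with the closed-form concentration approximation of Banerjee et al. (2005); the sketch below is generic and not MMRDN's implementation:

```python
import numpy as np

def fit_vmf(x):
    """Fit a von Mises-Fisher distribution to rows of x.

    Returns the mean direction mu and concentration kappa, using the
    closed-form approximation of Banerjee et al. (2005).
    """
    x = x / np.linalg.norm(x, axis=1, keepdims=True)  # ensure unit norm
    n, d = x.shape
    resultant = x.sum(axis=0)
    r_norm = np.linalg.norm(resultant)
    mu = resultant / r_norm            # mean direction
    r_bar = r_norm / n                 # mean resultant length
    kappa = r_bar * (d - r_bar**2) / (1.0 - r_bar**2)
    return mu, kappa
```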
arXiv Detail & Related papers (2023-04-25T05:55:29Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Scene-Aware Ambidextrous Bin Picking via Physics-based Metaverse Synthesis [72.85526892440251]
We introduce MetaGraspNet, a large-scale photo-realistic bin picking dataset constructed via physics-based metaverse synthesis.
The proposed dataset contains 217k RGBD images across 82 different article types, with full annotations for object detection, amodal perception, keypoint detection, manipulation order and ambidextrous grasp labels for a parallel-jaw and vacuum gripper.
We also provide a real dataset consisting of over 2.3k fully annotated high-quality RGBD images, divided into 5 difficulty levels and an unseen object set, to evaluate different object and layout properties.
arXiv Detail & Related papers (2022-08-08T08:15:34Z)
- ReViVD: Exploration and Filtering of Trajectories in an Immersive Environment using 3D Shapes [3.308743964406687]
We present ReViVD, a tool for exploring and filtering large trajectory-based datasets using virtual reality.
ReViVD's novelty lies in using simple 3D shapes as queries for users to select and filter groups of trajectories.
We demonstrate the use of ReViVD in different application domains, from GPS position tracking to simulated data.
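The core mechanism, selecting trajectories that pass through a user-placed 3D shape, reduces to a point-in-volume test; the sketch below uses an axis-aligned box as the query shape (a generic illustration, not ReViVD's VR implementation):

```python
import numpy as np

def passes_through_box(trajectory, box_min, box_max):
    """Return True if any point of the (n, 3) trajectory lies inside
    the axis-aligned box spanned by box_min and box_max (shape (3,))."""
    inside = np.all((trajectory >= box_min) & (trajectory <= box_max), axis=1)
    return bool(inside.any())

def filter_trajectories(trajectories, box_min, box_max):
    """Keep only trajectories that intersect the query box."""
    return [t for t in trajectories
            if passes_through_box(np.asarray(t), box_min, box_max)]
```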
arXiv Detail & Related papers (2022-02-21T21:58:41Z)
- UnProjection: Leveraging Inverse-Projections for Visual Analytics of High-Dimensional Data [63.74032987144699]
We present NNInv, a deep learning technique with the ability to approximate the inverse of any projection or mapping.
NNInv learns to reconstruct high-dimensional data from any arbitrary point on a 2D projection space, giving users the ability to interact with the learned high-dimensional representation in a visual analytics system.
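The underlying idea, learning a regression from 2D projection coordinates back to the original high-dimensional vectors, can be sketched with a small multilayer perceptron; this uses scikit-learn as an illustrative stand-in rather than the paper's NNInv architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_inverse_projection(P, X):
    """Learn a mapping from 2D projection space back to data space.

    P: (n, 2) projection coordinates (e.g. from UMAP or t-SNE).
    X: (n, d) original high-dimensional vectors.
    """
    inv = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=2000)
    inv.fit(P, X)
    return inv

# Any point picked on the 2D canvas can then be "unprojected":
# x_hat = inv.predict(np.array([[0.3, -1.2]]))
```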
arXiv Detail & Related papers (2021-11-02T17:11:57Z)
- Towards data-driven filters in Paraview [0.0]
We develop filters that expose the abilities of pre-trained machine learning models to the visualization system.
The filters transform the input data by feeding it into the model and then provide the model's output as input to the remaining visualization pipeline.
A series of simplistic use cases for segmentation and classification on image and fluid data is presented.
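In ParaView, such a filter can be prototyped as a Programmable Filter whose script reads the input arrays, runs them through the model, and appends the prediction for downstream pipeline stages. The array name 'pressure' and the run_model() call below are hypothetical placeholders:

```python
# Script body of a ParaView Programmable Filter (a sketch; ParaView
# provides the `inputs` and `output` objects in this environment).
import numpy as np

field = inputs[0].PointData['pressure']           # read an input array
labels = run_model(np.asarray(field))             # pre-trained model inference
output.PointData.append(labels, 'segmentation')   # expose to the pipeline
```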
arXiv Detail & Related papers (2021-08-11T13:02:22Z)
- OdoViz: A 3D Odometry Visualization and Processing Tool [0.0]
OdoViz is a reactive web-based tool for 3D visualization and processing of autonomous vehicle datasets.
The system includes functionality for loading, inspecting, visualizing, and processing GPS/INS poses, point clouds and camera images.
arXiv Detail & Related papers (2021-07-15T18:37:19Z)
- Salient Objects in Clutter [130.63976772770368]
This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets.
This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets.
We propose a new high-quality dataset and update the previous saliency benchmark.
arXiv Detail & Related papers (2021-05-07T03:49:26Z)
- Spatial Priming for Detecting Human-Object Interactions [89.22921959224396]
We present a method for exploiting spatial layout information for detecting human-object interactions (HOIs) in images.
The proposed method consists of a layout module which primes a visual module to predict the type of interaction between a human and an object.
The proposed model reaches an mAP of 24.79% on the HICO-Det dataset, about 2.8 absolute points higher than the current state-of-the-art.
arXiv Detail & Related papers (2020-04-09T23:20:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.