A Comparison of Spatiotemporal Visualizations for 3D Urban Analytics
- URL: http://arxiv.org/abs/2208.05370v1
- Date: Wed, 10 Aug 2022 14:38:13 GMT
- Title: A Comparison of Spatiotemporal Visualizations for 3D Urban Analytics
- Authors: Roberta Mota, Nivan Ferreira, Julio Daniel Silva, Marius Horga, Marcos
Lage, Luis Ceferino, Usman Alim, Ehud Sharlin, Fabio Miranda
- Abstract summary: This paper investigates how effective 3D urban visual analytics are at supporting spatiotemporal analysis on building surfaces.
We compare four representative visual designs used to visualize 3D spatiotemporal urban data: spatial juxtaposition, temporal juxtaposition, linked view, and embedded view.
Our results demonstrate that participants were more accurate using plot-based visualizations but faster using color-coded visualizations.
- Score: 7.157706457130007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent technological innovations have led to an increase in the availability
of 3D urban data, such as shadow, noise, solar potential, and earthquake
simulations. These spatiotemporal datasets create opportunities for new
visualizations to engage experts from different domains to study the dynamic
behavior of urban spaces in this underexplored dimension. However, designing
3D spatiotemporal urban visualizations is challenging, as it requires visual
strategies to support analysis of time-varying data referent to the city
geometry. Although different visual strategies have been used in 3D urban
visual analytics, the question of how effective these visual designs are at
supporting spatiotemporal analysis on building surfaces remains open. To
investigate this, in this paper we first contribute a series of analytical
tasks elicited after interviews with practitioners from three urban domains. We
also contribute a quantitative user study comparing the effectiveness of four
representative visual designs used to visualize 3D spatiotemporal urban data:
spatial juxtaposition, temporal juxtaposition, linked view, and embedded view.
Participants performed a series of tasks that required them to identify extreme
values on building surfaces over time. Tasks varied in granularity for both
space and time dimensions. Our results demonstrate that participants were more
accurate using plot-based visualizations (linked view, embedded view) but
faster using color-coded visualizations (spatial juxtaposition, temporal
juxtaposition). Our results also show that, with increasing task complexity,
plot-based visualizations perform better in preserving efficiency (time,
accuracy) compared to color-coded visualizations. Based on our findings, we
present a set of takeaways with design recommendations for 3D spatiotemporal
urban visualizations for researchers and practitioners.
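To make the compared designs concrete, the sketch below (synthetic data only; not from the paper's study materials) contrasts a color-coded, temporal-juxtaposition-style view with a plot-based time-series view for a hypothetical facade grid:

    # Minimal sketch, assuming an 8x8 grid of building-facade cells with
    # normalized values (e.g., shadow) over 6 timesteps. Purely illustrative.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    n_steps = 6
    data = rng.random((n_steps, 8, 8))  # (time, facade rows, facade cols)

    fig, axes = plt.subplots(2, n_steps, figsize=(2 * n_steps, 4))

    # Top row: temporal juxtaposition -- one color-coded snapshot per timestep.
    for t in range(n_steps):
        axes[0, t].imshow(data[t], vmin=0, vmax=1, cmap="viridis")
        axes[0, t].set_title(f"t={t}")
        axes[0, t].axis("off")

    # Bottom: a plot-based view -- the full time series of a few cells in one
    # chart, which is what makes temporal extrema directly readable.
    for ax in axes[1]:
        ax.remove()
    series_ax = fig.add_subplot(2, 1, 2)
    for i, j in [(0, 0), (3, 4), (7, 7)]:
        series_ax.plot(data[:, i, j], marker="o", label=f"cell ({i},{j})")
    series_ax.set_xlabel("timestep")
    series_ax.set_ylabel("value")
    series_ax.legend()
    plt.tight_layout()
    plt.show()

Reading an extreme value over time requires scanning all snapshots in the color-coded row, but only one glance at the line chart, which mirrors the accuracy/speed trade-off the study reports.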
Related papers
- Efficient Depth-Guided Urban View Synthesis [52.841803876653465]
We introduce Efficient Depth-Guided Urban View Synthesis (EDUS) for fast feed-forward inference and efficient per-scene fine-tuning.
EDUS exploits noisy predicted geometric priors as guidance to enable generalizable urban view synthesis from sparse input images.
Our results indicate that EDUS achieves state-of-the-art performance in sparse view settings when combined with fast test-time optimization.
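As a loose illustration of the geometric step underlying depth-guided synthesis in general (not EDUS's actual pipeline), this sketch backprojects a depth map with the camera intrinsics and reprojects it into a novel view:

    # Minimal sketch, assuming pinhole cameras with shared intrinsics K.
    import numpy as np

    def reproject(depth, K, R, t):
        """Map source-view pixels to target-view pixels via depth.
        depth: (H, W) metric depth; K: (3, 3) intrinsics;
        R, t: rotation and translation from source to target camera frame."""
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
        rays = np.linalg.inv(K) @ pix              # unit-depth rays, 3 x HW
        pts = rays * depth.reshape(1, -1)          # 3D points in source frame
        pts_tgt = R @ pts + t[:, None]             # transform to target frame
        proj = K @ pts_tgt
        return (proj[:2] / proj[2]).T.reshape(H, W, 2)  # target pixel coords

    # Toy usage: a fronto-parallel plane at 5 m, relative translation of 0.1 m.
    K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
    depth = np.full((64, 64), 5.0)
    coords = reproject(depth, K, np.eye(3), np.array([0.1, 0.0, 0.0]))
    print(coords[32, 32])  # u shifts by 100 * 0.1 / 5 = 2 px -> [34., 32.]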
arXiv Detail & Related papers (2024-07-17T08:16:25Z)
- The State of the Art in Visual Analytics for 3D Urban Data [5.056350278679641]
Urbanization has amplified the importance of three-dimensional structures in urban environments.
With the growing availability of 3D urban data, numerous studies have focused on developing visual analysis techniques tailored to the unique characteristics of urban environments.
Incorporating the third dimension into visual analytics introduces additional challenges in designing effective visual tools to tackle urban data's diverse complexities.
arXiv Detail & Related papers (2024-04-24T16:50:42Z)
- HUGS: Holistic Urban 3D Scene Understanding via Gaussian Splatting [53.6394928681237]
Holistic understanding of urban scenes based on RGB images is a challenging yet important problem.
Our main idea involves the joint optimization of geometry, appearance, semantics, and motion using a combination of static and dynamic 3D Gaussians.
Our approach offers the ability to render new viewpoints in real-time, yielding 2D and 3D semantic information with high accuracy.
arXiv Detail & Related papers (2024-03-19T13:39:05Z)
- RadOcc: Learning Cross-Modality Occupancy Knowledge through Rendering Assisted Distillation [50.35403070279804]
3D occupancy prediction is an emerging task that aims to estimate the occupancy states and semantics of 3D scenes using multi-view images.
We propose RadOcc, a Rendering assisted distillation paradigm for 3D Occupancy prediction.
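The distillation component can be sketched generically; the loss below is standard soft-target distillation and stands in for, but is not, RadOcc's rendering-assisted formulation:

    # Minimal sketch: a camera-based student matches soft per-voxel predictions
    # from a stronger teacher, alongside the ground-truth loss. Class count and
    # shapes are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        """student/teacher_logits: (N_voxels, n_classes); labels: (N_voxels,)."""
        # Soft-target term: KL divergence between temperature-scaled distributions.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        # Hard-target term: ordinary cross-entropy against occupancy labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    # Toy usage with random tensors standing in for per-voxel predictions.
    s = torch.randn(1024, 18, requires_grad=True)
    t = torch.randn(1024, 18)
    y = torch.randint(0, 18, (1024,))
    print(distillation_loss(s, t, y).item())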
arXiv Detail & Related papers (2023-12-19T03:39:56Z)
- Unified Data Management and Comprehensive Performance Evaluation for Urban Spatial-Temporal Prediction [Experiment, Analysis & Benchmark] [78.05103666987655]
This work addresses challenges in accessing and utilizing diverse urban spatial-temporal datasets.
We introduce atomic files, a unified storage format designed for urban spatial-temporal big data, and validate its effectiveness on 40 diverse datasets.
We conduct extensive experiments using diverse models and datasets, establishing a performance leaderboard and identifying promising research directions.
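A hedged sketch of the general idea of a unified tabular store for spatio-temporal records follows; the column names are illustrative assumptions, not the paper's atomic-file schema:

    # Minimal sketch: one row per (timestamp, location) observation, pivoted
    # into the (time x location) matrix most prediction models consume.
    import io
    import pandas as pd

    raw = io.StringIO(
        "dyna_id,type,time,entity_id,traffic_speed\n"
        "0,state,2023-01-01T00:00:00Z,101,54.2\n"
        "1,state,2023-01-01T00:00:00Z,102,47.9\n"
        "2,state,2023-01-01T00:05:00Z,101,52.8\n"
        "3,state,2023-01-01T00:05:00Z,102,49.1\n"
    )
    df = pd.read_csv(raw, parse_dates=["time"])
    matrix = df.pivot(index="time", columns="entity_id", values="traffic_speed")
    print(matrix)

The appeal of such a format is that every dataset, regardless of source, reduces to the same few tables, so loaders and benchmarks can be written once.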
arXiv Detail & Related papers (2023-08-24T16:20:00Z)
- The Urban Toolkit: A Grammar-based Framework for Urban Visual Analytics [5.674216760436341]
The complex nature of urban issues and the overwhelming amount of available data have posed significant challenges in translating these efforts into actionable insights.
When analyzing a feature of interest, an urban expert must transform, integrate, and visualize different thematic (e.g., sunlight access, demographic) and physical (e.g., buildings, street networks) data layers.
This makes the entire visual data exploration and system implementation difficult for programmers and also sets a high entry barrier for urban experts outside of computer science.
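The grammar-based idea can be illustrated with a declarative spec; the keys and file names below are invented for illustration and are not UTK's actual grammar:

    # Minimal sketch: the analyst declares which data layers to compose and how
    # to encode them, and a renderer builds the linked views from the spec.
    spec = {
        "views": [
            {
                "map": {"camera": "default", "layers": ["buildings", "shadow"]},
                "plots": [
                    {"type": "line", "layer": "shadow", "aggregate": "mean"}
                ],
            }
        ],
        "layers": {
            "buildings": {"source": "buildings.geojson", "geometry": "3d"},
            "shadow": {"source": "shadow_summer.csv", "linked_to": "buildings"},
        },
    }
    # render(spec)  # hypothetical entry point provided by the framework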
arXiv Detail & Related papers (2023-08-15T13:43:04Z)
- DeepVisualInsight: Time-Travelling Visualization for Spatio-Temporal Causality of Deep Classification Training [7.4940788786485095]
We propose a time-travelling visual solution, DeepVisualInsight, aiming to manifest spatio-temporal causality while training a deep learning image classifier.
We show how gradient-descent sampling techniques can influence and reshape the layout of learnt input representation and the boundaries in consecutive epochs.
Our experiments show that, comparing to baseline approaches, we achieve the best visualization performance regarding the spatial/temporal properties and visualization efficiency.
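A minimal sketch of the underlying idea, assuming (not reproducing) the paper's method: project per-epoch embeddings into one shared 2D space so layouts are comparable across training time:

    # Stand-in embeddings: two classes that gradually separate over epochs.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    n_epochs, n_samples, dim = 5, 200, 64
    labels = rng.integers(0, 2, n_samples)
    epochs = [
        rng.normal(0, 1, (n_samples, dim)) + (e * 0.5) * labels[:, None]
        for e in range(n_epochs)
    ]

    # Fit ONE projection on all epochs jointly, so coordinates stay comparable
    # from epoch to epoch instead of being re-laid-out independently.
    pca = PCA(n_components=2).fit(np.vstack(epochs))
    for e, emb in enumerate(epochs):
        xy = pca.transform(emb)
        sep = abs(xy[labels == 0, 0].mean() - xy[labels == 1, 0].mean())
        print(f"epoch {e}: class separation along PC1 = {sep:.2f}")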
arXiv Detail & Related papers (2021-12-31T07:05:31Z)
- Spatio-temporal Self-Supervised Representation Learning for 3D Point Clouds [96.9027094562957]
We introduce a spatio-temporal representation learning (STRL) framework, capable of learning from unlabeled 3D point clouds.
Inspired by how infants learn from visual data in the wild, we explore rich cues derived from the 3D data.
STRL takes two temporally-related frames from a 3D point cloud sequence as input, transforms them with spatial data augmentation, and learns an invariant representation in a self-supervised manner.
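A minimal sketch of this setup under stated assumptions (a toy PointNet-style encoder and a plain cosine objective, not STRL's actual architecture):

    import math
    import random
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyPointEncoder(nn.Module):
        """PointNet-style stand-in: per-point MLP followed by a max pool."""
        def __init__(self, dim=64):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, dim))
        def forward(self, pts):                      # pts: (B, N, 3)
            return self.mlp(pts).max(dim=1).values   # (B, dim)

    def augment(pts):
        """Spatial augmentation: random rotation about z plus point jitter."""
        theta = random.uniform(0, 2 * math.pi)
        c, s = math.cos(theta), math.sin(theta)
        R = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        return pts @ R.T + 0.01 * torch.randn_like(pts)

    enc = TinyPointEncoder()
    frame_t  = torch.randn(4, 1024, 3)                      # frame at time t
    frame_t1 = frame_t + 0.02 * torch.randn_like(frame_t)   # nearby frame, t+1

    z1 = enc(augment(frame_t))
    z2 = enc(augment(frame_t1))
    loss = -F.cosine_similarity(z1, z2, dim=-1).mean()  # invariance objective
    loss.backward()
    print(float(loss))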
arXiv Detail & Related papers (2021-09-01T04:17:11Z)
- Self-supervised Video Representation Learning by Uncovering Spatio-temporal Statistics [74.6968179473212]
This paper proposes a novel pretext task to address the self-supervised learning problem.
We compute a series of spatio-temporal statistical summaries, such as the spatial location and dominant direction of the largest motion.
A neural network is built and trained to yield the statistical summaries given the video frames as inputs.
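A hedged sketch of such a pretext target: compute a coarse motion statistic from raw frames (here via simple frame differencing) that a network could be trained to regress:

    # Minimal sketch, assuming grayscale frames; directions are in image
    # coordinates (positive y is down). Illustrative, not the paper's method.
    import numpy as np

    def dominant_motion(frame_a, frame_b, grid=4):
        """Return the grid block with the largest frame difference and a coarse
        motion direction from the shift of the intensity centroid."""
        H, W = frame_a.shape
        diff = np.abs(frame_b - frame_a)
        # Split the difference image into grid x grid blocks; pick the largest.
        blocks = diff.reshape(grid, H // grid, grid, W // grid).sum(axis=(1, 3))
        block = np.unravel_index(blocks.argmax(), blocks.shape)
        # Centroid shift between frames approximates the motion direction.
        ys, xs = np.mgrid[0:H, 0:W]
        def centroid(img):
            w = img.sum()
            return np.array([(ys * img).sum() / w, (xs * img).sum() / w])
        dy, dx = centroid(frame_b) - centroid(frame_a)
        angle = np.degrees(np.arctan2(dy, dx)) % 360
        return block, angle

    # Toy usage: a bright square moving right by 4 pixels.
    a = np.zeros((32, 32)); a[10:14, 8:12] = 1.0
    b = np.zeros((32, 32)); b[10:14, 12:16] = 1.0
    print(dominant_motion(a, b))  # block (1, 1), direction ~0 deg (rightward)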
arXiv Detail & Related papers (2020-08-31T08:31:56Z)