Semantic Zoom and Mini-Maps for Software Cities
- URL: http://arxiv.org/abs/2510.00003v1
- Date: Tue, 26 Aug 2025 12:49:29 GMT
- Title: Semantic Zoom and Mini-Maps for Software Cities
- Authors: Malte Hansen, Jens Bamberg, Noe Baumann, Wilhelm Hasselbring
- Abstract summary: We present two approaches to address the challenge of visual scalability in 3D software cities. First, we present an approach to semantic zoom, in which the graphical representation of the software landscape changes based on the virtual camera's distance from visual objects. Second, we augment the visualization with a miniature two-dimensional top-view projection called a mini-map.
- Score: 0.6999740786886536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Software visualization tools can facilitate program comprehension by providing visual metaphors, abstractions that reduce the amount of textual data that must be processed mentally. One way they do this is by enabling developers to build an internal representation of the visualized software and its architecture. However, as the amount of displayed data increases, the visualization itself can become harder to comprehend. The ability to display both small and large amounts of data in visualizations is called visual scalability. In this paper, we present two approaches to address the challenge of visual scalability in 3D software cities. First, we present an approach to semantic zoom, in which the graphical representation of the software landscape changes based on the virtual camera's distance from visual objects. Second, we augment the visualization with a miniature two-dimensional top-view projection called a mini-map. We demonstrate our approach with an open-source implementation in our software visualization tool ExplorViz, a web-based tool that uses the 3D city metaphor and focuses on live trace visualization. We evaluated our approaches in two separate user studies. The results indicate that semantic zoom and the mini-map are both useful additions, and user feedback suggests they are especially helpful for large software landscapes and collaborative software exploration. The studies indicate good usability of our implemented approaches, although they also revealed some shortcomings in our implementations, which we plan to address in future work. Video URL: https://youtu.be/LYtUeWvizjU
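The two approaches described in the abstract can be sketched in code: semantic zoom as a distance-based rule that picks a richer representation the closer the virtual camera is to an object, and the mini-map as an orthographic top-view projection of 3D positions onto a 2D plane. The detail levels, thresholds, and function names below are illustrative assumptions for the sketch, not ExplorViz's actual implementation.

```typescript
// Illustrative semantic-zoom sketch: choose a representation for a city
// object based on camera distance. Thresholds are assumed values.
type DetailLevel = "full" | "reduced" | "outline";

interface Vec3 { x: number; y: number; z: number; }

interface CityObject {
  id: string;
  position: Vec3;
}

// Assumed distance thresholds (in world units) separating the detail levels.
const FULL_DETAIL_MAX = 50;
const REDUCED_DETAIL_MAX = 200;

function cameraDistance(cam: Vec3, obj: CityObject): number {
  const dx = cam.x - obj.position.x;
  const dy = cam.y - obj.position.y;
  const dz = cam.z - obj.position.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Semantic zoom: the closer the camera, the richer the representation.
function detailLevelFor(cam: Vec3, obj: CityObject): DetailLevel {
  const d = cameraDistance(cam, obj);
  if (d <= FULL_DETAIL_MAX) return "full";       // e.g. labels and traces
  if (d <= REDUCED_DETAIL_MAX) return "reduced"; // e.g. plain boxes
  return "outline";                              // e.g. footprints only
}

// Mini-map: orthographic top-view projection drops the height axis,
// mapping a 3D world position to 2D mini-map coordinates.
function toMiniMap(obj: CityObject, scale: number): { u: number; v: number } {
  return { u: obj.position.x * scale, v: obj.position.z * scale };
}
```

In a real renderer, `detailLevelFor` would be re-evaluated as the camera moves and would drive which mesh or label set is drawn, while `toMiniMap` would feed a small 2D overlay.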
Related papers
- Latent Sketchpad: Sketching Visual Thoughts to Elicit Multimodal Reasoning in MLLMs [80.2089647067782]
Inspired by how humans use sketching as a form of visual thinking to develop and communicate ideas, we introduce Latent Sketchpad, a framework that equips Multimodal Large Language Models with an internal visual scratchpad. We evaluate the framework on our new dataset, MazePlanning.
arXiv Detail & Related papers (2025-10-28T15:26:20Z)
- Visual Jigsaw Post-Training Improves MLLMs [58.29961336087896]
We introduce Visual Jigsaw, a generic self-supervised post-training framework designed to strengthen visual understanding in multimodal large language models (MLLMs). Visual Jigsaw is formulated as a general ordering task: visual inputs are partitioned, shuffled, and the model must reconstruct the visual information by producing the correct permutation in natural language. Extensive experiments demonstrate substantial improvements in fine-grained perception, temporal reasoning, and 3D spatial understanding.
arXiv Detail & Related papers (2025-09-29T17:59:57Z)
- HTML Structure Exploration in 3D Software Cities [1.1470070927586018]
This paper introduces additions to the web-based live tracing software visualization tool ExplorViz. We add an embedded web view for instrumented applications in the 3D visualization to ease interaction with the given applications. The Document Object Model (DOM) is visualized via a three-dimensional representation of the HTML structure in same-origin contexts.
arXiv Detail & Related papers (2025-08-26T12:52:09Z)
- Do we Really Need Visual Instructions? Towards Visual Instruction-Free Fine-tuning for Large Vision-Language Models [127.38740043393527]
We propose ViFT, a visual instruction-free fine-tuning framework for LVLMs. We require only text-only instructions and image caption data during training to separately learn task-solving and visual perception abilities. Experimental results demonstrate that ViFT achieves state-of-the-art performance on several visual reasoning and visual instruction-following benchmarks.
arXiv Detail & Related papers (2025-02-17T04:38:12Z)
- Visualizing Extensions of Argumentation Frameworks as Layered Graphs [15.793271603711014]
We introduce a new visualization technique that draws an AF, together with an extension, as a 3-layer graph layout. Our technique supports the user in exploring the visualized AF more easily, better understanding extensions, and verifying algorithms for computing semantics.
arXiv Detail & Related papers (2024-09-09T09:29:53Z)
- A Software Visualization Approach for Multiple Visual Output Devices [0.24466725954625887]
We present a novel approach to software visualization with software cities that fills a gap between existing approaches by using multiple displays or projectors. Our web-based live trace visualization tool ExplorViz is extended with a service to synchronize the visualization across multiple browser instances. A preliminary study indicates that this environment can be useful for the collaborative exploration of software cities.
arXiv Detail & Related papers (2024-09-04T11:27:47Z)
- Visualizing Routes with AI-Discovered Street-View Patterns [4.153397474276339]
We propose using semantic latent vectors to quantify visual appearance features. We calculate image similarities among a large set of street-view images and then discover spatial imagery patterns. We present VivaRoutes, an interactive visualization prototype, to show how visualizations leveraging these discovered patterns can help users effectively and interactively explore multiple routes.
arXiv Detail & Related papers (2024-03-30T17:32:26Z)
- GeoVLN: Learning Geometry-Enhanced Visual Representation with Slot Attention for Vision-and-Language Navigation [52.65506307440127]
We propose GeoVLN, which learns a geometry-enhanced visual representation based on slot attention for robust vision-and-language navigation. We employ V&L BERT to learn a cross-modal representation that incorporates both language and vision information.
arXiv Detail & Related papers (2023-05-26T17:15:22Z)
- Efficient Pipelines for Vision-Based Context Sensing [0.24366811507669117]
Vision sources are being deployed worldwide; cameras can be installed at the roadside, in houses, and on mobile platforms. However, vision data collection and analytics remain highly manual today. There are three major challenges for today's vision-based context sensing systems.
arXiv Detail & Related papers (2020-11-01T05:09:13Z)
- What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions [50.435861435121915]
We use human interaction and attention cues to investigate whether we can learn better representations compared to visual-only representations. Our experiments show that our "muscly-supervised" representation outperforms MoCo, a visual-only state-of-the-art method.
arXiv Detail & Related papers (2020-10-16T17:46:53Z)
- Single-View View Synthesis with Multiplane Images [64.46556656209769]
We apply deep learning to generate multiplane images given two or more input images at known viewpoints. Our method learns to predict a multiplane image directly from a single image input. It additionally generates reasonable depth maps and fills in content behind the edges of foreground objects in background layers.
arXiv Detail & Related papers (2020-04-23T17:59:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.