A Software Visualization Approach for Multiple Visual Output Devices
- URL: http://arxiv.org/abs/2409.02620v1
- Date: Wed, 4 Sep 2024 11:27:47 GMT
- Title: A Software Visualization Approach for Multiple Visual Output Devices
- Authors: Malte Hansen, Heiko Bielfeldt, Armin Bernstetter, Tom Kwasnitschka, Wilhelm Hasselbring
- Abstract summary: We present a novel approach to software visualization with software cities that fills a gap between existing approaches by using multiple displays or projectors.
Our web-based live trace visualization tool ExplorViz is extended with a service to synchronize the visualization across multiple browser instances.
A preliminary study indicates that this environment can be useful for collaborative exploration of software cities.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As software systems grow, environments that not only facilitate program comprehension through software visualization but also enable collaborative exploration of software systems become increasingly important. Most approaches to software visualization focus on a single monitor as a visual output device, which offers limited immersion and lacks in potential for collaboration. More recent approaches address augmented and virtual reality environments to increase immersion and enable collaboration to facilitate program comprehension. We present a novel approach to software visualization with software cities that fills a gap between existing approaches by using multiple displays or projectors. Thereby, an increase in screen real estate and new use case scenarios for co-located environments are enabled. Our web-based live trace visualization tool ExplorViz is extended with a service to synchronize the visualization across multiple browser instances. Multiple browser instances can then extend or complement each other's views with respect to a given configuration. The ARENA2, a spatially immersive visualization environment with five projectors, is used to showcase our approach. A preliminary study indicates that this environment can be useful for collaborative exploration of software cities. This publication is accompanied by a video. In addition, our implementation is open source and we invite other researchers to explore and adapt it for their use cases. Video URL: https://youtu.be/OiutBn3zIl8
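The abstract describes browser instances that extend or complement each other's views "with respect to a given configuration", e.g. tiling one software city across the five ARENA2 projectors. The abstract does not specify the synchronization protocol or configuration format, so the following is a hypothetical sketch: all names (`ViewConfig`, `viewportFor`) are illustrative, not part of ExplorViz.

```typescript
// Hypothetical sketch: each browser instance renders one tile of a shared
// scene. A grid configuration assigns every instance a normalized
// sub-viewport; a separate service (not shown) would broadcast camera
// state so all tiles stay in sync.

interface ViewConfig {
  columns: number; // display/projector grid width
  rows: number;    // display/projector grid height
}

interface Viewport {
  x: number;      // left edge, normalized to [0, 1)
  y: number;      // top edge, normalized to [0, 1)
  width: number;  // tile width as a fraction of the full view
  height: number; // tile height as a fraction of the full view
}

// Compute the sub-viewport a given instance should render so that all
// instances together tile one large, contiguous view.
function viewportFor(instance: number, cfg: ViewConfig): Viewport {
  const col = instance % cfg.columns;
  const row = Math.floor(instance / cfg.columns);
  return {
    x: col / cfg.columns,
    y: row / cfg.rows,
    width: 1 / cfg.columns,
    height: 1 / cfg.rows,
  };
}
```

In such a design, each instance would apply its viewport as a camera offset on the shared scene, while an instance configured to "complement" rather than "extend" could instead render an alternative perspective of the same synchronized state.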
Related papers
- ZenSVI: An Open-Source Software for the Integrated Acquisition, Processing and Analysis of Street View Imagery Towards Scalable Urban Science [1.5494074223643037]
Street view imagery (SVI) has been instrumental in many studies in the past decade to understand and characterize street features and the built environment.
We develop ZenSVI, a free and open-source Python package that integrates and implements the entire process of SVI analysis.
arXiv Detail & Related papers (2024-12-24T07:13:17Z)
- Real-Time Position-Aware View Synthesis from Single-View Input [3.2873782624127834]
We present a lightweight, position-aware network designed for real-time view synthesis from a single input image and a target pose.
This work marks a step toward enabling real-time view synthesis from a single image for live and interactive applications.
arXiv Detail & Related papers (2024-12-18T16:20:21Z)
- Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction [69.57190742976091]
We introduce Aguvis, a unified vision-based framework for autonomous GUI agents.
Our approach leverages image-based observations and grounds natural-language instructions in visual elements.
To address the limitations of previous work, we integrate explicit planning and reasoning within the model.
arXiv Detail & Related papers (2024-12-05T18:58:26Z)
- Visual Integration of Static and Dynamic Software Analysis in Code Reviews via Software City Visualization [42.18762603890493]
Software visualization approaches for code reviews are often implemented as standalone applications, which use static code analysis.
In this paper, we report on the novel and in-progress design and implementation of a web-based approach capable of combining static and dynamic analysis data.
Our architectural tool design incorporates modern web technologies and integrates with common Git hosting services.
arXiv Detail & Related papers (2024-08-15T13:19:55Z)
- Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs [61.143381152739046]
We introduce Cambrian-1, a family of multimodal LLMs (MLLMs) designed with a vision-centric approach.
Our study uses LLMs and visual instruction tuning as an interface to evaluate various visual representations.
We provide model weights, code, supporting tools, datasets, and detailed instruction-tuning and evaluation recipes.
arXiv Detail & Related papers (2024-06-24T17:59:42Z)
- EasyView: Bringing Performance Profiles into Integrated Development Environments [3.9895667172326257]
We develop EasyView, a solution that integrates the interpretation and visualization of various profiling results into coding environments.
First, we develop a generic data format, which enables EasyView to support mainstream profilers for different languages.
Second, we develop a set of customizable schemes to analyze and visualize the profiles in intuitive ways.
arXiv Detail & Related papers (2023-12-27T14:49:28Z)
- EasyVolcap: Accelerating Neural Volumetric Video Research [69.59671164891725]
Volumetric video is a technology that digitally records dynamic events such as artistic performances, sporting events, and remote conversations.
EasyVolcap is a Python & Pytorch library for unifying the process of multi-view data processing, 4D scene reconstruction, and efficient dynamic volumetric video rendering.
arXiv Detail & Related papers (2023-12-11T17:59:46Z)
- Collaborative, Code-Proximal Dynamic Software Visualization within Code Editors [55.57032418885258]
This paper introduces the design and proof-of-concept implementation for a software visualization approach that can be embedded into code editors.
Our contribution differs from related work in that we use dynamic analysis of a software system's runtime behavior.
Our visualization approach enhances common remote pair programming tools and is collaboratively usable by employing shared code cities.
arXiv Detail & Related papers (2023-08-30T06:35:40Z)
- Semantic Tracklets: An Object-Centric Representation for Visual Multi-Agent Reinforcement Learning [126.57680291438128]
We study whether scalability can be achieved via a disentangled representation.
We evaluate semantic tracklets on the visual multi-agent particle environment (VMPE) and on the challenging visual multi-agent GFootball environment.
Notably, this method is the first to successfully learn a strategy for five players in the GFootball environment using only visual data.
arXiv Detail & Related papers (2021-08-06T22:19:09Z)
- AEGIS: A real-time multimodal augmented reality computer vision based system to assist facial expression recognition for individuals with autism spectrum disorder [93.0013343535411]
This paper presents the development of a multimodal augmented reality (AR) system which combines the use of computer vision and deep convolutional neural networks (CNNs).
The proposed system, which we call AEGIS, is an assistive technology deployable on a variety of user devices including tablets, smartphones, video conference systems, or smartglasses.
We leverage both spatial and temporal information in order to provide an accurate expression prediction, which is then converted into its corresponding visualization and drawn on top of the original video frame.
arXiv Detail & Related papers (2020-10-22T17:20:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.