A Software Visualization Approach for Multiple Visual Output Devices
- URL: http://arxiv.org/abs/2409.02620v1
- Date: Wed, 4 Sep 2024 11:27:47 GMT
- Title: A Software Visualization Approach for Multiple Visual Output Devices
- Authors: Malte Hansen, Heiko Bielfeldt, Armin Bernstetter, Tom Kwasnitschka, Wilhelm Hasselbring
- Abstract summary: We present a novel approach to software visualization with software cities that fills a gap between existing approaches by using multiple displays or projectors.
Our web-based live trace visualization tool ExplorViz is extended with a service to synchronize the visualization across multiple browser instances.
A preliminary study indicates that this environment can be useful for collaborative exploration of software cities.
- Score: 0.24466725954625887
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As software systems grow, environments that not only facilitate program comprehension through software visualization but also enable collaborative exploration of software systems become increasingly important. Most approaches to software visualization focus on a single monitor as a visual output device, which offers limited immersion and little potential for collaboration. More recent approaches address augmented and virtual reality environments to increase immersion and enable collaboration to facilitate program comprehension. We present a novel approach to software visualization with software cities that fills a gap between existing approaches by using multiple displays or projectors. This increases the available screen real estate and enables new use case scenarios for co-located environments. Our web-based live trace visualization tool ExplorViz is extended with a service to synchronize the visualization across multiple browser instances. Multiple browser instances can then extend or complement each other's views with respect to a given configuration. The ARENA2, a spatially immersive visualization environment with five projectors, is used to showcase our approach. A preliminary study indicates that this environment can be useful for collaborative exploration of software cities. This publication is accompanied by a video. In addition, our implementation is open source and we invite other researchers to explore and adapt it for their use cases. Video URL: https://youtu.be/OiutBn3zIl8
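To illustrate the idea, here is a minimal sketch of such a synchronization service, assuming a WebSocket-based broadcast. The message shape and names (SyncMessage, instanceId, role) are hypothetical and not taken from ExplorViz's actual implementation.

```typescript
// Minimal sketch of a synchronization service (assumption: not ExplorViz's
// actual implementation). Each browser instance connects via WebSocket and
// publishes its visualization state; the service relays updates to all
// other instances, which apply them according to their local configuration.
import { WebSocketServer, WebSocket } from "ws";

// Hypothetical shape of a synchronized state update.
interface SyncMessage {
  instanceId: string; // sender's browser instance
  camera: { position: [number, number, number]; target: [number, number, number] };
  config?: { role: "main" | "extension"; projectorId?: number };
}

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  socket.on("message", (raw) => {
    const update: SyncMessage = JSON.parse(raw.toString());
    // Broadcast to every other connected browser instance so their
    // views can extend or complement the sender's view.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(JSON.stringify(update));
      }
    }
  });
});
```

Under this scheme, each browser instance subscribes to the relay and applies incoming updates according to its configured role, for example rendering an adjacent section of the software city on one of the five ARENA2 projectors.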
Related papers
- Visual Integration of Static and Dynamic Software Analysis in Code Reviews via Software City Visualization [42.18762603890493]
Software visualization approaches for code reviews are often implemented as standalone applications, which use static code analysis.
In this paper, we report on the novel and in-progress design and implementation of a web-based approach capable of combining static and dynamic analysis data.
Our architectural tool design incorporates modern web technologies and integrates with common Git hosting services.
arXiv Detail & Related papers (2024-08-15T13:19:55Z)
- Multi-person eye tracking for real-world scene perception in social settings [34.82692226532414]
We apply mobile eye tracking in a real-world multi-person setup and develop a system to stream, record, and analyse synchronised data.
Our system achieves precise time synchronisation and accurate gaze projection in challenging dynamic scenes.
This advancement enables insight into collaborative behaviour, group dynamics, and social interaction, with high ecological validity.
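As an illustration of the kind of time synchronization involved, the sketch below aligns timestamped gaze samples from several trackers onto a shared timeline; it is a generic nearest-timestamp resampling, not the paper's implementation, and the types (GazeSample) are hypothetical.

```typescript
// Illustrative sketch (not the paper's implementation): align gaze samples
// from several mobile eye trackers onto a common timeline by matching each
// reference timestamp to the nearest sample from every other device.
interface GazeSample {
  timestampMs: number; // already mapped to a shared clock
  x: number;           // gaze point in scene-camera coordinates
  y: number;
}

// Assumes a non-empty stream sorted by timestamp; binary search for the
// sample closest in time to tMs.
function nearestSample(stream: GazeSample[], tMs: number): GazeSample {
  let lo = 0, hi = stream.length - 1;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (stream[mid].timestampMs < tMs) lo = mid + 1; else hi = mid;
  }
  const cand = stream[lo];
  const prev = stream[lo - 1];
  return prev && tMs - prev.timestampMs < cand.timestampMs - tMs ? prev : cand;
}

// Resample every other participant's stream at the reference timestamps.
function synchronise(reference: GazeSample[], others: GazeSample[][]): GazeSample[][] {
  return others.map((stream) =>
    reference.map((ref) => nearestSample(stream, ref.timestampMs)),
  );
}
```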
arXiv Detail & Related papers (2024-07-08T19:33:17Z)
- Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs [56.391404083287235]
We introduce Cambrian-1, a family of multimodal LLMs (MLLMs) designed with a vision-centric approach.
Our study uses LLMs and visual instruction tuning as an interface to evaluate various visual representations.
We provide model weights, code, supporting tools, datasets, and detailed instruction-tuning and evaluation recipes.
arXiv Detail & Related papers (2024-06-24T17:59:42Z)
- AdaGlimpse: Active Visual Exploration with Arbitrary Glimpse Position and Scale [2.462953128215088]
Active Visual Exploration (AVE) is a task that involves dynamically selecting observations (glimpses).
Existing mobile platforms equipped with optical zoom capabilities can capture glimpses of arbitrary positions and scales.
AdaGlimpse uses Soft Actor-Critic, a reinforcement learning algorithm tailored for exploration tasks, to select glimpses of arbitrary position and scale.
arXiv Detail & Related papers (2024-04-04T14:35:49Z)
- EasyView: Bringing Performance Profiles into Integrated Development Environments [3.9895667172326257]
We develop EasyView, a solution to integrate the interpretation and visualization of various profiling results in the coding environments.
First, we develop a generic data format, which enables EasyView to support mainstream profilers for different languages.
Second, we develop a set of customizable schemes to analyze and visualize the profiles in intuitive ways.
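The sketch below illustrates what such a language-agnostic profile format might look like; the schema (ProfileNode, hotSpots) is hypothetical and is not EasyView's actual data format.

```typescript
// Hypothetical sketch of a language-agnostic profile format, in the spirit
// of EasyView's generic data format (the actual schema is not shown here).
interface ProfileNode {
  functionName: string;
  file: string;
  line: number;
  selfTimeMs: number;      // time spent in this function alone
  totalTimeMs: number;     // including callees
  children: ProfileNode[]; // call-tree structure shared across languages
}

interface Profile {
  language: "python" | "java" | "cpp" | string; // source profiler's language
  profiler: string;                             // originating tool
  root: ProfileNode;
}

// A viewer only needs to understand this one shape to render flame graphs
// or hot-spot tables for any supported profiler.
function hotSpots(node: ProfileNode, limit = 10): ProfileNode[] {
  const all: ProfileNode[] = [];
  const walk = (n: ProfileNode) => { all.push(n); n.children.forEach(walk); };
  walk(node);
  return all.sort((a, b) => b.selfTimeMs - a.selfTimeMs).slice(0, limit);
}
```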
arXiv Detail & Related papers (2023-12-27T14:49:28Z)
- EasyVolcap: Accelerating Neural Volumetric Video Research [69.59671164891725]
Volumetric video is a technology that digitally records dynamic events such as artistic performances, sporting events, and remote conversations.
EasyVolcap is a Python & PyTorch library for unifying the process of multi-view data processing, 4D scene reconstruction, and efficient dynamic volumetric video rendering.
arXiv Detail & Related papers (2023-12-11T17:59:46Z)
- Collaborative, Code-Proximal Dynamic Software Visualization within Code Editors [55.57032418885258]
This paper introduces the design and proof-of-concept implementation for a software visualization approach that can be embedded into code editors.
Our contribution differs from related work in that we use dynamic analysis of a software system's runtime behavior.
Our visualization approach enhances common remote pair programming tools and is collaboratively usable by employing shared code cities.
arXiv Detail & Related papers (2023-08-30T06:35:40Z)
- Muscle Vision: Real Time Keypoint Based Pose Classification of Physical Exercises [52.77024349608834]
3D human pose recognition extrapolated from video has advanced to the point of enabling real-time software applications.
We propose a new machine learning pipeline and web interface that performs human pose recognition on a live video feed to detect when common exercises are performed and classify them accordingly.
arXiv Detail & Related papers (2022-03-23T00:55:07Z)
- Semantic Tracklets: An Object-Centric Representation for Visual Multi-Agent Reinforcement Learning [126.57680291438128]
We study whether scalability can be achieved via a disentangled representation.
We evaluate semantic tracklets on the visual multi-agent particle environment (VMPE) and on the challenging visual multi-agent GFootball environment.
Notably, this method is the first to successfully learn a strategy for five players in the GFootball environment using only visual data.
arXiv Detail & Related papers (2021-08-06T22:19:09Z)
- AEGIS: A real-time multimodal augmented reality computer vision based system to assist facial expression recognition for individuals with autism spectrum disorder [93.0013343535411]
This paper presents the development of a multimodal augmented reality (AR) system which combines computer vision and deep convolutional neural networks (CNNs).
The proposed system, which we call AEGIS, is an assistive technology deployable on a variety of user devices including tablets, smartphones, video conference systems, or smartglasses.
We leverage both spatial and temporal information in order to provide an accurate expression prediction, which is then converted into its corresponding visualization and drawn on top of the original video frame.
arXiv Detail & Related papers (2020-10-22T17:20:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.