An Image-based Typology for Visualization
- URL: http://arxiv.org/abs/2403.05594v2
- Date: Wed, 20 Mar 2024 20:39:27 GMT
- Title: An Image-based Typology for Visualization
- Authors: Jian Chen, Petra Isenberg, Robert S. Laramee, Tobias Isenberg, Michael Sedlmair, Torsten Moeller, Rui Li
- Abstract summary: We present and discuss the results of a qualitative analysis of visual representations from images.
We derive a typology of 10 visualization types with defined groups.
We provide a dataset of 6,833 tagged images and an online tool that can be used to explore and analyze the large set of labeled images.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present and discuss the results of a qualitative analysis of visual representations from images. We labeled each image's essential stimuli, the removal of which would render a visualization uninterpretable. As a result, we derive a typology of 10 visualization types with defined groups. We describe the typology derivation process in which we engaged. The resulting typology and image analysis can serve a number of purposes: enabling researchers to study the evolution of the community and its research output over time, facilitating the categorization of visualization images for the purpose of research and teaching, allowing researchers and practitioners to identify visual design styles and to further align the quantification of any visual information processor, be that a person or an algorithmic observer, and facilitating a discussion of standardization in visualization. In addition to the visualization typology from images, we provide a dataset of 6,833 tagged images and an online tool that can be used to explore and analyze the large set of labeled images. The tool and dataset enable scholars to closely examine the diverse visual designs used and how they are published and communicated in our community. A pre-registration, a free copy of this paper, and all supplemental materials are available via osf.io/dxjwt.
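As a quick illustration of how such a tagged-image dataset could be explored programmatically, the sketch below tallies how often each visualization type occurs. It assumes a hypothetical CSV export with `image_id` and `type` columns; the actual files hosted at osf.io/dxjwt and the accompanying online tool may organize the tags differently, so treat this as a starting point rather than the authors' own tooling.

```python
# Minimal sketch: count visualization types in a hypothetical tag export.
# Assumes a CSV with columns "image_id" and "type"; the real dataset at
# osf.io/dxjwt may use a different layout, so adapt the column names.
import csv
from collections import Counter


def count_types(path: str) -> Counter:
    """Return a Counter mapping visualization type -> number of images."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["type"]] += 1
    return counts


if __name__ == "__main__":
    # Print types from most to least frequent.
    for vis_type, n in count_types("tagged_images.csv").most_common():
        print(f"{vis_type}: {n}")
```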
Related papers
- Pixels to Prose: Understanding the art of Image Captioning [1.9635669040319872]
Image captioning enables machines to interpret visual content and generate descriptive text.
The review traces the evolution of image captioning models to the latest cutting-edge solutions.
The paper also delves into the application of image captioning in the medical domain.
arXiv Detail & Related papers (2024-08-28T11:21:23Z)
- Hierarchical Text-to-Vision Self Supervised Alignment for Improved Histopathology Representation Learning [64.1316997189396]
We present a novel language-tied self-supervised learning framework, Hierarchical Language-tied Self-Supervision (HLSS) for histopathology images.
Our resulting model achieves state-of-the-art performance on two medical imaging benchmarks, OpenSRH and TCGA datasets.
arXiv Detail & Related papers (2024-03-21T17:58:56Z)
- Perceptual Grouping in Contrastive Vision-Language Models [59.1542019031645]
We show how vision-language models are able to understand where objects reside within an image and group together visually related parts of the imagery.
We propose a minimal set of modifications that results in models that uniquely learn both semantic and spatial information.
arXiv Detail & Related papers (2022-10-18T17:01:35Z)
- Peripheral Vision Transformer [52.55309200601883]
We take a biologically inspired approach and explore modeling peripheral vision in deep neural networks for visual recognition.
We propose to incorporate peripheral position encoding into the multi-head self-attention layers to let the network learn to partition the visual field into diverse peripheral regions given training data.
We evaluate the proposed network, dubbed PerViT, on the large-scale ImageNet dataset and systematically investigate the inner workings of the model for machine perception.
arXiv Detail & Related papers (2022-06-14T12:47:47Z)
- Visual Clues: Bridging Vision and Language Foundations for Image Paragraph Captioning [78.07495777674747]
We argue that by using visual clues to bridge large pretrained vision foundation models and language models, we can generate paragraph captions for images without any extra cross-modal training.
Thanks to the strong zero-shot capability of foundation models, we start by constructing a rich semantic representation of the image.
We use a large language model to produce a series of comprehensive descriptions of the visual content, which are then verified by the vision model to select the candidate that aligns best with the image.
arXiv Detail & Related papers (2022-06-03T22:33:09Z)
- Automatic Image Content Extraction: Operationalizing Machine Learning in Humanistic Photographic Studies of Large Visual Archives [81.88384269259706]
We introduce the Automatic Image Content Extraction framework for machine learning-based search and analysis of large image archives.
The proposed framework can be applied in several domains in humanities and social sciences.
arXiv Detail & Related papers (2022-04-05T12:19:24Z)
- Quantitative analysis of visual representation of sign elements in COVID-19 context [2.9409535911474967]
We propose using computer analysis to perform a quantitative analysis of the elements used in the visual creations produced in reference to the epidemic.
We analyze the images compiled in The Covid Art Museum's Instagram account to identify the different elements used to represent subjective experiences with regard to a global event.
This research reveals the elements that are repeated in images to create narratives and the relations of association that are established in the sample.
arXiv Detail & Related papers (2021-12-15T15:54:53Z)
- A survey of image labelling for computer vision applications [0.0]
The recent rise of deep learning algorithms for recognising image content has led to the emergence of ad-hoc labelling tools.
We perform a structured literature review to compile the underlying concepts and features of image labelling software.
arXiv Detail & Related papers (2021-04-18T16:01:55Z)
- A Decade Survey of Content Based Image Retrieval using Deep Learning [13.778851745408133]
This paper presents a comprehensive survey of deep learning based developments in the past decade for content based image retrieval.
The similarity between the representative features of the query image and dataset images is used to rank the images for retrieval.
Over the past decade, deep learning has emerged as a dominant alternative to hand-designed feature engineering.
arXiv Detail & Related papers (2020-11-23T02:12:30Z)
- Image Segmentation Using Deep Learning: A Survey [58.37211170954998]
Image segmentation is a key topic in image processing and computer vision.
There has been a substantial amount of work aimed at developing image segmentation approaches using deep learning models.
arXiv Detail & Related papers (2020-01-15T21:37:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.