VisTopics: A Visual Semantic Unsupervised Approach to Topic Modeling of Video and Image Data
- URL: http://arxiv.org/abs/2505.14868v1
- Date: Tue, 20 May 2025 19:59:41 GMT
- Title: VisTopics: A Visual Semantic Unsupervised Approach to Topic Modeling of Video and Image Data
- Authors: Ayse D Lokmanoglu, Dror Walter
- Abstract summary: This study introduces VisTopics, a computational framework designed to analyze large-scale visual datasets. Applying VisTopics to a dataset of 452 NBC News videos resulted in reducing 11,070 frames to 6,928 deduplicated frames, which were then semantically analyzed to uncover 35 topics.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding visual narratives is crucial for examining the evolving dynamics of media representation. This study introduces VisTopics, a computational framework designed to analyze large-scale visual datasets through an end-to-end pipeline encompassing frame extraction, deduplication, and semantic clustering. Applying VisTopics to a dataset of 452 NBC News videos resulted in reducing 11,070 frames to 6,928 deduplicated frames, which were then semantically analyzed to uncover 35 topics ranging from political events to environmental crises. By integrating Latent Dirichlet Allocation with caption-based semantic analysis, VisTopics demonstrates its potential to unravel patterns in visual framing across diverse contexts. This approach enables longitudinal studies and cross-platform comparisons, shedding light on the intersection of media, technology, and public discourse. The study validates the method's reliability through human coding accuracy metrics and emphasizes its scalability for communication research. By bridging the gap between visual representation and semantic meaning, VisTopics provides a transformative tool for advancing the methodological toolkit in computational media studies. Future research may leverage VisTopics for comparative analyses across media outlets or geographic regions, offering insights into the shifting landscapes of media narratives and their societal implications.
Related papers
- Automated Sentiment Classification and Topic Discovery in Large-Scale Social Media Streams [3.5279571333221913]
We present a framework for large-scale sentiment and topic analysis of Twitter discourse. Our pipeline begins with targeted data collection using conflict-specific keywords. We examine the relationship between sentiment and contextual features such as timestamp, geolocation, and lexical content.
arXiv Detail & Related papers (2025-05-03T18:04:57Z)
- Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems to see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
The models learned to bridge the gap between such modalities coupled with large-scale training data facilitate contextual reasoning, generalization, and prompt capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene or manipulating the robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z)
- Few Shot Semantic Segmentation: a review of methodologies, benchmarks, and open challenges [5.0243930429558885]
Few-Shot Semantic Segmentation is a novel task in computer vision, which aims at designing models capable of segmenting new semantic classes with only a few examples.
This paper consists of a comprehensive survey of Few-Shot Semantic Segmentation, tracing its evolution and exploring various model designs.
arXiv Detail & Related papers (2023-04-12T13:07:37Z)
- Cross-modal Semantic Enhanced Interaction for Image-Sentence Retrieval [8.855547063009828]
We propose a Cross-modal Semantic Enhanced Interaction method, termed CMSEI, for image-sentence retrieval.
We first design intra- and inter-modal spatial and semantic graph-based reasoning to enhance the semantic representations of objects.
To correlate the context of objects with the textual context, we further refine the visual semantic representation via the cross-level object-sentence and word-image based interactive attention.
arXiv Detail & Related papers (2022-10-17T10:01:16Z)
- Panoptic Segmentation: A Review [2.270719568619559]
This paper presents the first comprehensive review of existing panoptic segmentation methods.
Panoptic segmentation is currently under study to help gain a more nuanced understanding of image scenes for video surveillance, crowd counting, autonomous driving, and medical image analysis.
arXiv Detail & Related papers (2021-11-19T14:40:24Z)
- SocialVisTUM: An Interactive Visualization Toolkit for Correlated Neural Topic Models on Social Media Opinion Mining [0.07538606213726905]
Recent research in opinion mining proposed word embedding-based topic modeling methods.
We show how these methods can be used to display correlated topic models on social media texts using SocialVisTUM.
arXiv Detail & Related papers (2021-10-20T14:04:13Z)
- From Show to Tell: A Survey on Image Captioning [48.98681267347662]
Connecting Vision and Language plays an essential role in Generative Intelligence.
Research in image captioning has not yet converged on a definitive approach.
This work aims at providing a comprehensive overview and categorization of image captioning approaches.
arXiv Detail & Related papers (2021-07-14T18:00:54Z)
- Matching Visual Features to Hierarchical Semantic Topics for Image Paragraph Captioning [50.08729005865331]
This paper develops a plug-and-play hierarchical-topic-guided image paragraph generation framework.
To capture the correlations between the image and text at multiple levels of abstraction, we design a variational inference network.
To guide the paragraph generation, the learned hierarchical topics and visual features are integrated into the language model.
arXiv Detail & Related papers (2021-05-10T06:55:39Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Spatio-Temporal Graph for Video Captioning with Knowledge Distillation [50.034189314258356]
We propose a graph model for video captioning that exploits object interactions in space and time.
Our model builds interpretable links and is able to provide explicit visual grounding.
To avoid correlations caused by the variable number of objects, we propose an object-aware knowledge distillation mechanism.
arXiv Detail & Related papers (2020-03-31T03:58:11Z)
- Image Segmentation Using Deep Learning: A Survey [58.37211170954998]
Image segmentation is a key topic in image processing and computer vision.
There has been a substantial amount of works aimed at developing image segmentation approaches using deep learning models.
arXiv Detail & Related papers (2020-01-15T21:37:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.