Reconstruction-Driven Multimodal Representation Learning for Automated Media Understanding
- URL: http://arxiv.org/abs/2511.17596v1
- Date: Mon, 17 Nov 2025 19:13:51 GMT
- Title: Reconstruction-Driven Multimodal Representation Learning for Automated Media Understanding
- Authors: Yassir Benhammou, Suman Kalyan, Sujay Kumar
- Abstract summary: We propose a Multimodal Autoencoder that learns unified representations across text, audio, and visual data. We demonstrate significant improvements in clustering and alignment metrics compared to linear baselines. Results highlight the potential of reconstruction-driven multimodal learning to enhance automation, searchability, and content management efficiency in modern broadcast workflows.
- Score: 0.1411701037241356
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Broadcast and media organizations increasingly rely on artificial intelligence to automate the labor-intensive processes of content indexing, tagging, and metadata generation. However, existing AI systems typically operate on a single modality (such as video, audio, or text), limiting their understanding of complex, cross-modal relationships in broadcast material. In this work, we propose a Multimodal Autoencoder (MMAE) that learns unified representations across text, audio, and visual data, enabling end-to-end automation of metadata extraction and semantic clustering. The model is trained on the recently introduced LUMA dataset, a fully aligned benchmark of multimodal triplets representative of real-world media content. By minimizing joint reconstruction losses across modalities, the MMAE discovers modality-invariant semantic structures without relying on large paired or contrastive datasets. We demonstrate significant improvements in clustering and alignment metrics (Silhouette, ARI, NMI) compared to linear baselines, indicating that reconstruction-based multimodal embeddings can serve as a foundation for scalable metadata generation and cross-modal retrieval in broadcast archives. These results highlight the potential of reconstruction-driven multimodal learning to enhance automation, searchability, and content management efficiency in modern broadcast workflows.
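The abstract's core mechanism, a shared latent code trained by minimizing joint reconstruction losses across modalities and then evaluated with clustering metrics, can be illustrated with a short sketch. The following PyTorch/scikit-learn snippet is a minimal illustration assuming pre-extracted per-modality feature vectors, simple MLP encoders and decoders, mean fusion of latents, and placeholder dimensions and labels; the paper's actual MMAE architecture, fusion strategy, and LUMA feature dimensions are not specified here and may differ.

```python
import torch
import torch.nn as nn

class MultimodalAutoencoder(nn.Module):
    """Per-modality encoders and decoders sharing one latent space (illustrative only)."""
    def __init__(self, dims, latent_dim=256, hidden=512):
        super().__init__()
        self.encoders = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim))
            for m, d in dims.items()
        })
        self.decoders = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, d))
            for m, d in dims.items()
        })

    def forward(self, inputs):
        # Encode each modality, fuse by averaging into a shared code,
        # then reconstruct every modality from that single code.
        latents = [self.encoders[m](x) for m, x in inputs.items()]
        fused = torch.stack(latents, dim=0).mean(dim=0)
        recons = {m: self.decoders[m](fused) for m in inputs}
        return fused, recons

def joint_reconstruction_loss(inputs, recons):
    # Sum of per-modality reconstruction errors; minimizing this joint loss
    # pushes the shared code toward modality-invariant structure.
    return sum(nn.functional.mse_loss(recons[m], inputs[m]) for m in inputs)

# Illustrative training step on a batch of aligned text/audio/image feature triplets
# (feature dimensions here are placeholders, not the LUMA dimensions).
dims = {"text": 768, "audio": 128, "image": 512}
model = MultimodalAutoencoder(dims)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = {m: torch.randn(32, d) for m, d in dims.items()}

fused, recons = model(batch)
loss = joint_reconstruction_loss(batch, recons)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Evaluating the fused embeddings with the clustering/alignment metrics the
# abstract reports (Silhouette, ARI, NMI), here via k-means in scikit-learn.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, adjusted_rand_score, normalized_mutual_info_score

embeddings = fused.detach().numpy()
true_labels = torch.randint(0, 4, (32,)).numpy()   # placeholder ground-truth classes
pred_labels = KMeans(n_clusters=4, n_init=10).fit_predict(embeddings)
print("Silhouette:", silhouette_score(embeddings, pred_labels))
print("ARI:", adjusted_rand_score(true_labels, pred_labels))
print("NMI:", normalized_mutual_info_score(true_labels, pred_labels))
```

Mean fusion is only one possible way to combine the per-modality latents; the essential property the sketch captures is that a single shared code must reconstruct every modality, which is what the joint loss enforces.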
Related papers
- Dissecting Multimodal In-Context Learning: Modality Asymmetries and Circuit Dynamics in modern Transformers [59.472505916020936]
We investigate how transformers learn to associate information across modalities from in-context examples. We revisit core principles of unimodal ICL in modern transformers. Mechanistic analysis shows that both settings rely on an induction-style mechanism that copies labels from matching in-context exemplars.
arXiv Detail & Related papers (2026-01-28T17:37:28Z) - LADLE-MM: Limited Annotation based Detector with Learned Ensembles for Multimodal Misinformation [8.769506450302154]
LADLE-MM is a model-soup multimodal misinformation detector built from learned ensembles. It is composed of two unimodal branches and a third multimodal branch that enhances image and text representations. It achieves competitive performance on both binary and multi-label classification tasks.
arXiv Detail & Related papers (2025-12-23T11:14:58Z) - Scaling Beyond Context: A Survey of Multimodal Retrieval-Augmented Generation for Document Understanding [61.36285696607487]
Document understanding is critical for applications from financial analysis to scientific discovery. Current approaches, whether OCR-based pipelines feeding Large Language Models (LLMs) or native Multimodal LLMs (MLLMs), face key limitations. Retrieval-Augmented Generation (RAG) helps ground models in external data, but documents' multimodal nature, combining text, tables, charts, and layout, demands a more advanced paradigm: Multimodal RAG.
arXiv Detail & Related papers (2025-10-17T02:33:16Z) - Multimodal RAG for Unstructured Data: Leveraging Modality-Aware Knowledge Graphs with Hybrid Retrieval [1.160208922584163]
We present a Modality-Aware Hybrid retrieval Architecture (MAHA) for multimodal question answering with reasoning through a modality-aware knowledge graph. MAHA integrates dense vector retrieval with structured graph traversal, where the knowledge graph encodes cross-modal semantics and relationships. Our work establishes a scalable and interpretable retrieval framework that advances RAG systems by enabling modality-aware reasoning over unstructured multimodal data.
arXiv Detail & Related papers (2025-10-16T11:55:24Z) - NExT-OMNI: Towards Any-to-Any Omnimodal Foundation Models with Discrete Flow Matching [64.10695425442164]
We introduce NExT-OMNI, an open-source omnimodal foundation model that achieves unified modeling through discrete flow paradigms. Trained on large-scale interleaved text, image, video, and audio data, NExT-OMNI delivers competitive performance on multimodal generation and understanding benchmarks. To advance further research, we release training details and data protocols, and open-source both the code and model checkpoints.
arXiv Detail & Related papers (2025-10-15T16:25:18Z) - OneCAT: Decoder-Only Auto-Regressive Model for Unified Understanding and Generation [91.45421429922506]
OneCAT is a unified multimodal model that seamlessly integrates understanding, generation, and editing. Our framework eliminates the need for external components such as Vision Transformers (ViT) or a vision tokenizer during inference.
arXiv Detail & Related papers (2025-09-03T17:29:50Z) - Latent Multimodal Reconstruction for Misinformation Detection [15.66049149213069]
Multimodal misinformation, such as miscaptioned images, poses a growing challenge in the digital age. We introduce "Miscaption This!", a collection of LVLM-generated miscaptioned image datasets. We also introduce "Latent Multimodal Reconstruction" (LAMAR), a network trained to reconstruct the embeddings of truthful captions.
arXiv Detail & Related papers (2025-04-08T13:16:48Z) - GridMind: A Multi-Agent NLP Framework for Unified, Cross-Modal NFL Data Insights [0.0]
This paper introduces GridMind, a framework that unifies structured, semi-structured, and unstructured data through Retrieval-Augmented Generation (RAG) and large language models (LLMs). This approach aligns with the evolving field of multimodal representation learning, where unified models are increasingly essential for real-time, cross-modal interactions.
arXiv Detail & Related papers (2025-03-24T18:33:36Z) - Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach [56.55633052479446]
Web-scale visual entity recognition presents significant challenges due to the lack of clean, large-scale training data.
We propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation.
Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks.
arXiv Detail & Related papers (2024-10-31T06:55:24Z) - OmniDataComposer: A Unified Data Structure for Multimodal Data Fusion and Infinite Data Generation [8.149870655785955]
OmniDataComposer is an innovative approach for multimodal data fusion and unlimited data generation.
It is capable of identifying over 6400 categories of objects, substantially broadening the spectrum of visual information.
It amalgamates diverse modalities, promoting reciprocal enhancement among modalities and facilitating cross-modal data correction.
arXiv Detail & Related papers (2023-08-08T08:30:16Z) - Learning Multimodal Data Augmentation in Feature Space [65.54623807628536]
LeMDA is an easy-to-use method that automatically learns to jointly augment multimodal data in feature space.
We show that LeMDA can profoundly improve the performance of multimodal deep learning architectures.
arXiv Detail & Related papers (2022-12-29T20:39:36Z) - High-Modality Multimodal Transformer: Quantifying Modality & Interaction Heterogeneity for High-Modality Representation Learning [112.51498431119616]
This paper studies efficient representation learning for high-modality scenarios involving a large set of diverse modalities.
A single model, HighMMT, scales up to 10 modalities (text, image, audio, video, sensors, proprioception, speech, time-series, sets, and tables) and 15 tasks from 5 research areas.
arXiv Detail & Related papers (2022-03-02T18:56:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.