How Multimodal LLMs Solve Image Tasks: A Lens on Visual Grounding, Task Reasoning, and Answer Decoding
- URL: http://arxiv.org/abs/2508.20279v1
- Date: Wed, 27 Aug 2025 21:22:01 GMT
- Title: How Multimodal LLMs Solve Image Tasks: A Lens on Visual Grounding, Task Reasoning, and Answer Decoding
- Authors: Zhuoran Yu, Yong Jae Lee
- Abstract summary: We introduce a probing framework to analyze how MLLMs process visual and textual inputs across layers. We show that while the overall stage-wise structure remains stable across variations in visual tokenization, instruction tuning data, and pretraining corpus, the specific layer allocation to each stage shifts.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multimodal Large Language Models (MLLMs) have demonstrated strong performance across a wide range of vision-language tasks, yet their internal processing dynamics remain underexplored. In this work, we introduce a probing framework to systematically analyze how MLLMs process visual and textual inputs across layers. We train linear classifiers to predict fine-grained visual categories (e.g., dog breeds) from token embeddings extracted at each layer, using a standardized anchor question. To uncover the functional roles of different layers, we evaluate these probes under three types of controlled prompt variations: (1) lexical variants that test sensitivity to surface-level changes, (2) semantic negation variants that flip the expected answer by modifying the visual concept in the prompt, and (3) output format variants that preserve reasoning but alter the answer format. Applying our framework to LLaVA-1.5, LLaVA-Next-LLaMA-3, and Qwen2-VL, we identify a consistent stage-wise structure in which early layers perform visual grounding, middle layers support lexical integration and semantic reasoning, and final layers prepare task-specific outputs. We further show that while the overall stage-wise structure remains stable across variations in visual tokenization, instruction tuning data, and pretraining corpus, the specific layer allocation to each stage shifts notably with changes in the base LLM architecture. Our findings provide a unified perspective on the layer-wise organization of MLLMs and offer a lightweight, model-agnostic approach for analyzing multimodal representation dynamics.
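The probing recipe described in the abstract is simple enough to sketch. Below is a minimal, illustrative implementation that assumes per-layer last-token embeddings have already been extracted from an MLLM (e.g., with `output_hidden_states=True` in Hugging Face Transformers) under a fixed anchor question; the array shapes, the `probe_layers` helper, and the use of scikit-learn's `LogisticRegression` as the linear probe are assumptions for illustration, not the authors' exact setup.

```python
# Minimal sketch of layer-wise linear probing (illustrative, not the
# paper's exact implementation). Assumes embeddings were pre-extracted
# with a fixed anchor question, e.g. "What breed is the dog?".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_layers(embeddings: np.ndarray, labels: np.ndarray) -> list[float]:
    """Train one linear probe per layer; return held-out accuracy per layer.

    embeddings: (num_images, num_layers, hidden_dim) last-token states
    labels:     (num_images,) fine-grained class ids (e.g., dog breeds)
    """
    accuracies = []
    for layer in range(embeddings.shape[1]):
        X_tr, X_te, y_tr, y_te = train_test_split(
            embeddings[:, layer, :], labels,
            test_size=0.2, random_state=0, stratify=labels)
        clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
        accuracies.append(clf.score(X_te, y_te))
    return accuracies

# Running this once per prompt variant (lexical, semantic-negation,
# output-format) and comparing the per-layer accuracy curves is what
# localizes the grounding / reasoning / decoding stages.
```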
Related papers
- iGVLM: Dynamic Instruction-Guided Vision Encoding for Question-Aware Multimodal Understanding [15.347813563177253]
iGVLM is a framework for instruction-guided visual modulation. We show that iGVLM consistently enhances instruction sensitivity across diverse language backbones.
arXiv Detail & Related papers (2026-03-03T08:49:41Z)
- Stateful Cross-layer Vision Modulation [19.730096071316876]
Multimodal large language models (MLLMs) widely adopt multi-layer visual feature fusion to enhance visual representation. Existing approaches typically perform static concatenation or weighted aggregation after visual encoding, without intervening in the representation formation process itself. We propose a cross-layer memory-modulated vision framework (SCVM) to address these limitations.
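As a speculative sketch of the contrast drawn above, a recurrent state can carry information across encoder layers and modulate each layer's features before fusion, instead of concatenating them statically afterwards. The GRUCell-based design and mean-pooled fusion here are assumptions, not SCVM's published architecture.

```python
# Illustrative cross-layer memory modulation (an assumption, not SCVM's
# exact design): a running memory state gates each layer's features.
import torch
import torch.nn as nn

class CrossLayerMemory(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)   # memory carried across layers
        self.gate = nn.Linear(dim, dim)    # turns memory into a gate

    def forward(self, layer_feats: torch.Tensor) -> torch.Tensor:
        """layer_feats: (L, B, D) mean-pooled features per encoder layer."""
        state = torch.zeros_like(layer_feats[0])
        modulated = []
        for feats in layer_feats:                  # iterate shallow -> deep
            state = self.cell(feats, state)        # update running memory
            modulated.append(feats * torch.sigmoid(self.gate(state)))
        return torch.stack(modulated).mean(dim=0)  # fused representation
```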
arXiv Detail & Related papers (2026-02-28T13:57:19Z)
- From One-to-One to Many-to-Many: Dynamic Cross-Layer Injection for Deep Vision-Language Fusion [91.35078719566472]
Vision-Language Models (VLMs) create a severe visual feature bottleneck by using a crude, asymmetric connection. We introduce Cross-Layer Injection (CLI), a novel and lightweight framework that forges a dynamic many-to-many bridge between the two modalities.
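One way to picture a dynamic many-to-many bridge: predict, per LLM layer, input-dependent mixing weights over all vision-encoder layers. The routing module below is a guess at the idea for illustration, not CLI's published implementation.

```python
# Hypothetical many-to-many cross-layer routing (not CLI's actual code):
# each LLM layer receives its own input-dependent mixture of all vision
# layers' features.
import torch
import torch.nn as nn

class ManyToManyBridge(nn.Module):
    def __init__(self, n_vis_layers: int, n_llm_layers: int, dim: int):
        super().__init__()
        self.n_llm_layers = n_llm_layers
        # predict per-LLM-layer weights over vision layers from the input
        self.router = nn.Linear(dim, n_llm_layers * n_vis_layers)

    def forward(self, vis_feats: torch.Tensor) -> torch.Tensor:
        """vis_feats: (B, V, N, D) -> one visual stream per LLM layer (B, L, N, D)."""
        B, V, N, D = vis_feats.shape
        pooled = vis_feats.mean(dim=(1, 2))                    # (B, D)
        w = self.router(pooled).view(B, self.n_llm_layers, V)  # (B, L, V)
        w = w.softmax(dim=-1)                                  # mix weights
        return torch.einsum("blv,bvnd->blnd", w, vis_feats)
```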
arXiv Detail & Related papers (2026-01-15T18:59:10Z)
- Multimodal Continual Learning with MLLMs from Multi-scenario Perspectives [61.64550292163646]
Continual learning in visual understanding aims to deal with catastrophic forgetting in Multimodal Large Language Models (MLLMs). We construct a multimodal visual understanding dataset (MSVQA) encompassing four different scenarios and perspectives. We propose mUltimodal coNtInual learning with MLLMs From multi-scenarIo pERspectives (UNIFIER) to address visual discrepancies while learning different scenarios.
arXiv Detail & Related papers (2025-11-23T15:47:49Z)
- Dynamic Embedding of Hierarchical Visual Features for Efficient Vision-Language Fine-Tuning [5.85033069870214]
We propose DEHVF, an efficient vision-language fine-tuning method based on dynamic embedding and fusion of hierarchical visual features. By fine-tuning only a small number of parameters, DEHVF achieves precise alignment and complementation of cross-modal information.
arXiv Detail & Related papers (2025-08-25T03:57:46Z)
- Rethinking Visual Layer Selection in Multimodal LLMs [46.091556112958884]
This work proposes a Layer-wise Similarity approach to group CLIP-ViT layers with similar behaviors into shallow, middle, and deep categories. We revisit the visual layer selection problem in MLLMs at scale, training LLaVA-style models ranging from 1.4B to 7B parameters. We find that: (1) deep layers are essential for OCR tasks; (2) shallow and middle layers substantially outperform deep layers on reasoning tasks involving counting, positioning, and object localization; and (3) a lightweight fusion of features across shallow, middle, and deep layers consistently outperforms specialized fusion baselines and single-layer selection.
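A rough illustration of the grouping idea (not the paper's published procedure): cluster contiguous encoder layers by where the feature similarity between successive layers drops. The mean-pooling and the cut heuristic below are assumptions.

```python
# Hypothetical layer grouping by successive-layer feature similarity
# (an assumption, not the paper's exact recipe).
import torch
import torch.nn.functional as F

def group_layers(layer_feats: torch.Tensor, n_groups: int = 3) -> list[list[int]]:
    """layer_feats: (num_layers, num_samples, dim) features per ViT layer."""
    pooled = F.normalize(layer_feats.mean(dim=1), dim=-1)    # (L, D)
    # cosine similarity between each layer and its successor
    step_sim = (pooled[:-1] * pooled[1:]).sum(dim=-1)        # (L-1,)
    # cut at the n_groups-1 lowest-similarity transitions
    cuts = sorted(step_sim.topk(n_groups - 1, largest=False).indices.tolist())
    groups, start = [], 0
    for c in cuts:
        groups.append(list(range(start, c + 1)))
        start = c + 1
    groups.append(list(range(start, pooled.shape[0])))
    return groups  # e.g., shallow / middle / deep layer index lists
```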
arXiv Detail & Related papers (2025-04-30T09:07:10Z)
- Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model [63.14883657299359]
Multimodal Large Language Models (MLLMs) integrate visual and linguistic reasoning to address complex tasks such as image captioning and visual question answering. Tuning MLLMs for downstream tasks encounters two key challenges: Task-Expert Specialization, where distribution shifts between pre-training and target datasets constrain target performance, and Open-World Stabilization, where catastrophic forgetting erases the model's general knowledge.
arXiv Detail & Related papers (2025-03-06T15:29:13Z)
- Instruction-Guided Fusion of Multi-Layer Visual Features in Large Vision-Language Models [50.98559225639266]
We investigate the contributions of visual features from different encoder layers using 18 benchmarks spanning 6 task categories. Our findings reveal that multi-layer features provide complementary strengths with varying task dependencies, and uniform fusion leads to suboptimal performance. We propose the instruction-guided vision aggregator, a module that dynamically integrates multi-layer visual features based on textual instructions.
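In the spirit of the aggregator described above, a minimal sketch: let a pooled instruction embedding score each encoder layer and softmax the scores into per-layer fusion weights. The single linear query and the mean-pooled keys are illustrative assumptions, not the paper's module.

```python
# Illustrative instruction-guided layer aggregation (an assumption,
# not the paper's exact module).
import torch
import torch.nn as nn

class InstructionGuidedAggregator(nn.Module):
    def __init__(self, vis_dim: int, txt_dim: int):
        super().__init__()
        self.query = nn.Linear(txt_dim, vis_dim)  # instruction -> query

    def forward(self, layer_feats: torch.Tensor, instr: torch.Tensor) -> torch.Tensor:
        """layer_feats: (B, L, N, D) features from L encoder layers;
        instr: (B, txt_dim) pooled instruction embedding."""
        q = self.query(instr)                          # (B, D)
        keys = layer_feats.mean(dim=2)                 # (B, L, D) per-layer key
        logits = torch.einsum("bd,bld->bl", q, keys)   # (B, L)
        w = logits.softmax(dim=-1)                     # per-layer weights
        return torch.einsum("bl,blnd->bnd", w, layer_feats)  # (B, N, D)
```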
arXiv Detail & Related papers (2024-12-26T05:41:31Z)
- Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models [73.40350756742231]
Visually-conditioned language models (VLMs) have seen growing adoption in applications such as visual dialogue, scene understanding, and robotic task planning.
Despite the volume of new releases, key design decisions around image preprocessing, architecture, and optimization are under-explored.
arXiv Detail & Related papers (2024-02-12T18:21:14Z)