OneVision-Encoder: Codec-Aligned Sparsity as a Foundational Principle for Multimodal Intelligence
- URL: http://arxiv.org/abs/2602.08683v2
- Date: Sun, 15 Feb 2026 18:16:11 GMT
- Title: OneVision-Encoder: Codec-Aligned Sparsity as a Foundational Principle for Multimodal Intelligence
- Authors: Feilong Tang, Xiang An, Yunyao Yan, Yin Xie, Bin Qin, Kaicheng Yang, Yifei Shen, Yuanhan Zhang, Chunyuan Li, Shikun Feng, Changrui Chen, Huajie Tan, Ming Hu, Manyuan Zhang, Bo Li, Ziyong Feng, Ziwei Liu, Zongyuan Ge, Jiankang Deng
- Abstract summary: OneVision-Encoder encodes video by compressing visual structure into semantic meaning. Codec-aligned, patch-level sparsity is a foundational principle, enabling OV-Encoder as a scalable engine for next-generation visual generalists.
- Score: 113.73007911004446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hypothesis. Artificial general intelligence is, at its core, a compression problem. Effective compression demands resonance: deep learning scales best when its architecture aligns with the fundamental structure of the data. These are the fundamental principles. Yet, modern vision architectures have strayed from these truths: visual signals are highly redundant, while discriminative information, the surprise, is sparse. Current models process dense pixel grids uniformly, wasting vast compute on static background rather than focusing on the predictive residuals that define motion and meaning. We argue that to solve visual understanding, we must align our architectures with the information-theoretic principles of video, i.e., Codecs. Method. OneVision-Encoder encodes video by compressing predictive visual structure into semantic meaning. By adopting Codec Patchification, OV-Encoder abandons uniform computation to focus exclusively on the 3.1%-25% of regions rich in signal entropy. To unify spatial and temporal reasoning under irregular token layouts, OneVision-Encoder employs a shared 3D RoPE and is trained with a large-scale cluster discrimination objective over more than one million semantic concepts, jointly capturing object permanence and motion dynamics. Evidence. The results validate our core hypothesis: efficiency and accuracy are not a trade-off; they are positively correlated. When integrated into an LLM, OV-Encoder consistently outperforms strong vision backbones such as Qwen3-ViT and SigLIP2 across 16 image, video, and document understanding benchmarks, despite using substantially fewer visual tokens and less pretraining data. Notably, on video understanding tasks, OV-Encoder achieves an average improvement of 4.1% over Qwen3-ViT. Codec-aligned, patch-level sparsity is a foundational principle, enabling OV-Encoder as a scalable engine for next-generation visual generalists.
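The codec-aligned mechanism described in the abstract is concrete enough to sketch. The snippet below is a minimal illustration under stated assumptions, not the OneVision-Encoder implementation: it keeps a dense keyframe, retains in later frames only the patches with the largest temporal residual energy (a stand-in for the 3.1%-25% of high-entropy regions), and records the (t, y, x) coordinates that a shared 3D RoPE would consume over the resulting irregular token layout. The function names, the residual-energy criterion, and the keyframe heuristic are all assumptions made for illustration.

```python
# Minimal sketch of codec-aligned patch selection (NOT the official
# OneVision-Encoder code). Keep a dense keyframe, then keep only the
# patches whose temporal residual ("surprise" vs. the previous frame)
# carries the most energy, plus their (t, y, x) positions for 3D RoPE.
import numpy as np

def patchify(frames: np.ndarray, patch: int = 16) -> np.ndarray:
    """Split (T, H, W, C) frames into a (T, H//patch, W//patch, patch*patch*C) grid."""
    T, H, W, C = frames.shape
    gh, gw = H // patch, W // patch
    x = frames[:, : gh * patch, : gw * patch, :]
    x = x.reshape(T, gh, patch, gw, patch, C).transpose(0, 1, 3, 2, 4, 5)
    return x.reshape(T, gh, gw, patch * patch * C)

def select_residual_patches(frames: np.ndarray, keep_ratio: float = 0.25, patch: int = 16):
    """Return tokens for high-entropy patches plus their (t, y, x) coordinates."""
    patches = patchify(frames.astype(np.float32), patch)   # (T, gh, gw, D)
    T, gh, gw, D = patches.shape
    tokens, positions = [], []

    # Keyframe: keep every patch, as a codec keeps a full intra-coded frame.
    for y in range(gh):
        for x in range(gw):
            tokens.append(patches[0, y, x])
            positions.append((0, y, x))

    # Later frames: keep only the top-k patches by residual energy,
    # mimicking how a codec spends bits on predictive residuals.
    k = max(1, int(keep_ratio * gh * gw))
    for t in range(1, T):
        residual = patches[t] - patches[t - 1]              # (gh, gw, D)
        energy = (residual ** 2).sum(axis=-1).ravel()       # (gh*gw,)
        for idx in np.argsort(energy)[-k:]:
            y, x = divmod(int(idx), gw)
            tokens.append(patches[t, y, x])
            positions.append((t, y, x))

    # Irregular token set + 3D coordinates for a shared 3D RoPE.
    return np.stack(tokens), np.array(positions)
```

As a usage example, select_residual_patches(frames, keep_ratio=0.1) on a (T, 224, 224, 3) clip yields roughly 196 + 0.1 * 196 * (T - 1) tokens instead of 196 * T for dense patchification; the saving comes from the token count itself, not from any change to the attention mechanism.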
Related papers
- ApET: Approximation-Error Guided Token Compression for Efficient VLMs [16.4657793751671]
We present ApET, an Approximation-Error guided Token compression framework. We show that ApET retains 95.2% of the original performance on image-understanding tasks and even attains 100.4% on video-understanding tasks. Thanks to its attention-free design, ApET seamlessly integrates with FlashAttention, enabling further inference speedups and making VLM deployment more practical. (A generic sketch of error-guided token pruning appears after this list.)
arXiv Detail & Related papers (2026-02-23T14:15:37Z)
- OpenVision 3: A Family of Unified Visual Encoder for Both Understanding and Generation [101.82480298904225]
This paper presents a family of advanced vision encoders, named OpenVision 3, that learns a single, unified visual representation. Our core architecture is simple: we feed VAE-compressed image latents to a ViT encoder and train its output to support two complementary roles. For multimodal understanding, we plug the encoder into the LLaVA-1.5 framework; for generation, we test it under the RAE framework.
arXiv Detail & Related papers (2026-01-21T18:47:12Z) - VQRAE: Representation Quantization Autoencoders for Multimodal Understanding, Generation and Reconstruction [83.50898344094153]
VQRAE produces Continuous semantic features for image understanding and tokens for visual generation within a unified tokenizer.<n>Design enables negligible semantic information for maintaining the ability of multimodal understanding, discrete tokens.<n>VQRAE presents competitive performance on several benchmarks of visual understanding, generation and reconstruction.
arXiv Detail & Related papers (2025-11-28T17:26:34Z)
- Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens [54.18057944158818]
Chain-of-Visual-Thought (COVT) is a framework that enables Vision-Language Models (VLMs) to reason through continuous visual tokens. Within a small budget of roughly 20 tokens, COVT distills knowledge from lightweight vision experts. During training, the VLM with COVT autoregressively predicts visual tokens to reconstruct dense supervision signals.
arXiv Detail & Related papers (2025-11-24T18:55:19Z)
- Generative Latent Video Compression [26.99743586846841]
We present Generative Latent Video Compression (GLVC), an effective framework for perceptual video compression. GLVC employs a pretrained continuous tokenizer to project video frames into a perceptually aligned latent space. We show GLVC achieves state-of-the-art performance in terms of DISTS and LPIPS metrics.
arXiv Detail & Related papers (2025-10-11T03:28:49Z)
- Unveiling Encoder-Free Vision-Language Models [62.52803514667452]
Existing vision-language models (VLMs) mostly rely on vision encoders to extract visual features followed by large language models (LLMs) for visual-language tasks.
We bridge the gap between encoder-based and encoder-free models, and present a simple yet effective training recipe towards pure VLMs.
We launch EVE, an encoder-free vision-language model that can be trained and forwarded efficiently.
arXiv Detail & Related papers (2024-06-17T17:59:44Z)
- A Coding Framework and Benchmark towards Low-Bitrate Video Understanding [63.05385140193666]
We propose a traditional-neural mixed coding framework that takes advantage of both traditional codecs and neural networks (NNs).
The framework is optimized by ensuring that a transportation-efficient semantic representation of the video is preserved.
We build a low-bitrate video understanding benchmark with three downstream tasks on eight datasets, demonstrating the notable superiority of our approach.
arXiv Detail & Related papers (2022-02-06T16:29:15Z)
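Several of the related papers above, ApET in particular, compress visual tokens before they reach the LLM. As referenced in the ApET entry, the sketch below illustrates the generic idea of approximation-error guided pruning; the mean-token approximation and the scoring rule are assumptions made for illustration, not ApET's actual criterion.

```python
# Generic sketch of approximation-error guided token pruning (not ApET's
# algorithm). Tokens that a cheap shared approximation reconstructs poorly
# are treated as informative and kept; the rest are dropped. The scoring is
# attention-free, so it composes with FlashAttention, which never exposes
# the attention matrix.
import numpy as np

def prune_tokens(tokens: np.ndarray, keep_ratio: float = 0.25) -> np.ndarray:
    """Keep the tokens with the largest approximation error. tokens: (N, D)."""
    approx = tokens.mean(axis=0, keepdims=True)          # crude shared approximation
    error = np.linalg.norm(tokens - approx, axis=-1)     # per-token approximation error
    k = max(1, int(keep_ratio * len(tokens)))
    keep = np.argsort(error)[-k:]                        # highest-error tokens survive
    return tokens[np.sort(keep)]                         # preserve original token order

# Example: 576 visual tokens of width 1024 reduced to 144 before the LLM.
reduced = prune_tokens(np.random.randn(576, 1024), keep_ratio=0.25)
print(reduced.shape)  # (144, 1024)
```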