Contrastive Representation Regularization for Vision-Language-Action Models
- URL: http://arxiv.org/abs/2510.01711v2
- Date: Mon, 13 Oct 2025 07:50:27 GMT
- Title: Contrastive Representation Regularization for Vision-Language-Action Models
- Authors: Taeyoung Kim, Jimin Lee, Myungkyu Koo, Dongyoung Kim, Kyungmin Lee, Changyeon Kim, Younggyo Seo, Jinwoo Shin
- Abstract summary: We introduce Robot State-aware Contrastive Loss (RS-CL), a representation regularization for Vision-Language-Action (VLA) models. In particular, RS-CL aligns the representations more closely with the robot's proprioceptive states, by using relative distances between the states as soft supervision. Our empirical results demonstrate that RS-CL substantially improves the manipulation performance of state-of-the-art VLA models.
- Score: 64.10170453130324
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vision-Language-Action (VLA) models have shown their capabilities in robot manipulation by leveraging rich representations from pre-trained Vision-Language Models (VLMs). However, their representations arguably remain suboptimal, lacking sensitivity to robotic signals such as control actions and proprioceptive states. To address this issue, we introduce Robot State-aware Contrastive Loss (RS-CL), a simple and effective representation regularization for VLA models, designed to bridge the gap between VLM representations and robotic signals. In particular, RS-CL aligns the representations more closely with the robot's proprioceptive states by using relative distances between the states as soft supervision. Complementing the original action prediction objective, RS-CL effectively enhances control-relevant representation learning, while remaining lightweight and fully compatible with standard VLA training pipelines. Our empirical results demonstrate that RS-CL substantially improves the manipulation performance of state-of-the-art VLA models; it pushes the prior art from 30.8% to 41.5% on pick-and-place tasks in RoboCasa-Kitchen, through more accurate positioning during grasping and placing, and boosts success rates from 45.0% to 58.3% on challenging real-robot manipulation tasks.
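The abstract describes RS-CL as a contrastive regularizer that uses relative distances between proprioceptive states as soft supervision for the representation space. Below is a minimal PyTorch sketch of how such a batch-level soft-target contrastive loss could look; the function name, Euclidean distance metric, temperatures, and pooling choice are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def rs_cl_loss(reps: torch.Tensor, states: torch.Tensor,
               tau_rep: float = 0.1, tau_state: float = 0.1) -> torch.Tensor:
    """Hypothetical robot-state-aware contrastive loss (sketch, not the paper's code).

    reps:   (B, D) pooled VLA representations for a batch of timesteps.
    states: (B, S) proprioceptive states (e.g. joint positions) at the same timesteps.
    Pairs of timesteps with similar states receive higher soft target weight,
    pulling their representations closer together.
    """
    B = reps.size(0)
    reps = F.normalize(reps, dim=-1)

    # Pairwise representation similarities used as contrastive logits.
    logits = reps @ reps.t() / tau_rep                       # (B, B)

    # Soft supervision: smaller state distance -> larger target probability.
    state_dist = torch.cdist(states, states, p=2)            # (B, B)
    targets = F.softmax(-state_dist / tau_state, dim=-1)     # row-stochastic

    # Exclude self-pairs, then renormalize the target distribution.
    mask = torch.eye(B, dtype=torch.bool, device=reps.device)
    logits = logits.masked_fill(mask, -1e9)
    targets = targets.masked_fill(mask, 0.0)
    targets = targets / targets.sum(dim=-1, keepdim=True)

    # Soft cross-entropy between state-derived targets and representation softmax.
    return -(targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

if __name__ == "__main__":
    reps = torch.randn(32, 512)      # pooled features from the VLA backbone (assumed)
    states = torch.randn(32, 14)     # e.g. joint positions + gripper state (assumed)
    loss = rs_cl_loss(reps, states)  # would be added to the action prediction loss
    print(loss.item())
```

In this reading, the loss acts purely as an auxiliary term alongside the action prediction objective, which matches the abstract's claim that RS-CL is lightweight and compatible with standard VLA training.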
Related papers
- Universal Pose Pretraining for Generalizable Vision-Language-Action Policies [83.39008378156647]
Existing Vision-Language-Action (VLA) models often suffer from feature collapse and low training efficiency. We propose Pose-VLA, a decoupled paradigm that separates VLA training into a pre-training phase for extracting universal 3D spatial priors. Our framework follows a two-stage pre-training pipeline, establishing fundamental spatial grounding via poses followed by motion alignment.
arXiv Detail & Related papers (2026-02-23T11:00:08Z) - Robotic VLA Benefits from Joint Learning with Motion Image Diffusion [114.60268819583017]
Vision-Language-Action (VLA) models have achieved remarkable progress in robotic manipulation by mapping multimodal observations and instructions directly to actions. We propose joint learning with motion image diffusion, a novel strategy that enhances VLA models with motion reasoning capabilities. Experiments in both simulation and real-world environments demonstrate that joint learning with motion image diffusion improves the success rate of pi-series VLAs to 97.5%.
arXiv Detail & Related papers (2025-12-19T19:07:53Z) - Towards Accessible Physical AI: LoRA-Based Fine-Tuning of VLA Models for Real-World Robot Control [0.0]
This paper presents an efficient fine-tuning methodology and real-world deployment analysis for adapting VLA models to low-cost robotic manipulation systems. We propose a resource-efficient fine-tuning strategy using Low-Rank Adaptation (LoRA) and quantization techniques. Our methodology addresses the critical challenge of adapting pre-trained VLA models to new robot embodiments with limited demonstration data.
arXiv Detail & Related papers (2025-12-11T16:25:30Z) - EvoVLA: Self-Evolving Vision-Language-Action Model [11.746804244345613]
Long-horizon robotic manipulation remains challenging for Vision-Language-Action (VLA) models. We present EvoVLA, a self-supervised VLA framework that addresses this issue through three complementary components. EvoVLA achieves one-and-a-half times better sample efficiency and reduces stage hallucination from 38.5% to 14.8%.
arXiv Detail & Related papers (2025-11-20T09:08:33Z) - ROSA: Harnessing Robot States for Vision-Language and Action Alignment [24.426285156386715]
Vision-Language Models (VLMs) have made significant advances in end-to-end robotic control. We propose a novel training paradigm, ROSA, which leverages robot state estimation to improve alignment between vision-language and action spaces.
arXiv Detail & Related papers (2025-06-16T16:34:20Z) - From Seeing to Doing: Bridging Reasoning and Decision for Robotic Manipulation [35.79160868966466]
We propose FSD (From Seeing to Doing), a novel vision-language model that generates intermediate representations through spatial relationship reasoning. Our approach combines a hierarchical data pipeline for training with a self-consistency mechanism that aligns spatial coordinates with visual signals. We show that FSD achieves a 40.6% success rate in SimplerEnv and a 72% success rate across 8 real-world tasks, outperforming the strongest baseline by 30%.
arXiv Detail & Related papers (2025-05-13T13:20:46Z) - CoT-VLA: Visual Chain-of-Thought Reasoning for Vision-Language-Action Models [89.44024245194315]
We introduce a method that incorporates explicit visual chain-of-thought (CoT) reasoning into vision-language-action models (VLAs). We introduce CoT-VLA, a state-of-the-art 7B VLA that can understand and generate visual and action tokens. Our experimental results demonstrate that CoT-VLA achieves strong performance, outperforming the state-of-the-art VLA model by 17% in real-world manipulation tasks and 6% in simulation benchmarks.
arXiv Detail & Related papers (2025-03-27T22:23:04Z) - TraceVLA: Visual Trace Prompting Enhances Spatial-Temporal Awareness for Generalist Robotic Policies [95.30717188630432]
We introduce visual trace prompting to facilitate VLA models' spatial-temporal awareness for action prediction. We develop a new TraceVLA model by finetuning OpenVLA on our own collected dataset of 150K robot manipulation trajectories. We present a compact VLA model based on 4B Phi-3-Vision, pretrained on the Open-X-Embodiment and finetuned on our dataset.
arXiv Detail & Related papers (2024-12-13T18:40:51Z) - CogACT: A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation [100.25567121604382]
Vision-Language-Action (VLA) models have improved robotic manipulation in terms of language-guided task execution and generalization to unseen scenarios. We present a new advanced VLA architecture derived from Vision-Language Models (VLMs). We show that our model not only significantly surpasses existing VLAs in task performance but also exhibits remarkable adaptation to new robots and generalization to unseen objects and backgrounds.
arXiv Detail & Related papers (2024-11-29T12:06:03Z)