Survey of Vision-Language-Action Models for Embodied Manipulation
- URL: http://arxiv.org/abs/2508.15201v1
- Date: Thu, 21 Aug 2025 03:30:04 GMT
- Title: Survey of Vision-Language-Action Models for Embodied Manipulation
- Authors: Haoran Li, Yuhui Chen, Wenbo Cui, Weiheng Liu, Kai Liu, Mingcai Zhou, Zhengtao Zhang, Dongbin Zhao
- Abstract summary: Embodied intelligence systems enhance agent capabilities through continuous environment interactions. Vision-Language-Action models, inspired by advancements in large foundation models, serve as universal robotic control frameworks. This survey comprehensively reviews VLA models for embodied manipulation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Embodied intelligence systems, which enhance agent capabilities through continuous environment interactions, have garnered significant attention from both academia and industry. Vision-Language-Action (VLA) models, inspired by advances in large foundation models, serve as universal robotic control frameworks that substantially improve agent-environment interaction in embodied intelligence systems, broadening the application scenarios of embodied AI robots. This survey comprehensively reviews VLA models for embodied manipulation. First, it chronicles the developmental trajectory of VLA architectures. It then analyzes current research across five critical dimensions: VLA model structures, training datasets, pre-training methods, post-training methods, and model evaluation. Finally, it synthesizes key challenges in VLA development and real-world deployment and outlines promising future research directions.
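To make the survey's model-structure dimension concrete, the sketch below illustrates the generic VLA pattern it describes: a visual observation and a language instruction are encoded separately, fused, and mapped to a low-level robot action. This is a minimal illustration under assumed interfaces; all class and variable names (ToyVLA, vision, language, head) are hypothetical placeholders, not the architecture of any model covered in the survey.

```python
# Illustrative sketch of the generic VLA pattern: vision and language
# features are fused to predict a low-level robot action. All names here
# are hypothetical; no specific surveyed model is implemented.
import torch
import torch.nn as nn

class ToyVLA(nn.Module):
    def __init__(self, dim=256, vocab_size=10_000, action_dim=7):
        super().__init__()
        # Vision branch: flattens an RGB observation into a feature vector.
        self.vision = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim), nn.ReLU())
        # Language branch: mean-pools token embeddings of the instruction.
        self.language = nn.EmbeddingBag(vocab_size, dim)
        # Action head: fuses both modalities and regresses a continuous
        # action, e.g. a 7-DoF end-effector command for manipulation.
        self.head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, action_dim)
        )

    def forward(self, image, instruction_tokens):
        fused = torch.cat(
            [self.vision(image), self.language(instruction_tokens)], dim=-1
        )
        return self.head(fused)

model = ToyVLA()
image = torch.rand(1, 3, 64, 64)            # one 64x64 RGB observation
tokens = torch.randint(0, 10_000, (1, 8))   # one toy tokenized instruction
action = model(image, tokens)               # -> tensor of shape (1, 7)
```

In real VLA systems the toy branches above are replaced by a pretrained vision backbone and a large language model, and the action head is often an autoregressive or diffusion decoder rather than a plain regressor.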
Related papers
- Vlaser: Vision-Language-Action Model with Synergistic Embodied Reasoning [124.48672228625821]
We introduce Vlaser - a Vision-Language-Action Model with synergistic embodied reasoning capability. Vlaser achieves state-of-the-art performance across a range of embodied reasoning benchmarks, state-of-the-art results on the WidowX benchmark, and competitive performance on the Google Robot benchmark.
arXiv Detail & Related papers (2025-10-13T05:51:22Z)
- Vision-Language-Action Models for Robotics: A Review Towards Real-World Applications [36.083681915495205]
Vision-Language-Action (VLA) models aim to learn policies that generalise across diverse tasks, objects, embodiments, and environments. This work offers a comprehensive, full-stack review, integrating both software and hardware components of VLA systems.
arXiv Detail & Related papers (2025-10-08T14:38:25Z)
- Pure Vision Language Action (VLA) Models: A Comprehensive Survey [16.014856048038272]
The emergence of Vision Language Action (VLA) models marks a paradigm shift from traditional policy-based control to generalized robotics. This survey delves into advanced VLA methods, aiming to provide a clear taxonomy and a systematic, comprehensive review of existing research.
arXiv Detail & Related papers (2025-09-23T13:53:52Z)
- Large VLM-based Vision-Language-Action Models for Robotic Manipulation: A Survey [45.10095869091538]
Vision-Language-Action (VLA) models, built upon Large Vision-Language Models (VLMs) pretrained on vast image-text datasets, have emerged as a transformative paradigm. This survey provides the first systematic, taxonomy-oriented review of large VLM-based VLA models for robotic manipulation.
arXiv Detail & Related papers (2025-08-18T16:45:48Z)
- Edge General Intelligence Through World Models and Agentic AI: Fundamentals, Solutions, and Challenges [87.02855999212817]
Edge General Intelligence (EGI) represents a transformative evolution of edge computing, where distributed agents possess the capability to perceive, reason, and act autonomously. World models act as proactive internal simulators that not only predict but also actively imagine future trajectories, reason under uncertainty, and plan multi-step actions with foresight. This survey bridges the gap by offering a comprehensive analysis of how world models can empower agentic artificial intelligence (AI) systems at the edge.
arXiv Detail & Related papers (2025-08-13T07:29:40Z)
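The world-model entry above casts world models as internal simulators used for prediction and multi-step planning. A minimal sketch of that idea, under assumed interfaces, is random-shooting model-predictive control: a learned dynamics model imagines the outcomes of candidate action sequences, and the first action of the best sequence is executed. The dynamics network and reward function below are hypothetical stand-ins, not any surveyed system.

```python
# Toy world-model planning loop: a learned dynamics model imagines the
# outcome of random candidate action sequences and picks the best one.
# The dynamics model and reward function are hypothetical placeholders.
import torch
import torch.nn as nn

state_dim, action_dim, horizon, n_candidates = 8, 2, 5, 64

# Hypothetical learned dynamics: predicts the next state from (state, action).
dynamics = nn.Sequential(
    nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, state_dim)
)

def reward(state):
    # Placeholder objective: stay close to the origin.
    return -state.pow(2).sum(dim=-1)

@torch.no_grad()
def plan(state):
    # Sample candidate action sequences and imagine their trajectories.
    actions = torch.randn(n_candidates, horizon, action_dim)
    s = state.expand(n_candidates, state_dim)
    total = torch.zeros(n_candidates)
    for t in range(horizon):
        s = dynamics(torch.cat([s, actions[:, t]], dim=-1))
        total += reward(s)
    best = total.argmax()
    return actions[best, 0]  # execute only the first action (MPC-style)

first_action = plan(torch.zeros(1, state_dim))
```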
- Towards Embodied Agentic AI: Review and Classification of LLM- and VLM-Driven Robot Autonomy and Interaction [0.4786416643636131]
Foundation models, including large language models (LLMs) and vision-language models (VLMs), have recently enabled novel approaches to robot autonomy and human-robot interfaces. In parallel, vision-language-action models (VLAs) or large behavior models (LBMs) are increasing the dexterity and capabilities of robotic systems.
arXiv Detail & Related papers (2025-08-07T11:48:03Z)
- Parallels Between VLA Model Post-Training and Human Motor Learning: Progress, Challenges, and Trends [11.678954304546988]
Vision-language-action (VLA) models extend vision-language models (VLMs) with action generation. This paper reviews post-training strategies for VLA models through the lens of human motor learning.
arXiv Detail & Related papers (2025-06-26T03:06:57Z)
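Post-training in the sense of the entry above typically means adapting a pretrained generalist policy to a target robot or task from a small set of demonstrations. The sketch below shows the simplest such recipe, supervised behavior cloning; the policy network and the synthetic "demonstrations" are hypothetical placeholders, not the method of any surveyed paper.

```python
# Minimal post-training (fine-tuning) sketch: adapt a pretrained policy
# to demonstration data with a behavior-cloning loss. The policy and the
# synthetic demonstrations are hypothetical placeholders.
import torch
import torch.nn as nn

obs_dim, action_dim = 32, 7
policy = nn.Sequential(
    nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, action_dim)
)  # stands in for a pretrained VLA policy

# Synthetic demonstrations: (observation, expert action) pairs.
demo_obs = torch.randn(256, obs_dim)
demo_act = torch.randn(256, action_dim)

optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for epoch in range(10):
    # Behavior cloning: regress the expert action from the observation.
    loss = loss_fn(policy(demo_obs), demo_act)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```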
- World Models for Cognitive Agents: Transforming Edge Intelligence in Future Networks [55.90051810762702]
We present a comprehensive overview of world models, highlighting their architecture, training paradigms, and applications across prediction, generation, planning, and causal reasoning. We propose Wireless Dreamer, a novel world model-based reinforcement learning framework tailored for wireless edge intelligence optimization.
arXiv Detail & Related papers (2025-05-31T06:43:00Z)
- A Survey of World Models for Autonomous Driving [63.33363128964687]
Recent breakthroughs in autonomous driving have been propelled by advances in robust world modeling. World models offer high-fidelity representations of the driving environment that integrate multi-sensor data, semantic cues, and temporal dynamics. This paper systematically reviews recent advances in world models for autonomous driving.
arXiv Detail & Related papers (2025-01-20T04:00:02Z)
- Benchmarking Vision, Language, & Action Models on Robotic Learning Tasks [20.93006455952299]
Vision-language-action (VLA) models represent a promising direction for developing general-purpose robotic systems. We present a comprehensive evaluation framework and benchmark suite for assessing VLA models.
arXiv Detail & Related papers (2024-11-04T18:01:34Z)
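Benchmark suites like the one in the entry above generally reduce VLA evaluation to rollouts: run the policy for a fixed number of episodes per task and report success rates. The skeletal harness below assumes a Gym-like step interface; the DummyEnv and the trivial policy are hypothetical placeholders, not any benchmark's actual API.

```python
# Skeletal VLA evaluation harness: roll a policy out for several episodes
# per task and report per-task success rates. Environment and policy are
# hypothetical stand-ins for a real simulator and a trained model.
import random

class DummyEnv:
    """Placeholder environment that ends each episode with a coin-flip outcome."""
    def __init__(self, task):
        self.task = task
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0]  # dummy observation

    def step(self, action):
        self.t += 1
        done = self.t >= 10
        success = done and random.random() < 0.5
        return [0.0], done, success

def evaluate(policy, tasks, episodes_per_task=20, max_steps=50):
    """Roll the policy out and report the success rate per task."""
    results = {}
    for task in tasks:
        successes = 0
        for _ in range(episodes_per_task):
            env = DummyEnv(task)
            obs = env.reset()
            for _ in range(max_steps):
                obs, done, success = env.step(policy(obs))
                if done:
                    successes += success
                    break
        results[task] = successes / episodes_per_task
    return results

print(evaluate(lambda obs: 0, ["pick_cube", "open_drawer"]))
```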
- A Survey on Vision-Language-Action Models for Embodied AI [71.16123093739932]
Embodied AI is widely recognized as a key element of artificial general intelligence. A new category of multimodal models has emerged to address language-conditioned robotic tasks in embodied AI. We present the first survey on vision-language-action models for embodied AI.
arXiv Detail & Related papers (2024-05-23T01:43:54Z)
- Forging Vision Foundation Models for Autonomous Driving: Challenges, Methodologies, and Opportunities [59.02391344178202]
Vision foundation models (VFMs) serve as potent building blocks for a wide range of AI applications.
The scarcity of comprehensive training data, the need for multi-sensor integration, and the diverse task-specific architectures pose significant obstacles to the development of VFMs.
This paper delves into the critical challenge of forging VFMs tailored specifically for autonomous driving, while also outlining future directions.
arXiv Detail & Related papers (2024-01-16T01:57:24Z)