Towards Autonomous Micromobility through Scalable Urban Simulation
- URL: http://arxiv.org/abs/2505.00690v1
- Date: Thu, 01 May 2025 17:52:29 GMT
- Title: Towards Autonomous Micromobility through Scalable Urban Simulation
- Authors: Wayne Wu, Honglin He, Chaoyuan Zhang, Jack He, Seth Z. Zhao, Ran Gong, Quanyi Li, Bolei Zhou,
- Abstract summary: Current micromobility depends mostly on human manual operation (in-person or remote control). In this work, we present a scalable urban simulation solution to advance autonomous micromobility.
- Score: 52.749987132021324
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Micromobility, which utilizes lightweight mobile machines moving in urban public spaces, such as delivery robots and mobility scooters, emerges as a promising alternative to vehicular mobility. Current micromobility depends mostly on human manual operation (in-person or remote control), which raises safety and efficiency concerns when navigating busy urban environments full of unpredictable obstacles and pedestrians. Assisting humans with AI agents in maneuvering micromobility devices presents a viable solution for enhancing safety and efficiency. In this work, we present a scalable urban simulation solution to advance autonomous micromobility. First, we build URBAN-SIM - a high-performance robot learning platform for large-scale training of embodied agents in interactive urban scenes. URBAN-SIM contains three critical modules: Hierarchical Urban Generation pipeline, Interactive Dynamics Generation strategy, and Asynchronous Scene Sampling scheme, to improve the diversity, realism, and efficiency of robot learning in simulation. Then, we propose URBAN-BENCH - a suite of essential tasks and benchmarks to gauge various capabilities of the AI agents in achieving autonomous micromobility. URBAN-BENCH includes eight tasks based on three core skills of the agents: Urban Locomotion, Urban Navigation, and Urban Traverse. We evaluate four robots with heterogeneous embodiments, such as wheeled and legged robots, across these tasks. Experiments on diverse terrains and urban structures reveal each robot's strengths and limitations.
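The abstract's Asynchronous Scene Sampling scheme can be pictured as workers that sample and populate scenes independently so the learner never waits on any single environment. Below is a minimal, hypothetical sketch of that idea; all names, the toy scene model, and the use of threads are illustrative assumptions, not URBAN-SIM's actual API.

```python
# Hypothetical sketch of asynchronous scene sampling for parallel robot
# learning. The scene model stands in for a procedural generation pipeline.
import queue
import random
import threading

def sample_scene(rng):
    """Procedurally sample a toy urban scene (stand-in for a
    hierarchical urban generation pipeline)."""
    return {
        "terrain": rng.choice(["flat", "slope", "stairs"]),
        "num_pedestrians": rng.randint(0, 20),
    }

def scene_worker(worker_id, out_queue, n_scenes):
    """Each worker samples scenes independently, so a slow scene in one
    worker never blocks the others."""
    rng = random.Random(worker_id)
    for _ in range(n_scenes):
        out_queue.put(sample_scene(rng))

def collect_scenes(num_workers=4, scenes_per_worker=8):
    q = queue.Queue()
    workers = [
        threading.Thread(target=scene_worker, args=(i, q, scenes_per_worker))
        for i in range(num_workers)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return [q.get() for _ in range(num_workers * scenes_per_worker)]

scenes = collect_scenes()
print(len(scenes))  # 32 scenes gathered from 4 asynchronous workers
```

In a real platform the workers would be GPU-parallel simulator instances rather than Python threads; the sketch only shows the decoupling of scene sampling from the consumer.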
Related papers
- MobileCity: An Efficient Framework for Large-Scale Urban Behavior Simulation [22.340422693575547]
We present a virtual city that features multiple functional buildings and transportation modes. We then conduct extensive surveys to model behavioral choices and mobility preferences among population groups. We introduce a simulation framework that captures the complexity of urban mobility while remaining scalable, enabling the simulation of over 4,000 agents.
arXiv Detail & Related papers (2025-04-18T07:01:05Z)
- VertiFormer: A Data-Efficient Multi-Task Transformer for Off-Road Robot Mobility [49.512339092493384]
VertiFormer is a novel data-efficient multi-task Transformer model trained with only one hour of data. Our experiments offer insights into effectively utilizing Transformers for off-road robot mobility with limited data.
arXiv Detail & Related papers (2025-02-01T20:21:00Z)
- MetaUrban: An Embodied AI Simulation Platform for Urban Micromobility [52.0930915607703]
Recent advances in Robotics and Embodied AI mean that public urban spaces are no longer exclusive to humans.
Micromobility enabled by AI for short-distance travel in public urban spaces plays a crucial role in the future transportation system.
We present MetaUrban, a compositional simulation platform for AI-driven urban micromobility research.
arXiv Detail & Related papers (2024-07-11T17:56:49Z)
- DrEureka: Language Model Guided Sim-To-Real Transfer [64.14314476811806]
Transferring policies learned in simulation to the real world is a promising strategy for acquiring robot skills at scale.
In this paper, we investigate using Large Language Models (LLMs) to automate and accelerate sim-to-real design.
Our approach is capable of solving novel robot tasks, such as quadruped balancing and walking atop a yoga ball.
arXiv Detail & Related papers (2024-06-04T04:53:05Z)
- Learning Robust Autonomous Navigation and Locomotion for Wheeled-Legged Robots [50.02055068660255]
Navigating urban environments poses unique challenges for robots, necessitating innovative solutions for locomotion and navigation.
This work introduces a fully integrated system comprising adaptive locomotion control, mobility-aware local navigation planning, and large-scale path planning within the city.
Using model-free reinforcement learning (RL) techniques and privileged learning, we develop a versatile locomotion controller.
Our controllers are integrated into a large-scale urban navigation system and validated by autonomous, kilometer-scale navigation missions conducted in Zurich, Switzerland, and Seville, Spain.
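The privileged-learning idea mentioned above is commonly realized as teacher-student distillation: a teacher policy trained with simulator-only state supervises a student that sees only deployable observations. The following is a deliberately toy sketch of that pattern; the policies, the single-gain student, and the friction variable are all illustrative assumptions, not this paper's implementation.

```python
# Toy teacher-student (privileged learning) sketch. The teacher reads
# privileged simulator state; the student is distilled from observations only.
import random

def teacher_policy(obs, privileged):
    # The teacher reacts directly to privileged state (e.g. ground friction).
    return obs * (1.0 if privileged["friction"] > 0.5 else 0.5)

def train_student(dataset):
    # Distill the teacher into a student that only sees `obs`:
    # fit one gain k minimizing sum((k*obs - action)^2) via least squares.
    num = sum(obs * act for obs, act, _ in dataset)
    den = sum(obs * obs for obs, act, _ in dataset)
    return num / den

rng = random.Random(0)
data = []
for _ in range(1000):
    obs = rng.uniform(-1, 1)
    priv = {"friction": rng.uniform(0, 1)}
    data.append((obs, teacher_policy(obs, priv), priv))

k = train_student(data)
# With friction uniform on [0, 1], the best single gain lies between the
# teacher's two regimes (0.5 and 1.0), averaging their behavior.
print(0.5 < k < 1.0)  # True
```

Real systems replace the single gain with a neural student and an observation history, but the distillation objective is the same shape.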
arXiv Detail & Related papers (2024-05-03T00:29:20Z)
- MTAC: Hierarchical Reinforcement Learning-based Multi-gait Terrain-adaptive Quadruped Controller [12.300578189051963]
Control of quadruped robots in dynamic and rough terrain environments is a challenging problem due to the many degrees of freedom of these robots.
Current locomotion controllers for quadrupeds are limited in their ability to produce multiple adaptive gaits and to solve tasks in a time- and resource-efficient manner, and they require tedious training and manual tuning procedures.
We propose MTAC, a multi-gait terrain-adaptive controller that utilizes a hierarchical reinforcement learning (HRL) approach while being time- and memory-efficient.
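A hierarchical controller of the kind MTAC describes splits decision-making into a high-level gait selector and a low-level motor controller. The sketch below is a minimal, hypothetical illustration of that two-level structure; the gait names, terrain mapping, and stride-frequency model are assumptions for demonstration only.

```python
# Minimal two-level hierarchy: high-level policy picks a gait from terrain,
# low-level controller executes it (toy model: one stride frequency per gait).
def high_level_policy(terrain):
    """Map a terrain observation to a discrete gait choice."""
    gait_for_terrain = {"flat": "trot", "rocky": "crawl", "slope": "walk"}
    return gait_for_terrain.get(terrain, "walk")  # fall back to a safe gait

def low_level_controller(gait, phase):
    """Produce a joint-space command scale for the chosen gait."""
    frequency = {"trot": 2.0, "walk": 1.0, "crawl": 0.5}[gait]
    return frequency * phase

commands = [low_level_controller(high_level_policy(t), 1.0)
            for t in ["flat", "rocky", "slope", "ice"]]
print(commands)  # [2.0, 0.5, 1.0, 1.0]
```

In an HRL setting both levels would be learned policies operating at different timescales; the dictionary lookups stand in for those policies.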
arXiv Detail & Related papers (2023-11-01T18:17:47Z)
- Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, in sim-to-sim transfer, and in sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z)
- Proficiency Constrained Multi-Agent Reinforcement Learning for Environment-Adaptive Multi UAV-UGV Teaming [2.745883395089022]
Mixed aerial and ground robot teams are widely used for disaster rescue, social security, precision agriculture, and military missions.
This paper develops a novel teaming method, proficiency-aware multi-agent deep reinforcement learning (Mix-RL), to guide ground and aerial cooperation.
Mix-RL exploits robot capabilities while remaining aware of how those capabilities adapt to task requirements and environmental conditions.
arXiv Detail & Related papers (2020-02-10T16:19:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.