AirSim360: A Panoramic Simulation Platform within Drone View
- URL: http://arxiv.org/abs/2512.02009v1
- Date: Mon, 01 Dec 2025 18:59:30 GMT
- Title: AirSim360: A Panoramic Simulation Platform within Drone View
- Authors: Xian Ge, Yuling Pan, Yuhang Zhang, Xiang Li, Weijun Zhang, Dizhe Zhang, Zhaoliang Wan, Xin Lin, Xiangkai Zhang, Juntao Liang, Jason Li, Wenjie Jiang, Bo Du, Ming-Hsuan Yang, Lu Qi,
- Abstract summary: AirSim360 is a simulation platform for omnidirectional data from aerial viewpoints. AirSim360 focuses on three key aspects, including a render-aligned data and labeling paradigm for pixel-level geometric, semantic, and entity-level understanding. Unlike existing simulators, this work is the first to systematically model the 4D real world under an omnidirectional setting.
- Score: 63.238263531772446
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The field of 360-degree omnidirectional understanding has been receiving increasing attention for advancing spatial intelligence. However, the lack of large-scale and diverse data remains a major limitation. In this work, we propose AirSim360, a simulation platform for omnidirectional data from aerial viewpoints, enabling wide-ranging scene sampling with drones. Specifically, AirSim360 focuses on three key aspects: a render-aligned data and labeling paradigm for pixel-level geometric, semantic, and entity-level understanding; an interactive pedestrian-aware system for modeling human behavior; and an automated trajectory generation paradigm to support navigation tasks. Furthermore, we collect more than 60K panoramic samples and conduct extensive experiments across various tasks to demonstrate the effectiveness of our simulator. Unlike existing simulators, our work is the first to systematically model the 4D real world under an omnidirectional setting. The entire platform, including the toolkit, plugins, and collected datasets, will be made publicly available at https://insta360-research-team.github.io/AirSim360-website.
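Omnidirectional samples like those AirSim360 collects are conventionally stored as equirectangular panoramas, where each pixel corresponds to a viewing direction on the sphere. As an illustrative sketch only (the paper does not specify its projection conventions; axis layout and image size here are assumptions), the standard longitude/latitude mapping from a 3D direction to equirectangular pixel coordinates looks like this:

```python
import numpy as np

def dir_to_equirect(d, width=2048, height=1024):
    """Map unit direction vectors of shape (N, 3) to equirectangular
    pixel coordinates (u, v).

    Illustrative only: uses the common convention of +z forward and
    +y up; AirSim360's actual conventions may differ.
    """
    d = d / np.linalg.norm(d, axis=-1, keepdims=True)
    lon = np.arctan2(d[..., 0], d[..., 2])        # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))    # latitude in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width         # column: wraps horizontally
    v = (0.5 - lat / np.pi) * height              # row: pole-to-pole
    return u, v

u, v = dir_to_equirect(np.array([[0.0, 0.0, 1.0]]))
# the forward direction maps to the image center
```

The inverse of this mapping is what lets pixel-level geometric and semantic labels rendered in perspective views be aligned to the panorama.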
Related papers
- 360Anything: Geometry-Free Lifting of Images and Videos to 360° [51.50120114305155]
Existing approaches rely on explicit geometric alignment between the perspective and the equirectangular projection space. We propose 360Anything, a geometry-free framework built upon pre-trained diffusion transformers. Our approach achieves state-of-the-art performance on both image and video perspective-to-360 generation.
arXiv Detail & Related papers (2026-01-22T18:45:59Z) - GaussGym: An open-source real-to-sim framework for learning locomotion from pixels [78.05453137978132]
We present a novel approach for photorealistic robot simulation that integrates 3D Gaussian Splatting as a drop-in renderer within vectorized physics simulators. This enables unprecedented speed, exceeding 100,000 steps per second on a consumer GPU. We additionally demonstrate its applicability in a sim-to-real robotics setting.
arXiv Detail & Related papers (2025-10-17T06:34:52Z) - AirScape: An Aerial Generative World Model with Motion Controllability [29.696659138543136]
AirScape is the first world model designed for six-degree-of-freedom aerial agents. It predicts future observations based on current visual inputs and motion intentions.
arXiv Detail & Related papers (2025-07-10T16:05:30Z) - Leader360V: The Large-scale, Real-world 360 Video Dataset for Multi-task Learning in Diverse Environment [19.70383859926191]
Leader360V is the first large-scale, labeled real-world 360 video dataset for instance segmentation and tracking. The dataset offers high scene diversity, ranging from indoor and urban settings to natural and dynamic outdoor scenes. Experiments confirm that Leader360V significantly enhances model performance for 360 video segmentation and tracking.
arXiv Detail & Related papers (2025-06-17T07:37:08Z) - TartanGround: A Large-Scale Dataset for Ground Robot Perception and Navigation [19.488886693695946]
TartanGround is a large-scale, multi-modal dataset to advance the perception and autonomy of ground robots. We collect 910 trajectories across 70 environments, resulting in 1.5 million samples. TartanGround can serve as a testbed for training and evaluation of a broad range of learning-based tasks.
arXiv Detail & Related papers (2025-05-15T20:35:06Z) - OpenFly: A Comprehensive Platform for Aerial Vision-Language Navigation [49.697035403548966]
Vision-Language Navigation (VLN) aims to guide agents by leveraging language instructions and visual cues, playing a pivotal role in embodied AI. We propose OpenFly, a platform comprising various rendering engines, a versatile toolchain, and a large-scale benchmark for aerial VLN. We construct a large-scale aerial VLN dataset with 100k trajectories, covering diverse heights and lengths across 18 scenes.
arXiv Detail & Related papers (2025-02-25T09:57:18Z) - HUGSIM: A Real-Time, Photo-Realistic and Closed-Loop Simulator for Autonomous Driving [48.84595398410479]
HUGSIM is a closed-loop, photo-realistic, and real-time simulator for evaluating autonomous driving algorithms. We tackle challenges of novel view synthesis in closed-loop scenarios, including viewpoint extrapolation and 360-degree vehicle rendering. HUGSIM offers a comprehensive benchmark across more than 70 sequences from KITTI-360, nuScenes, and PandaSet, along with over 400 varying scenarios.
arXiv Detail & Related papers (2024-12-02T17:07:59Z) - Autonomous Marker-less Rapid Aerial Grasping [5.892028494793913]
We propose a vision-based system for autonomous rapid aerial grasping.
We generate a dense point cloud of the detected objects and perform geometry-based grasp planning.
We show the first use of geometry-based grasping techniques with a flying platform.
arXiv Detail & Related papers (2022-11-23T16:25:49Z) - DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.