Synthetic Data for Robust Runway Detection
- URL: http://arxiv.org/abs/2510.20349v1
- Date: Thu, 23 Oct 2025 08:48:37 GMT
- Title: Synthetic Data for Robust Runway Detection
- Authors: Estelle Chigot, Dennis G. Wilson, Meriem Ghrib, Fabrice Jimenez, Thomas Oberlin
- Abstract summary: We propose an image generation approach based on a commercial flight simulator that complements a few annotated real images. By controlling the image generation and the integration of real and synthetic data, we show that standard object detection models can achieve accurate predictions.
- Score: 3.4536142947507478
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep vision models are now mature enough to be integrated into industrial and possibly critical applications such as autonomous navigation. Yet, collecting and labeling data to train such models requires too much effort and cost for a single company or product. This drawback is even more significant in critical applications, where training data must cover all possible conditions, including rare scenarios. From this perspective, generating synthetic images is an appealing solution, since it allows cheap yet reliable coverage of all conditions and environments, provided the impact of the synthetic-to-real distribution shift is mitigated. In this article, we consider the case of runway detection, a critical component of the autonomous landing systems developed by aircraft manufacturers. We propose an image generation approach based on a commercial flight simulator that complements a few annotated real images. By controlling the image generation and the integration of real and synthetic data, we show that standard object detection models can achieve accurate predictions. We also evaluate their robustness to adverse conditions, in our case nighttime images, which were not represented in the real data, and show the benefit of a customized domain adaptation strategy.
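The core data strategy in the abstract, supplementing a few annotated real images with many simulator renders at a controlled mixing ratio, can be sketched as follows. This is a minimal illustration only: `build_training_set`, the 80% synthetic ratio, and the file names are assumptions for the sketch, not the paper's exact recipe.

```python
import random

def build_training_set(real, synthetic, synth_ratio=0.8, seed=0):
    """Mix a small pool of real annotated images with simulator images
    so that roughly `synth_ratio` of the final set is synthetic.
    Hypothetical helper; the paper's precise sampling scheme differs."""
    rng = random.Random(seed)
    # Number of synthetic images needed to reach the target ratio.
    n_synth = int(len(real) * synth_ratio / (1 - synth_ratio))
    chosen = rng.sample(synthetic, min(n_synth, len(synthetic)))
    mixed = real + chosen
    rng.shuffle(mixed)
    return mixed

real_imgs = [f"real_{i}.png" for i in range(10)]    # few annotated real images
synth_imgs = [f"sim_{i}.png" for i in range(1000)]  # cheap simulator renders
train = build_training_set(real_imgs, synth_imgs, synth_ratio=0.8)
print(len(train))  # 10 real + 40 synthetic = 50
```

A standard object detector (e.g. a YOLO-family model) would then be trained on the mixed set; the abstract's point is that controlling this integration lets such off-the-shelf models reach accurate predictions despite the small real pool.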
Related papers
- Mirage2Matter: A Physically Grounded Gaussian World Model from Video [87.9732484393686]
We present Simulate Anything, a graphics-driven world modeling and simulation framework.
Our approach reconstructs real-world environments into a photorealistic scene representation using 3D Gaussian Splatting (3DGS).
We then leverage generative models to recover a physically realistic representation and integrate it into a simulation environment via a precision calibration target.
arXiv Detail & Related papers (2026-01-24T07:43:57Z)
- A Synthetic Dataset for Manometry Recognition in Robotic Applications [0.686108371431346]
We propose a hybrid data synthesis pipeline that integrates procedural rendering and AI-driven video generation.
A YOLO-based detector trained on a composite dataset, combining real and synthetic data, outperformed models trained solely on real images.
arXiv Detail & Related papers (2025-08-24T17:52:13Z)
- Synthetic Similarity Search in Automotive Production [0.4499833362998487]
We propose a novel image classification pipeline that combines similarity search using a vision-based foundation model with synthetic data.
We evaluate this approach in eight real-world inspection scenarios and demonstrate that it meets the high performance requirements of production environments.
arXiv Detail & Related papers (2025-05-12T06:10:48Z)
- Where's the liability in the Generative Era? Recovery-based Black-Box Detection of AI-Generated Content [53.93606081932928]
We introduce a novel black-box detection framework that requires only API access.
We measure the likelihood that the image was generated by the model itself.
For black-box models that do not support masked image inputs, we incorporate a cost-efficient surrogate model trained to align with the target model distribution.
arXiv Detail & Related papers (2025-05-02T05:11:35Z)
- Fully-Synthetic Training for Visual Quality Inspection in Automotive Production [0.4915744683251149]
We propose a pipeline for generating synthetic images using domain randomization.
We evaluate our approach in three real inspection scenarios and demonstrate that an object detection model trained solely on synthetic data can outperform models trained on real images.
arXiv Detail & Related papers (2025-03-12T12:58:30Z)
- Drive-1-to-3: Enriching Diffusion Priors for Novel View Synthesis of Real Vehicles [81.29018359825872]
This paper consolidates a set of good practices to finetune large pretrained models for a real-world task.
Specifically, we develop several strategies to account for discrepancies between the synthetic data and real driving data.
Our insights lead to effective finetuning that results in a 68.8% reduction in FID for novel view synthesis over prior art.
arXiv Detail & Related papers (2024-12-19T03:39:13Z)
- ContRail: A Framework for Realistic Railway Image Synthesis using ControlNet [39.58317527488534]
Image synthesis aims to address this limitation through the design of intelligent models capable of creating original and realistic images.
We propose the ContRail framework based on the novel Stable Diffusion model ControlNet.
We experiment with the task of synthetic railway image generation, where we improve performance on rail-specific tasks.
arXiv Detail & Related papers (2024-12-09T18:34:49Z)
- XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a synthetic dataset for novel driving view synthesis evaluation.
It includes testing images captured by deviating from the training trajectory by 1-4 meters.
We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z)
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- UAV-Sim: NeRF-based Synthetic Data Generation for UAV-based Perception [62.71374902455154]
We leverage recent advancements in neural rendering to improve static and dynamic novel-view UAV-based image rendering.
We demonstrate a considerable performance boost when a state-of-the-art detection model is optimized primarily on hybrid sets of real and synthetic data.
arXiv Detail & Related papers (2023-10-25T00:20:37Z)
- LARD - Landing Approach Runway Detection -- Dataset for Vision Based Landing [2.7400353551392853]
We present a dataset of high-quality aerial images for the task of runway detection during approach and landing phases.
Most of the dataset is composed of synthetic images, but we also provide manually labelled images from real landing footage.
This dataset paves the way for further research such as the analysis of dataset quality or the development of models to cope with the detection tasks.
arXiv Detail & Related papers (2023-04-05T08:25:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including any of the information) and is not responsible for any consequences of its use.