LROC-PANGU-GAN: Closing the Simulation Gap in Learning Crater
Segmentation with Planetary Simulators
- URL: http://arxiv.org/abs/2310.02781v1
- Date: Wed, 4 Oct 2023 12:52:38 GMT
- Title: LROC-PANGU-GAN: Closing the Simulation Gap in Learning Crater
Segmentation with Planetary Simulators
- Authors: Jaewon La, Jaime Phadke, Matt Hutton, Marius Schwinning, Gabriele De
Canio, Florian Renk, Lars Kunze, Matthew Gadd
- Abstract summary: It is critical for probes landing on foreign planetary bodies to be able to robustly identify and avoid hazards.
Recent applications of deep learning to this problem show promising results.
These models are, however, often learned with explicit supervision over annotated datasets.
This paper introduces a system to close this "realism" gap while retaining label fidelity.
- Score: 5.667566032625522
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is critical for probes landing on foreign planetary bodies to be able to
robustly identify and avoid hazards: steep cliffs or deep craters, for
example, can pose significant risks to a probe's landing and operational
success. Recent applications of deep learning to this problem show promising
results. These models are, however, often learned with explicit supervision
over annotated datasets. These human-labelled crater databases, such as from
the Lunar Reconnaissance Orbiter Camera (LROC), may lack consistency and
quality, undermining model performance - as incomplete and/or inaccurate labels
introduce noise into the supervisory signal, which encourages the model to
learn incorrect associations and results in the model making unreliable
predictions. Physics-based simulators, such as the Planet and Asteroid Natural
Scene Generation Utility, have, in contrast, perfect ground truth, as the
internal state that they use to render scenes is known with exactness. However,
they introduce a serious simulation-to-real domain gap - because of fundamental
differences between the simulated environment and the real-world arising from
modelling assumptions, unaccounted for physical interactions, environmental
variability, etc. Therefore, models trained on their outputs suffer when
deployed in the face of realism they have not encountered in their training
data distributions. In this paper, we therefore introduce a system to close
this "realism" gap while retaining label fidelity. We train a CycleGAN model to
synthesise LROC from Planet and Asteroid Natural Scene Generation Utility
(PANGU) images. We show that these synthesised images improve the training of a
downstream crater segmentation network: segmentation performance on a test set
of real LROC images is improved compared to training on simulated PANGU images
alone.
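The key property of the CycleGAN approach described above is cycle consistency: translating a simulated PANGU image to an LROC-like image and back should recover the original, which ties the translated image to the simulator's known ground truth and keeps the crater labels valid. The sketch below illustrates that loss term in isolation; the linear "generators" and array shapes are stand-in assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "generators": simple linear maps acting as
# PANGU -> LROC-like (G) and LROC-like -> PANGU (F).
# A real CycleGAN uses convolutional networks here.
W_G = rng.normal(scale=0.1, size=(16, 16))
W_F = rng.normal(scale=0.1, size=(16, 16))

def G(x):
    """Translate simulated (PANGU) patches toward realistic (LROC-like)."""
    return x @ W_G

def F(y):
    """Translate realistic (LROC-like) patches back toward simulated."""
    return y @ W_F

def cycle_consistency_loss(x_pangu, y_lroc):
    """L1 cycle loss: F(G(x)) should reconstruct x, and G(F(y)) should
    reconstruct y. Minimising this keeps translated images tied to the
    simulator's content, which is why the PANGU labels remain usable."""
    forward = np.abs(F(G(x_pangu)) - x_pangu).mean()
    backward = np.abs(G(F(y_lroc)) - y_lroc).mean()
    return forward + backward

x = rng.normal(size=(4, 16))  # batch of flattened PANGU patches (assumed shape)
y = rng.normal(size=(4, 16))  # batch of flattened LROC patches (assumed shape)
loss = cycle_consistency_loss(x, y)
print(f"cycle-consistency loss: {loss:.4f}")
```

In the full method this term is combined with adversarial losses for each translation direction; only the translated images (not real LROC labels) are then used to supervise the downstream segmentation network.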
Related papers
- Bench2Drive-R: Turning Real World Data into Reactive Closed-Loop Autonomous Driving Benchmark by Generative Model [63.336123527432136]
We introduce Bench2Drive-R, a generative framework that enables reactive closed-loop evaluation.
Unlike existing video generative models for autonomous driving, the proposed designs are tailored for interactive simulation.
We compare the generation quality of Bench2Drive-R with existing generative models and achieve state-of-the-art performance.
arXiv Detail & Related papers (2024-12-11T06:35:18Z)
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing pose serious risks for generative models.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z)
- SPARTAN: A Sparse Transformer Learning Local Causation [63.29645501232935]
Causal structures play a central role in world models that flexibly adapt to changes in the environment.
We present the SPARse TrANsformer World model (SPARTAN), a Transformer-based world model that learns local causal structures between entities in a scene.
By applying sparsity regularisation on the attention pattern between object-factored tokens, SPARTAN identifies sparse local causal models that accurately predict future object states.
arXiv Detail & Related papers (2024-11-11T11:42:48Z)
- Are NeRFs ready for autonomous driving? Towards closing the real-to-simulation gap [6.393953433174051]
We propose a novel perspective for addressing the real-to-simulated data gap.
We conduct the first large-scale investigation into the real-to-simulated data gap in an autonomous driving setting.
Our results show notable improvements in model robustness to simulated data, even improving real-world performance in some cases.
arXiv Detail & Related papers (2024-03-24T11:09:41Z)
- RANRAC: Robust Neural Scene Representations via Random Ray Consensus [12.161889666145127]
RANdom RAy Consensus (RANRAC) is an efficient approach to eliminate the effect of inconsistent data.
We formulate a fuzzy adaption of the RANSAC paradigm, enabling its application to large scale models.
Results indicate significant improvements compared to state-of-the-art robust methods for novel-view synthesis.
arXiv Detail & Related papers (2023-12-15T13:33:09Z)
- Strategic Geosteering Workflow with Uncertainty Quantification and Deep Learning: A Case Study on the Goliat Field [0.0]
This paper presents a practical workflow consisting of offline and online phases.
The offline phase includes training and building of an uncertain prior near-well geo-model.
The online phase uses the flexible iterative ensemble smoother (FlexIES) to perform real-time assimilation of extra-deep electromagnetic data.
arXiv Detail & Related papers (2022-10-27T15:38:26Z)
- PCGen: Point Cloud Generator for LiDAR Simulation [10.692184635629792]
Existing methods generate data that are noisier and more complete than real point clouds.
We propose FPA raycasting and surrogate model raydrop.
With minimal training data, the surrogate model can generalize to different geographies and scenes.
Results show that object detection models trained on simulated data can achieve results similar to models trained on real data.
arXiv Detail & Related papers (2022-10-17T04:13:21Z)
- Bridging the Gap to Real-World Object-Centric Learning [66.55867830853803]
We show that reconstructing features from models trained in a self-supervised manner is a sufficient training signal for object-centric representations to arise in a fully unsupervised way.
Our approach, DINOSAUR, significantly out-performs existing object-centric learning models on simulated data.
arXiv Detail & Related papers (2022-09-29T15:24:47Z)
- Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
- 6D Camera Relocalization in Visually Ambiguous Extreme Environments [79.68352435957266]
We propose a novel method to reliably estimate the pose of a camera given a sequence of images acquired in extreme environments such as deep seas or extraterrestrial terrains.
Our method achieves comparable performance with state-of-the-art methods on the indoor benchmark (7-Scenes dataset) using only 20% training data.
arXiv Detail & Related papers (2022-07-13T16:40:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.