Semantic Neural Radiance Fields for Multi-Date Satellite Data
- URL: http://arxiv.org/abs/2502.16992v1
- Date: Mon, 24 Feb 2025 09:26:48 GMT
- Title: Semantic Neural Radiance Fields for Multi-Date Satellite Data
- Authors: Valentin Wagner, Sebastian Bullinger, Christoph Bodensteiner, Michael Arens
- Abstract summary: We propose a satellite-specific Neural Radiance Fields (NeRF) model capable of obtaining a three-dimensional semantic representation of the scene. The model derives the output from a set of multi-date satellite images with corresponding pixel-wise semantic labels. We enhance the color prediction by utilizing the semantic information to address temporal image inconsistencies.
- Score: 4.174845397893041
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this work, we propose a satellite-specific Neural Radiance Fields (NeRF) model capable of obtaining a three-dimensional semantic representation (neural semantic field) of the scene. The model derives the output from a set of multi-date satellite images with corresponding pixel-wise semantic labels. We demonstrate the robustness of our approach and its capability to improve noisy input labels. We enhance the color prediction by utilizing the semantic information to address temporal image inconsistencies caused by non-stationary categories such as vehicles. To facilitate further research in this domain, we present a dataset comprising manually generated labels for popular multi-view satellite images. Our code and dataset are available at https://github.com/wagnva/semantic-nerf-for-satellite-data.
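As a rough illustration of the architecture described in the abstract, the following PyTorch sketch shows a NeRF-style MLP extended with a semantic classification head whose logits can be volume-rendered along rays in the same way as color. The layer sizes, the positional-encoding dimension, and the `num_classes` parameter are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class SemanticNeRF(nn.Module):
    """Minimal sketch: a NeRF-style MLP predicting density, color and a
    per-point semantic class distribution. Layer sizes are illustrative."""

    def __init__(self, pos_dim: int = 63, hidden: int = 256, num_classes: int = 5):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)                # volume density
        self.rgb_head = nn.Sequential(                        # per-point color
            nn.Linear(hidden, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )
        self.semantic_head = nn.Linear(hidden, num_classes)   # class logits

    def forward(self, x_encoded: torch.Tensor):
        h = self.trunk(x_encoded)
        sigma = torch.relu(self.sigma_head(h))
        rgb = self.rgb_head(h)
        sem_logits = self.semantic_head(h)   # composited along rays like color
        return sigma, rgb, sem_logits

# Points sampled along rays are positionally encoded before entering the MLP;
# the composited semantic logits are supervised with the pixel-wise labels,
# analogously to the photometric loss on the rendered color.
model = SemanticNeRF()
sigma, rgb, sem = model(torch.randn(1024, 63))
```

In the paper, the predicted semantics are additionally used to make the color prediction robust to transient content such as vehicles; that mechanism is omitted from this sketch.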
Related papers
- SatDepth: A Novel Dataset for Satellite Image Matching [0.0]
We present SatDepth, a novel dataset that provides dense ground-truth correspondences for training image matching frameworks for satellite images.
We benchmark four existing image matching frameworks using our dataset and carry out an ablation study confirming that models trained on our dataset with rotation augmentation outperform models trained on other datasets, with up to a 40% increase in precision.
arXiv Detail & Related papers (2025-03-17T00:14:13Z)
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z)
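One way to read "generating both images and corresponding masks" is to treat the one-hot mask as extra channels and denoise the concatenated tensor jointly. The sketch below shows that idea with a plain DDPM ancestral-sampling loop and a placeholder noise-prediction network; the channel layout, the noise schedule, and the `eps_model` stand-in are assumptions for illustration, not SatSynth's actual pipeline.

```python
import torch

def sample_image_and_mask(eps_model, num_classes=4, size=64, steps=1000, device="cpu"):
    """Minimal DDPM ancestral sampling over a joint (image + one-hot mask) tensor."""
    betas = torch.linspace(1e-4, 0.02, steps, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    channels = 3 + num_classes                   # RGB + one-hot mask channels
    x = torch.randn(1, channels, size, size, device=device)

    for t in reversed(range(steps)):
        eps = eps_model(x, torch.tensor([t], device=device))   # predicted noise
        coef = (1 - alphas[t]) / torch.sqrt(1 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise

    image = x[:, :3]                             # generated image
    mask = x[:, 3:].argmax(dim=1)                # discretize mask channels
    return image, mask

# `eps_model` would be a U-Net trained to predict the added noise; any callable
# with the same signature works for a smoke test:
image, mask = sample_image_and_mask(lambda x, t: torch.zeros_like(x), steps=50)
```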
- Semantics from Space: Satellite-Guided Thermal Semantic Segmentation Annotation for Aerial Field Robots [8.265009823753982]
We present a new method to automatically generate semantic segmentation annotations for thermal imagery captured from an aerial vehicle.
This new capability overcomes the challenge of developing thermal semantic perception algorithms for field robots.
Our approach can produce highly precise semantic segmentation labels using low-resolution satellite land cover data at little to no cost.
arXiv Detail & Related papers (2024-03-21T00:59:35Z)
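The core trick in the entry above is that a georeferenced land-cover product already contains coarse class labels, so an aerial frame whose pixels can be geolocated can inherit those labels by lookup. A minimal sketch of that projection step follows; the simplified affine geotransform and the per-pixel latitude/longitude inputs are assumptions, and the actual method presumably adds registration and refinement on top.

```python
import numpy as np

def labels_from_landcover(landcover: np.ndarray, geotransform, lat: np.ndarray, lon: np.ndarray):
    """Sample a georeferenced land-cover raster at per-pixel (lat, lon) positions.

    landcover:    (H, W) array of integer land-cover classes.
    geotransform: (lon0, lon_per_col, lat0, lat_per_row), a simplified affine
                  mapping from raster indices to geographic coordinates.
    lat, lon:     arrays of the aerial image's per-pixel coordinates.
    """
    lon0, dlon, lat0, dlat = geotransform
    cols = np.round((lon - lon0) / dlon).astype(int)
    rows = np.round((lat - lat0) / dlat).astype(int)
    rows = np.clip(rows, 0, landcover.shape[0] - 1)
    cols = np.clip(cols, 0, landcover.shape[1] - 1)
    return landcover[rows, cols]          # coarse labels for the aerial pixels

# Toy example: a 100x100 land-cover grid sampled at two aerial pixel locations.
lc = np.random.randint(0, 5, size=(100, 100))
labels = labels_from_landcover(lc, (-120.0, 0.001, 40.0, -0.001),
                               lat=np.array([39.99, 39.98]),
                               lon=np.array([-119.99, -119.98]))
```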
- Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery [78.43828998065071]
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks.
Such pre-training techniques have also been explored recently in the remote sensing domain due to the availability of large amounts of unlabelled data.
In this paper, we revisit transformer pre-training and leverage multi-scale information that is effectively utilized with multiple modalities.
arXiv Detail & Related papers (2024-03-08T16:18:04Z)
- DiffusionSat: A Generative Foundation Model for Satellite Imagery [63.2807119794691]
We present DiffusionSat, to date the largest generative foundation model trained on a collection of publicly available large, high-resolution remote sensing datasets.
Our method produces realistic samples and can be used to solve multiple generative tasks, including temporal generation, super-resolution given multi-spectral inputs, and in-painting.
arXiv Detail & Related papers (2023-12-06T16:53:17Z)
- SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery [74.82821342249039]
We present SatMAE, a pre-training framework for temporal or multi-spectral satellite imagery based on the Masked Autoencoder (MAE).
To leverage temporal information, we include a temporal embedding along with independently masking image patches across time.
arXiv Detail & Related papers (2022-07-17T01:35:29Z)
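The two SatMAE ingredients mentioned above, a temporal embedding added to each frame's patch tokens and masking applied independently per acquisition date, can be sketched as follows. The tensor shapes, the sinusoidal temporal encoding, and the masking ratio are illustrative assumptions rather than SatMAE's exact configuration.

```python
import torch

def mask_temporal_patches(tokens: torch.Tensor, timestamps: torch.Tensor, mask_ratio: float = 0.75):
    """tokens: (T, N, D) patch embeddings for T acquisition dates, N patches, dim D.
    timestamps: (T,) scalar times (e.g. days since the first image).
    Returns the visible tokens per frame and the boolean mask that was applied."""
    T, N, D = tokens.shape

    # Simple sinusoidal temporal embedding, broadcast over every patch of a frame.
    freqs = torch.arange(D // 2, dtype=torch.float32)
    angles = timestamps[:, None] / (10000 ** (2 * freqs / D))         # (T, D/2)
    temb = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)  # (T, D)
    tokens = tokens + temb[:, None, :]

    # Mask patches independently for each date: a patch hidden at time t
    # may still be visible at other times.
    num_keep = int(N * (1 - mask_ratio))
    keep_idx = torch.stack([torch.randperm(N)[:num_keep] for _ in range(T)])  # (T, num_keep)
    visible = torch.stack([tokens[t, keep_idx[t]] for t in range(T)])         # (T, num_keep, D)

    mask = torch.ones(T, N, dtype=torch.bool)
    mask.scatter_(1, keep_idx, False)    # False = kept/visible, True = masked
    return visible, mask

visible, mask = mask_temporal_patches(torch.randn(3, 196, 128),
                                      torch.tensor([0.0, 30.0, 90.0]))
```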
- Semantic Image Synthesis via Diffusion Models [174.24523061460704]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto GAN-based approaches.
We propose a novel framework based on DDPM for semantic image synthesis.
arXiv Detail & Related papers (2022-06-30T18:31:51Z)
- Data Generation for Satellite Image Classification Using Self-Supervised Representation Learning [0.0]
We introduce a self-supervised learning technique to create synthetic labels for satellite image patches.
These synthetic labels can be used as a training dataset for existing supervised learning techniques.
In our experiments, we show that models trained on the synthetic labels perform similarly to models trained on the real labels.
arXiv Detail & Related papers (2022-05-28T12:54:34Z)
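The summary above leaves the labeling mechanism open; one common way to turn self-supervised features into synthetic labels is to cluster the embeddings and use the cluster assignments as class labels. The sketch below illustrates that pattern with a stand-in encoder and k-means; the encoder, feature dimension, and cluster count are assumptions, not the paper's specific setup.

```python
import numpy as np
from sklearn.cluster import KMeans

def synthetic_labels(patches: np.ndarray, encoder, num_classes: int = 10) -> np.ndarray:
    """Assign a synthetic label to each satellite image patch by clustering
    self-supervised embeddings. `encoder` maps (N, H, W, C) patches to (N, D)."""
    features = encoder(patches)                        # (N, D) embeddings
    kmeans = KMeans(n_clusters=num_classes, n_init=10, random_state=0)
    return kmeans.fit_predict(features)                # (N,) cluster ids as labels

# Stand-in encoder (mean intensity per patch); a real one would be a network
# pre-trained with a self-supervised objective such as contrastive learning.
patches = np.random.rand(256, 32, 32, 3)
labels = synthetic_labels(
    patches,
    encoder=lambda x: x.reshape(len(x), -1).mean(axis=1, keepdims=True),
)
```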
- Controllable Image Synthesis via SegVAE [89.04391680233493]
A semantic map is a commonly used intermediate representation for conditional image generation.
In this work, we specifically target generating semantic maps given a label-set consisting of desired categories.
The proposed framework, SegVAE, synthesizes semantic maps in an iterative manner using a conditional variational autoencoder.
arXiv Detail & Related papers (2020-07-16T15:18:53Z)
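To make the iterative generation above concrete: given a label set, one category's mask is decoded at a time from a latent sample, conditioned on the canvas produced so far. The sketch below captures only that sampling loop with an untrained toy decoder; the decoder architecture, latent size, and conditioning scheme are illustrative assumptions, not SegVAE's published design.

```python
import torch
import torch.nn as nn

class MaskDecoder(nn.Module):
    """Toy decoder: (latent z, flattened canvas, one-hot category) -> one mask."""
    def __init__(self, num_classes: int, size: int = 32, z_dim: int = 16):
        super().__init__()
        self.size = size
        in_dim = z_dim + num_classes * size * size + num_classes
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, size * size), nn.Sigmoid())

    def forward(self, z, canvas, category_onehot):
        h = torch.cat([z, canvas.flatten(1), category_onehot], dim=1)
        return self.net(h).view(-1, 1, self.size, self.size)

def generate_semantic_map(decoder, label_set, num_classes, size=32, z_dim=16):
    """Iteratively add one category's mask at a time to an empty canvas."""
    canvas = torch.zeros(1, num_classes, size, size)
    with torch.no_grad():                              # sampling only, no training
        for c in label_set:
            z = torch.randn(1, z_dim)                  # prior sample (VAE-style)
            onehot = torch.zeros(1, num_classes)
            onehot[0, c] = 1.0
            canvas[:, c:c + 1] = decoder(z, canvas, onehot)
    return canvas.argmax(dim=1)                        # (1, H, W) label map

decoder = MaskDecoder(num_classes=5)
semantic_map = generate_semantic_map(decoder, label_set=[0, 2, 4], num_classes=5)
```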
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.