Convolutional Neural Processes for Inpainting Satellite Images
- URL: http://arxiv.org/abs/2205.12407v1
- Date: Tue, 24 May 2022 23:29:04 GMT
- Title: Convolutional Neural Processes for Inpainting Satellite Images
- Authors: Alexander Pondaven, Märt Bakler, Donghu Guo, Hamzah Hashim, Martin Ignatov, Harrison Zhu
- Abstract summary: Inpainting involves predicting what is missing based on the known pixels and is an old problem in image processing.
We show ConvNPs can outperform classical methods and state-of-the-art deep learning inpainting models on a scanline inpainting problem for LANDSAT 7 satellite images.
- Score: 56.032183666893246
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread availability of satellite images has allowed researchers to
model complex systems such as disease dynamics. However, many satellite images
have missing values due to measurement defects, which render them unusable
without data imputation. For example, the scanline corrector for the LANDSAT 7
satellite broke down in 2003, resulting in a loss of around 20% of its data.
Inpainting involves predicting what is missing based on the known pixels and is
an old problem in image processing, classically based on PDEs or interpolation
methods, but recent deep learning approaches have shown promise. However, many
of these methods do not explicitly take into account the inherent
spatiotemporal structure of satellite images. In this work, we cast satellite
image inpainting as a natural meta-learning problem, and propose using
convolutional neural processes (ConvNPs) where we frame each satellite image as
its own task or 2D regression problem. We show ConvNPs can outperform classical
methods and state-of-the-art deep learning inpainting models on a scanline
inpainting problem for LANDSAT 7 satellite images, assessed on a variety of in- and
out-of-distribution images.
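To make the per-image framing concrete, the following minimal sketch (not the authors' implementation; the tiny CNN, layer sizes, and scanline spacing are illustrative assumptions standing in for a full ConvNP) treats the observed pixels of one image as the context set and trains by scoring the masked scanline pixels under a predicted per-pixel Gaussian:

```python
import torch
import torch.nn as nn

# A single H x W satellite band is treated as its own 2D regression task:
# the "context" is the observed pixels and the "targets" are the masked
# (scanline) pixels. A small CNN stands in here for a full ConvNP encoder/decoder.

H, W = 64, 64
image = torch.rand(1, 1, H, W)                     # toy single-band image
mask = torch.ones(1, 1, H, W)
mask[:, :, ::8, :] = 0.0                           # simulate missing scanlines

# Gridded context representation: masked values plus the mask itself,
# so the model knows where observations exist.
context = torch.cat([image * mask, mask], dim=1)   # shape (1, 2, H, W)

decoder = nn.Sequential(                           # illustrative architecture only
    nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, kernel_size=3, padding=1),    # per-pixel mean and log-variance
)

out = decoder(context)
mean, log_var = out[:, :1], out[:, 1:]

# Train by maximising the Gaussian likelihood of the held-out target pixels
# (the pixels hidden by the scanline mask).
target_mask = 1.0 - mask
nll = 0.5 * ((image - mean) ** 2 / log_var.exp() + log_var) * target_mask
loss = nll.sum() / target_mask.sum()
loss.backward()
print(float(loss))
```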
Related papers
- Weakly-supervised Camera Localization by Ground-to-satellite Image Registration [52.54992898069471]
We propose a weakly supervised learning strategy for ground-to-satellite image registration.
It derives positive and negative satellite images for each ground image.
We also propose a self-supervision strategy for cross-view image relative rotation estimation.
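The positive/negative pairing can be read as a standard contrastive setup; the sketch below shows a generic triplet-style objective over ground and satellite embeddings (an illustration only, not the loss or encoders used in that paper; all tensors are random stand-ins):

```python
import torch
import torch.nn.functional as F

# Generic contrastive objective for ground-to-satellite matching: pull the
# ground-image embedding towards its "positive" satellite image and push it
# away from "negative" ones. In practice the embeddings would come from
# two trained encoders rather than random tensors.

d = 128
ground = F.normalize(torch.randn(8, d), dim=1)      # 8 ground-view embeddings
positive = F.normalize(torch.randn(8, d), dim=1)    # matched satellite images
negative = F.normalize(torch.randn(8, d), dim=1)    # mismatched satellite images

margin = 0.2
pos_dist = 1.0 - (ground * positive).sum(dim=1)     # cosine distance to positive
neg_dist = 1.0 - (ground * negative).sum(dim=1)     # cosine distance to negative
triplet_loss = F.relu(pos_dist - neg_dist + margin).mean()
print(float(triplet_loss))
```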
arXiv Detail & Related papers (2024-09-10T12:57:16Z)
- DiffusionSat: A Generative Foundation Model for Satellite Imagery [63.2807119794691]
We present DiffusionSat, to date the largest generative foundation model trained on a collection of publicly available large, high-resolution remote sensing datasets.
Our method produces realistic samples and can be used to solve multiple generative tasks including temporal generation, superresolution given multi-spectral inputs and in-painting.
arXiv Detail & Related papers (2023-12-06T16:53:17Z)
- Deep Richardson-Lucy Deconvolution for Low-Light Image Deblurring [48.80983873199214]
We develop a data-driven approach to model the saturated pixels by a learned latent map.
Based on the new model, the non-blind deblurring task can be formulated as a maximum a posteriori (MAP) problem.
To estimate high-quality deblurred images without amplified artifacts, we develop a prior estimation network.
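For reference, a generic MAP formulation of non-blind deblurring (not necessarily the paper's exact model) balances a data term under a known blur kernel K against an image prior Phi for the latent sharp image I given the blurry observation B; a learned network can play the role of the prior term:

```latex
\hat{I} \;=\; \arg\min_{I}\;
  \underbrace{\frac{1}{2\sigma^{2}}\,\lVert B - K \ast I \rVert_{2}^{2}}_{-\log p(B \mid I,\,K)}
  \;+\;
  \underbrace{\lambda\,\Phi(I)}_{-\log p(I)}
```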
arXiv Detail & Related papers (2023-08-10T12:53:30Z)
- Zero shot framework for satellite image restoration [25.163783640750573]
We propose a distortion disentanglement and knowledge distillation framework for satellite image restoration.
Our algorithm requires only two images: the distorted satellite image to be restored and a reference image with similar semantics.
arXiv Detail & Related papers (2023-06-05T14:34:58Z)
- T-former: An Efficient Transformer for Image Inpainting [50.43302925662507]
A class of attention-based network architectures, called transformers, has achieved strong performance in natural language processing.
In this paper, we design a novel attention mechanism whose cost is linear in the image resolution, derived via a Taylor expansion, and based on this attention we build a network called T-former for image inpainting.
Experiments on several benchmark datasets demonstrate that our proposed method achieves state-of-the-art accuracy while maintaining a relatively low number of parameters and computational complexity.
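To see why a Taylor expansion makes attention linear in resolution, note that replacing exp(q·k) with its first-order approximation 1 + q·k lets the key-value sums be precomputed once and shared across all queries. The sketch below is a generic linear-attention illustration (not T-former's exact operator) and checks it against the quadratic computation:

```python
import torch

# First-order Taylor / linear attention sketch: exp(q·k) ≈ 1 + q·k, so
# sum_j (1 + q_i·k_j) v_j can be rewritten around sums precomputed over j,
# giving O(N d^2) cost instead of the O(N^2 d) of full softmax attention.

N, d = 4096, 32                         # N = number of pixels (H*W), d = head dim
q = torch.rand(N, d) / d ** 0.5
k = torch.rand(N, d) / d ** 0.5
v = torch.randn(N, d)

kv = k.t() @ v                          # (d, d): shared across all queries
k_sum = k.sum(dim=0)                    # (d,)
v_sum = v.sum(dim=0)                    # (d,)

numer = v_sum + q @ kv                  # sum_j (1 + q_i·k_j) v_j, shape (N, d)
denom = N + q @ k_sum                   # sum_j (1 + q_i·k_j), shape (N,)
out_linear = numer / denom.unsqueeze(1)

# Reference: the same weights computed explicitly in O(N^2).
weights = 1.0 + q @ k.t()               # (N, N)
out_full = (weights @ v) / weights.sum(dim=1, keepdim=True)
print(torch.allclose(out_linear, out_full, atol=1e-4))
```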
arXiv Detail & Related papers (2023-05-12T04:10:42Z)
- Evaluation of Pre-Trained CNN Models for Geographic Fake Image Detection [20.41074415307636]
We are witnessing the emergence of fake satellite images, which can be misleading or even threatening to national security.
We explore the suitability of several convolutional neural network (CNN) architectures for fake satellite image detection.
This work allows the establishment of new baselines and may be useful for the development of CNN-based methods for fake satellite image detection.
arXiv Detail & Related papers (2022-10-01T20:37:24Z)
- Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image [91.29546868637911]
This paper addresses the problem of vehicle-mounted camera localization by matching a ground-level image with an overhead-view satellite map.
The key idea is to formulate the task as pose estimation and solve it by neural-net based optimization.
Experiments on standard autonomous vehicle localization datasets have confirmed the superiority of the proposed method.
arXiv Detail & Related papers (2022-04-10T19:16:58Z)
- Seamless Satellite-image Synthesis [1.3401746329218014]
While 2D data is cheap and easily available, accurate satellite imagery is expensive and often unavailable or out of date.
Our approach generates seamless textures over arbitrarily large extents which are consistent through scale-space.
arXiv Detail & Related papers (2021-11-05T10:42:24Z)
- Semantic Segmentation of Medium-Resolution Satellite Imagery using Conditional Generative Adversarial Networks [3.4797121357690153]
We propose a Conditional Generative Adversarial Network (CGAN) based approach to image-to-image translation for high-resolution satellite imagery.
We find that the CGAN model outperforms the CNN model of similar complexity by a significant margin on an unseen imbalanced test dataset.
arXiv Detail & Related papers (2020-12-05T18:18:45Z)
- The color out of space: learning self-supervised representations for Earth Observation imagery [10.019106184219515]
We propose to learn meaningful representations from satellite imagery, leveraging its high-dimensional spectral bands to reconstruct visible colors.
We conduct experiments on land cover classification (BigEarthNet) and West Nile Virus detection, showing that colorization is a solid pretext task for training a feature extractor.
arXiv Detail & Related papers (2020-06-22T10:21:36Z)
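As a rough sketch of the colorization pretext idea in the last entry (band counts, architecture, and training loop are illustrative assumptions, not the paper's setup), a small network learns to predict the visible colors from the remaining spectral bands, and its encoder is then reused as a feature extractor for downstream tasks such as land cover classification:

```python
import torch
import torch.nn as nn

# Colorization-as-pretext sketch: predict the visible RGB bands from the
# other spectral bands, then reuse the trained encoder as a feature
# extractor for downstream tasks (e.g. land cover classification).

n_input_bands = 9                                 # non-RGB bands (illustrative count)
H, W = 32, 32
x_spectral = torch.rand(4, n_input_bands, H, W)   # toy batch of band stacks
y_rgb = torch.rand(4, 3, H, W)                    # corresponding visible colors

encoder = nn.Sequential(
    nn.Conv2d(n_input_bands, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
color_head = nn.Conv2d(64, 3, 3, padding=1)       # pretext head: reconstruct RGB

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(color_head.parameters()), lr=1e-3
)
for _ in range(5):                                # a few toy pretext steps
    pred_rgb = color_head(encoder(x_spectral))
    loss = nn.functional.mse_loss(pred_rgb, y_rgb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After pretext training, the encoder provides features for a downstream classifier.
features = encoder(x_spectral).mean(dim=(2, 3))   # global average pooled, shape (4, 64)
print(features.shape)
```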