RSINet: Inpainting Remotely Sensed Images Using Triple GAN Framework
- URL: http://arxiv.org/abs/2202.05988v1
- Date: Sat, 12 Feb 2022 05:19:37 GMT
- Title: RSINet: Inpainting Remotely Sensed Images Using Triple GAN Framework
- Authors: Advait Kumar, Dipesh Tamboli, Shivam Pande, Biplab Banerjee
- Abstract summary: We propose a novel inpainting method that individually focuses on each aspect of an image, such as edges, colour and texture.
Each individual GAN also incorporates an attention mechanism that explicitly extracts spectral and spatial features.
We evaluate our model, along with previous state-of-the-art models, on two well-known remote sensing datasets, Open Cities AI and Earth on Canvas.
- Score: 13.613245876782367
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We tackle the problem of image inpainting in the remote sensing domain.
Remote sensing images possess high resolution and geographical variations that
render conventional inpainting methods less effective. This further entails
the requirement of models with high complexity to sufficiently capture the
spectral, spatial and textural nuances within an image, emerging from its high
spatial variability. To this end, we propose a novel inpainting method that
individually focuses on each aspect of an image such as edges, colour and
texture using a task-specific GAN. Moreover, each individual GAN also
incorporates an attention mechanism that explicitly extracts spectral and
spatial features. To ensure consistent gradient flow, the model uses a residual
learning paradigm, thus simultaneously working with high- and low-level
features. We evaluate our model, along with previous state-of-the-art models,
on two well-known remote sensing datasets, Open Cities AI and Earth on Canvas,
and achieve competitive performance.
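The triple-GAN pipeline itself is not reproduced here, but the residual-plus-attention idea the abstract describes can be illustrated with a minimal NumPy sketch. All function names and the toy attention weightings below are hypothetical illustrations, not the paper's actual layers:

```python
import numpy as np

def spectral_attention(x):
    # Channel-wise (spectral) attention: weight each band by a softmax
    # over its global average response.
    gap = x.mean(axis=(0, 1))            # (C,) global average pooling
    w = np.exp(gap - gap.max())
    w = w / w.sum()                      # softmax over channels
    return x * w                         # reweight each spectral band

def spatial_attention(x):
    # Spatial attention: weight each pixel by a sigmoid of its
    # cross-channel mean response.
    m = x.mean(axis=2, keepdims=True)    # (H, W, 1) per-pixel mean
    return x * (1.0 / (1.0 + np.exp(-m)))

def residual_attention_block(x):
    # Residual (skip) connection preserves gradient flow:
    # output = x + F(x), mixing low-level input with attended features.
    return x + spatial_attention(spectral_attention(x))

x = np.random.rand(8, 8, 4)              # toy 8x8 patch with 4 spectral bands
y = residual_attention_block(x)
print(y.shape)                           # (8, 8, 4)
```

The skip connection is what lets high- and low-level features coexist in the output; a trained network would learn the attention weights rather than derive them from fixed pooling as above.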
Related papers
- Multi-Head Attention Residual Unfolded Network for Model-Based Pansharpening [2.874893537471256]
Unfolding fusion methods integrate the powerful representation capabilities of deep learning with the robustness of model-based approaches.
In this paper, we propose a model-based deep unfolded method for satellite image fusion.
Experimental results on PRISMA, Quickbird, and WorldView2 datasets demonstrate the superior performance of our method.
arXiv Detail & Related papers (2024-09-04T13:05:00Z)
- HI-GAN: Hierarchical Inpainting GAN with Auxiliary Inputs for Combined RGB and Depth Inpainting [3.736916304884176]
Inpainting involves filling in missing pixels or areas in an image.
Existing methods rely on digital replacement techniques which necessitate multiple cameras and incur high costs.
We propose Hierarchical Inpainting GAN (HI-GAN), a novel approach comprising three GANs in a hierarchical fashion for RGBD inpainting.
arXiv Detail & Related papers (2024-02-15T21:43:56Z)
- Layered Rendering Diffusion Model for Zero-Shot Guided Image Synthesis [60.260724486834164]
This paper introduces innovative solutions to enhance spatial controllability in diffusion models reliant on text queries.
We present two key innovations: Vision Guidance and the Layered Rendering Diffusion framework.
We apply our method to three practical applications: bounding box-to-image, semantic mask-to-image and image editing.
arXiv Detail & Related papers (2023-11-30T10:36:19Z)
- SSIF: Learning Continuous Image Representation for Spatial-Spectral Super-Resolution [73.46167948298041]
We propose a neural implicit model that represents an image as a function of both continuous pixel coordinates in the spatial domain and continuous wavelengths in the spectral domain.
We show that SSIF generalizes well to both unseen spatial resolutions and spectral resolutions.
It can generate high-resolution images that improve the performance of downstream tasks by 1.7%-7%.
arXiv Detail & Related papers (2023-09-30T15:23:30Z)
- TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
TensoRF is a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z)
- Astronomical Image Colorization and upscaling with Generative Adversarial Networks [0.0]
This research aims to provide an automated approach for the problem by focusing on a very specific domain of images, namely astronomical images.
We explore the usage of various models in two different color spaces, RGB and L*a*b.
The model produces visually appealing images that hallucinate high-resolution, colorized data not present in the original image.
arXiv Detail & Related papers (2021-12-27T19:01:20Z)
- Spatial-Separated Curve Rendering Network for Efficient and High-Resolution Image Harmonization [59.19214040221055]
We propose a novel spatial-separated curve rendering network (S$^2$CRNet) for efficient and high-resolution image harmonization.
The proposed method reduces more than 90% parameters compared with previous methods.
Our method can work smoothly on higher-resolution images in real time, more than 10$\times$ faster than existing methods.
arXiv Detail & Related papers (2021-09-13T07:20:16Z)
- Aggregated Contextual Transformations for High-Resolution Image Inpainting [57.241749273816374]
We propose an enhanced GAN-based model, named Aggregated COntextual-Transformation GAN (AOT-GAN) for high-resolution image inpainting.
To enhance context reasoning, we construct the generator of AOT-GAN by stacking multiple layers of a proposed AOT block.
For improving texture synthesis, we enhance the discriminator of AOT-GAN by training it with a tailored mask-prediction task.
arXiv Detail & Related papers (2021-04-03T15:50:17Z)
- Low Light Image Enhancement via Global and Local Context Modeling [164.85287246243956]
We introduce a context-aware deep network for low-light image enhancement.
First, it features a global context module that models spatial correlations to find complementary cues over full spatial domain.
Second, it introduces a dense residual block that captures local context with a relatively large receptive field.
arXiv Detail & Related papers (2021-01-04T09:40:54Z)
- Efficient texture-aware multi-GAN for image inpainting [5.33024001730262]
Recent GAN-based (generative adversarial network) inpainting methods show remarkable improvements.
We propose a multi-GAN architecture improving both the performance and rendering efficiency.
arXiv Detail & Related papers (2020-09-30T14:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.