The color out of space: learning self-supervised representations for
Earth Observation imagery
- URL: http://arxiv.org/abs/2006.12119v1
- Date: Mon, 22 Jun 2020 10:21:36 GMT
- Authors: Stefano Vincenzi, Angelo Porrello, Pietro Buzzega, Marco Cipriano,
Pietro Fronte, Roberto Cuccu, Carla Ippoliti, Annamaria Conte, Simone
Calderara
- Abstract summary: We propose to learn meaningful representations from satellite imagery, leveraging its high-dimensional spectral bands to reconstruct visible colors.
We conduct experiments on land cover classification (BigEarthNet) and West Nile Virus detection, showing that colorization is a solid pretext task for training a feature extractor.
- Score: 10.019106184219515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent growth in the number of satellite images fosters the development
of effective deep-learning techniques for Remote Sensing (RS). However, their
full potential is untapped due to the lack of large annotated datasets. Such a
problem is usually countered by fine-tuning a feature extractor that is
previously trained on the ImageNet dataset. Unfortunately, the domain of
natural images differs from the RS one, which hinders the final performance. In
this work, we propose to learn meaningful representations from satellite
imagery, leveraging its high-dimensional spectral bands to reconstruct the
visible colors. We conduct experiments on land cover classification
(BigEarthNet) and West Nile Virus detection, showing that colorization is a
solid pretext task for training a feature extractor. Furthermore, we
qualitatively observe that guesses based on natural images and colorization
rely on different parts of the input. This paves the way to an ensemble model
that eventually outperforms both the above-mentioned techniques.
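The colorization pretext task described in the abstract amounts to regressing the visible RGB channels from the remaining spectral bands and minimizing a reconstruction loss. The sketch below is purely illustrative, not the paper's method: a linear map stands in for the CNN encoder, and the band count, learning rate, and synthetic data are made-up toy values.

```python
import numpy as np

# Toy sketch of colorization as a self-supervised pretext task:
# predict the visible RGB channels from the other spectral bands.
# A linear "encoder" stands in for a CNN; all dimensions are invented.

rng = np.random.default_rng(0)

n_pixels, n_bands = 1024, 10                  # non-visible spectral bands per pixel
true_mix = rng.normal(size=(n_bands, 3))      # hidden bands -> RGB relation

X = rng.normal(size=(n_pixels, n_bands))                   # input: spectral bands
Y = X @ true_mix + 0.01 * rng.normal(size=(n_pixels, 3))   # target: noisy RGB

W = np.zeros((n_bands, 3))                    # colorization head (learned weights)

def mse(W):
    """Mean squared reconstruction error of the predicted colors."""
    return float(np.mean((X @ W - Y) ** 2))

loss_before = mse(W)
for _ in range(200):                          # plain gradient descent on the MSE
    grad = 2.0 / n_pixels * X.T @ (X @ W - Y)
    W -= 0.05 * grad
loss_after = mse(W)

print(f"reconstruction loss: {loss_before:.3f} -> {loss_after:.5f}")
```

In the paper's setting the learned encoder, rather than this toy head, is what gets reused: after pretext training its weights initialize the downstream classifier for land cover or West Nile Virus detection.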
Related papers
- Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities [88.398085358514]
Contrastive Deepfake Embeddings (CoDE) is a novel embedding space specifically designed for deepfake detection.
CoDE is trained via contrastive learning by additionally enforcing global-local similarities.
arXiv Detail & Related papers (2024-07-29T18:00:10Z)
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z)
- Deep Metric Color Embeddings for Splicing Localization in Severely Degraded Images [10.091921099426294]
We explore an alternative approach to splicing detection, which is potentially better suited for images in-the-wild.
We learn a deep metric space that is on one hand sensitive to illumination color and camera white-point estimation, but on the other hand insensitive to variations in object color.
In our evaluation, we show that the proposed embedding space outperforms the state of the art on images that have been subject to strong compression and downsampling.
arXiv Detail & Related papers (2022-06-21T21:28:40Z)
- Detecting Recolored Image by Spatial Correlation [60.08643417333974]
Image recoloring is an emerging editing technique that can manipulate the color values of an image to give it a new style.
In this paper, we explore a solution from the perspective of the spatial correlation, which exhibits the generic detection capability for both conventional and deep learning-based recoloring.
Our method achieves state-of-the-art detection accuracy on multiple benchmark datasets and generalizes well to unknown types of recoloring methods.
arXiv Detail & Related papers (2022-04-23T01:54:06Z)
- Influence of Color Spaces for Deep Learning Image Colorization [2.3705923859070217]
Existing colorization methods rely on different color spaces: RGB, YUV, Lab, etc.
In this chapter, we aim to study their influence on the results obtained by training a deep neural network.
We compare the results obtained with the same deep neural network architecture with RGB, YUV and Lab color spaces.
arXiv Detail & Related papers (2022-04-06T14:14:07Z)
- Astronomical Image Colorization and upscaling with Generative Adversarial Networks [0.0]
This research aims to provide an automated approach for the problem by focusing on a very specific domain of images, namely astronomical images.
We explore the usage of various models in two different color spaces, RGB and L*a*b.
The model produces visually appealing images, hallucinating high-resolution, colorized data that does not exist in the original image.
arXiv Detail & Related papers (2021-12-27T19:01:20Z)
- Generating Compositional Color Representations from Text [3.141061579698638]
Motivated by the fact that a significant fraction of user queries on an image search engine follow an (attribute, object) structure, we propose a generative adversarial network that generates color profiles for such bigrams.
We design our pipeline to learn composition - the ability to combine seen attributes and objects to unseen pairs.
arXiv Detail & Related papers (2021-09-22T01:37:13Z)
- Non-Homogeneous Haze Removal via Artificial Scene Prior and Bidimensional Graph Reasoning [52.07698484363237]
We propose a Non-Homogeneous Haze Removal Network (NHRN) via artificial scene prior and bidimensional graph reasoning.
Our method achieves superior performance over many state-of-the-art algorithms for both the single image dehazing and hazy image understanding tasks.
arXiv Detail & Related papers (2021-04-05T13:04:44Z)
- Is It a Plausible Colour? UCapsNet for Image Colourisation [38.88087332284959]
We introduce a novel architecture for colourisation of grayscale images.
The architecture is based on Capsules trained following the adversarial learning paradigm.
We show that our approach is able to generate more vibrant and plausible colours than existing solutions.
arXiv Detail & Related papers (2020-12-04T09:07:13Z)
- Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z)
- Learning to Structure an Image with Few Colors [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime.
arXiv Detail & Related papers (2020-03-17T17:56:15Z)
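The extreme 1-bit quantization the last entry describes can be illustrated, purely for intuition, with plain k-means: reduce every pixel to one of two palette colors. This is not ColorCNN, whose palette is learned end-to-end from the classification loss; the data, initialization, and iteration count here are toy assumptions.

```python
import numpy as np

# Intuition-only sketch of 1-bit color quantization (two colors) via
# k-means. ColorCNN learns its palette from a classification loss;
# here we just cluster pixel colors on a synthetic dark/bright image.

rng = np.random.default_rng(1)

# toy "image": 500 dark pixels and 500 bright pixels, as RGB rows
img = np.vstack([
    rng.normal(0.2, 0.05, size=(500, 3)),
    rng.normal(0.8, 0.05, size=(500, 3)),
]).clip(0.0, 1.0)

# deterministic init: darkest and brightest corners of the color cloud
centers = np.array([img.min(axis=0), img.max(axis=0)])

for _ in range(10):                                   # Lloyd iterations, k = 2
    labels = np.argmin(((img[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([img[labels == k].mean(axis=0) for k in range(2)])

quantized = centers[labels]                           # every pixel -> one of 2 colors
print("palette:", np.round(centers, 2))
```

The two recovered palette colors land near the dark and bright cluster means, showing how little color information a 1-bit palette retains relative to the structure it preserves.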
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.