Single Image Cloud Detection via Multi-Image Fusion
- URL: http://arxiv.org/abs/2007.15144v1
- Date: Wed, 29 Jul 2020 22:52:28 GMT
- Title: Single Image Cloud Detection via Multi-Image Fusion
- Authors: Scott Workman, M. Usman Rafique, Hunter Blanton, Connor Greenwell,
Nathan Jacobs
- Abstract summary: A primary challenge in developing algorithms is the cost of collecting annotated training data.
We demonstrate how recent advances in multi-image fusion can be leveraged to bootstrap single image cloud detection.
We collect a large dataset of Sentinel-2 images along with a per-pixel semantic labelling for land cover.
- Score: 23.641624507709274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artifacts in imagery captured by remote sensing, such as clouds, snow, and
shadows, present challenges for various tasks, including semantic segmentation
and object detection. A primary challenge in developing algorithms for
identifying such artifacts is the cost of collecting annotated training data.
In this work, we explore how recent advances in multi-image fusion can be
leveraged to bootstrap single image cloud detection. We demonstrate that a
network optimized to estimate image quality also implicitly learns to detect
clouds. To support the training and evaluation of our approach, we collect a
large dataset of Sentinel-2 images along with a per-pixel semantic labelling
for land cover. Through various experiments, we demonstrate that our method
reduces the need for annotated training data and improves cloud detection
performance.
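The abstract's central observation — that a network trained to estimate per-pixel image quality implicitly learns to flag clouds — can be illustrated with a simple post-hoc thresholding step. The function below is a hypothetical sketch, not the authors' implementation: the `quality_scores` input (assumed to come from some multi-image fusion network, not shown) and the threshold value are both illustrative assumptions.

```python
import numpy as np

def cloud_mask_from_quality(quality_scores: np.ndarray,
                            threshold: float = 0.5) -> np.ndarray:
    """Derive a binary cloud mask from per-pixel quality scores.

    Assumes a fusion network (not shown) produced `quality_scores`
    in [0, 1], where low quality indicates an occluded (cloudy) pixel.
    """
    if quality_scores.min() < 0.0 or quality_scores.max() > 1.0:
        raise ValueError("quality scores must lie in [0, 1]")
    # Pixels the quality estimator scores poorly are flagged as cloud.
    return quality_scores < threshold

# Toy example: a 2x2 tile where only the top-left pixel is "low quality".
scores = np.array([[0.1, 0.9],
                   [0.8, 0.7]])
mask = cloud_mask_from_quality(scores)  # → [[True, False], [False, False]]
```

The point of the sketch is that no cloud annotations appear anywhere: the mask falls out of the quality-estimation signal alone, which is what lets the approach reduce labeling cost.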
Related papers
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z)
- HVDistill: Transferring Knowledge from Images to Point Clouds via Unsupervised Hybrid-View Distillation [106.09886920774002]
We present a hybrid-view-based knowledge distillation framework, termed HVDistill, to guide the feature learning of a point cloud neural network.
Our method achieves consistent improvements over the baseline trained from scratch and significantly outperforms the existing schemes.
arXiv Detail & Related papers (2024-03-18T14:18:08Z)
- BenchCloudVision: A Benchmark Analysis of Deep Learning Approaches for Cloud Detection and Segmentation in Remote Sensing Imagery [0.0]
This paper examines seven cutting-edge semantic segmentation and detection algorithms applied to cloud identification.
To increase model adaptability, critical elements, including the type of imagery and the number of spectral bands used during training, are analyzed.
The research aims to produce machine learning algorithms that can perform cloud segmentation using only a few spectral bands.
arXiv Detail & Related papers (2024-02-21T16:32:43Z)
- Learning to detect cloud and snow in remote sensing images from noisy labels [26.61590605351686]
The complexity of scenes and the diversity of cloud types in remote sensing images result in many inaccurate labels.
This paper is the first to consider the impact of label noise on the detection of clouds and snow in remote sensing images.
arXiv Detail & Related papers (2024-01-17T03:02:31Z)
- Free-ATM: Exploring Unsupervised Learning on Diffusion-Generated Images with Free Attention Masks [64.67735676127208]
Text-to-image diffusion models have shown great potential for benefiting image recognition.
Although promising, unsupervised learning on diffusion-generated images remains underexplored.
We introduce customized solutions that fully exploit the aforementioned free attention masks.
arXiv Detail & Related papers (2023-08-13T10:07:46Z)
- Ponder: Point Cloud Pre-training via Neural Rendering [93.34522605321514]
We propose a novel approach to self-supervised learning of point cloud representations via differentiable neural rendering.
The learned point-cloud representation can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but also low-level tasks like 3D reconstruction and image rendering.
arXiv Detail & Related papers (2022-12-31T08:58:39Z)
- Cross-modal Learning for Image-Guided Point Cloud Shape Completion [23.779985842891705]
We show how it is possible to combine the information from the two modalities in a localized latent space.
We also investigate a novel weakly-supervised setting in which the auxiliary image provides a supervisory signal.
Experiments show significant improvements over state-of-the-art supervised methods for both unimodal and multimodal completion.
arXiv Detail & Related papers (2022-09-20T08:37:05Z)
- AugNet: End-to-End Unsupervised Visual Representation Learning with Image Augmentation [3.6790362352712873]
We propose AugNet, a new deep learning training paradigm that learns image features from a collection of unlabeled pictures.
Our experiments demonstrate that the method is able to represent images in a low-dimensional space.
Unlike many deep-learning-based image retrieval algorithms, our approach does not require access to external annotated datasets.
arXiv Detail & Related papers (2021-06-11T09:02:30Z)
- Data Augmentation for Object Detection via Differentiable Neural Rendering [71.00447761415388]
It is challenging to train a robust object detector when annotated data is scarce.
Existing approaches to this problem include semi-supervised learning, which interpolates labeled data from unlabeled data.
We introduce an offline data augmentation method for object detection that semantically interpolates the training data with novel views.
arXiv Detail & Related papers (2021-03-04T06:31:06Z)
- Improving Object Detection with Selective Self-supervised Self-training [62.792445237541145]
We study how to leverage Web images to augment human-curated object detection datasets.
We retrieve Web images via image-to-image search, which incurs less domain shift from the curated data than other search methods.
We propose a novel learning method motivated by two parallel lines of work that explore unlabeled data for image classification.
arXiv Detail & Related papers (2020-07-17T18:05:01Z)
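The BenchCloudVision entry above asks whether cloud segmentation is feasible with only a few spectral bands. A crude classical baseline in that spirit is a brightness threshold over the visible bands, exploiting the fact that clouds are bright and spectrally flat. The band layout and threshold below are illustrative assumptions, not the benchmark's actual method or a substitute for a learned model.

```python
import numpy as np

def simple_cloud_mask(red: np.ndarray,
                      green: np.ndarray,
                      blue: np.ndarray,
                      brightness_thresh: float = 0.6) -> np.ndarray:
    """Naive cloud mask from three visible reflectance bands in [0, 1].

    Flags pixels that are bright in all three bands; a rough heuristic
    only, illustrating what "a few spectral bands" can already provide.
    """
    stacked = np.stack([red, green, blue])
    # A pixel is cloud-like only if every visible band exceeds the threshold.
    return np.all(stacked > brightness_thresh, axis=0)

# Toy 1x2 scene: first pixel bright in all bands (cloud-like), second dark.
r = np.array([[0.8, 0.2]])
g = np.array([[0.9, 0.3]])
b = np.array([[0.7, 0.1]])
mask = simple_cloud_mask(r, g, b)  # → [[True, False]]
```

Heuristics like this confuse clouds with other bright surfaces (snow, sand), which is precisely the gap the learned approaches surveyed above aim to close.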
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated list (including all information) and is not responsible for any consequences of its use.