Big Plastic Masses Detection using Sentinel 2 Images
- URL: http://arxiv.org/abs/2103.09560v1
- Date: Wed, 17 Mar 2021 10:45:33 GMT
- Title: Big Plastic Masses Detection using Sentinel 2 Images
- Authors: Fernando Martin-Rodriguez
- Abstract summary: This communication describes preliminary research on the detection of big masses of plastic (marine litter) in oceans and seas using EO (Earth Observation) satellite systems.
- Score: 91.3755431537592
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This communication describes preliminary research on the detection of big masses of plastic (marine litter) in oceans and seas using EO (Earth Observation) satellite systems. Free images from the Sentinel 2 (Copernicus Project) platform are used. To develop a plastic recognizer, we start with an image containing a large accumulation of "non-floating" plastic: the Almería greenhouses. We first tested remote sensing differential indices, but obtained much better results using all available wavelengths (thirteen frequency bands) and applying neural networks to that feature vector.
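Concretely, the approach reduces to per-pixel classification of Sentinel-2 spectra. The following is a minimal sketch of both variants described in the abstract; the placeholder data, band ordering, index choice and threshold, and the small MLP architecture are illustrative assumptions rather than the paper's exact configuration.

# Minimal sketch (not the paper's exact pipeline): per-pixel plastic detection
# from Sentinel-2 reflectances, comparing a simple differential index with a
# small neural network fed the full 13-band spectrum. `pixels`, `labels`, the
# band ordering (B1..B12 incl. B8A), the NDVI threshold and the MLP size are
# all illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
pixels = rng.random((5000, 13))            # placeholder for real Sentinel-2 spectra
labels = (pixels[:, 7] > 0.6).astype(int)  # placeholder plastic / non-plastic labels

# Variant 1: a differential index, e.g. NDVI = (B8 - B4) / (B8 + B4),
# thresholded to flag candidate pixels (indices assume B1..B12 band order).
b4, b8 = pixels[:, 3], pixels[:, 7]
ndvi = (b8 - b4) / (b8 + b4 + 1e-6)
index_prediction = (ndvi > 0.2).astype(int)  # threshold chosen only for illustration

# Variant 2: feed the full 13-band feature vector to a small neural network,
# which the abstract reports works much better than index thresholding.
X_train, X_test, y_train, y_test = train_test_split(
    pixels, labels, test_size=0.3, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print(classification_report(y_test, net.predict(X_test)))

In practice the spectra would be read from a Sentinel-2 L1C/L2A product (e.g. via rasterio) rather than generated randomly.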
Related papers
- Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on statistical observations that the heavily degraded regions of detector-friendly (DFUI) and underwater images have evident feature distribution gaps.
Our method, with higher speed and fewer parameters, still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z) - Energy-Based Models for Cross-Modal Localization using Convolutional Transformers [52.27061799824835]
We present a novel framework for localizing a ground vehicle mounted with a range sensor against satellite imagery in the absence of GPS.
We propose a method using convolutional transformers that performs accurate metric-level localization in a cross-modal manner.
We train our model end-to-end and demonstrate our approach achieving higher accuracy than the state-of-the-art on KITTI, Pandaset, and a custom dataset.
arXiv Detail & Related papers (2023-06-06T21:27:08Z) - EgoLocate: Real-time Motion Capture, Localization, and Mapping with Sparse Body-mounted Sensors [74.1275051763006]
We develop a system that simultaneously performs human motion capture (mocap), localization, and mapping in real time from sparse body-mounted sensors.
Both tasks are largely improved by our technique, compared with the state of the art of the two fields.
arXiv Detail & Related papers (2023-05-02T16:56:53Z) - Deep Learning Models for River Classification at Sub-Meter Resolutions from Multispectral and Panchromatic Commercial Satellite Imagery [2.121978045345352]
This study focuses on rivers in the Arctic, using images from the Quickbird, WorldView, and GeoEye satellites.
We use the RGB and NIR bands of the 8-band multispectral sensors. The trained models all achieve precision and recall above 90% on validation data, aided by on-the-fly preprocessing of the training data specific to satellite imagery.
In a novel approach, we then use results from the multispectral model to generate training data for FCNs that require only panchromatic imagery, of which considerably more is available.
arXiv Detail & Related papers (2022-12-27T20:56:34Z) - Towards Transformer-based Homogenization of Satellite Imagery for Landsat-8 and Sentinel-2 [1.4699455652461728]
Landsat-8 (NASA) and Sentinel-2 (ESA) are two prominent multi-spectral imaging satellite projects that provide publicly available data.
This work provides a first glance at the possibility of using a transformer-based model to reduce the spectral and spatial differences between observations from both satellite projects.
arXiv Detail & Related papers (2022-10-14T09:13:34Z) - NeRF-Supervision: Learning Dense Object Descriptors from Neural Radiance Fields [54.27264716713327]
We show that a Neural Radiance Fields (NeRF) representation of a scene can be used to train dense object descriptors.
We use an optimized NeRF to extract dense correspondences between multiple views of an object, and then use these correspondences as training data for learning a view-invariant representation of the object.
Dense correspondence models supervised with our method significantly outperform off-the-shelf learned descriptors by 106%.
arXiv Detail & Related papers (2022-03-03T18:49:57Z) - Multi-Label Classification on Remote-Sensing Images [0.0]
This report aims to label satellite image chips of the Amazon rainforest with atmospheric conditions and various classes of land cover or land use, using different machine learning and deep learning models.
Our best F2 score achieved so far is 0.927 (a minimal sketch of the F2 computation appears after this list).
arXiv Detail & Related papers (2022-01-06T08:42:32Z) - Processing Images from Multiple IACTs in the TAIGA Experiment with Convolutional Neural Networks [62.997667081978825]
We use convolutional neural networks (CNNs) to analyze Monte Carlo-simulated images from the TAIGA experiment.
The analysis includes selection of the images corresponding to the showers caused by gamma rays and estimating the energy of the gamma rays.
arXiv Detail & Related papers (2021-12-31T10:49:11Z) - Automated System for Ship Detection from Medium Resolution Satellite Optical Imagery [3.190574537106449]
We present a ship detection pipeline for low-cost medium resolution satellite optical imagery obtained from ESA Sentinel-2 and Planet Labs Dove constellations.
This optical satellite imagery is readily available for any place on Earth and is underutilized in the maritime domain compared with existing solutions based on synthetic-aperture radar (SAR) imagery.
arXiv Detail & Related papers (2021-04-28T15:06:18Z) - Generating Synthetic Multispectral Satellite Imagery from Sentinel-2 [3.4797121357690153]
We propose a generative model to produce multi-resolution multi-spectral imagery based on Sentinel-2 data.
The resulting synthetic images are indistinguishable from real ones to human observers.
arXiv Detail & Related papers (2020-12-05T19:41:33Z)
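For reference, the F2 score quoted in the multi-label classification entry above is the F-beta measure with beta = 2, which weights recall twice as heavily as precision: F2 = 5PR / (4P + R). A minimal sketch, assuming scikit-learn and an invented multi-label indicator matrix:

import numpy as np
from sklearn.metrics import fbeta_score

# Invented multi-label ground truth and predictions, one row per image chip.
y_true = np.array([[1, 0, 1, 1],
                   [0, 1, 0, 1],
                   [1, 1, 0, 0]])
y_pred = np.array([[1, 0, 1, 0],
                   [0, 1, 1, 1],
                   [1, 1, 0, 0]])

# F_beta = (1 + beta^2) * P * R / (beta^2 * P + R); beta = 2 emphasises recall.
print(fbeta_score(y_true, y_pred, beta=2, average='samples'))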
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.