xView3-SAR: Detecting Dark Fishing Activity Using Synthetic Aperture
Radar Imagery
- URL: http://arxiv.org/abs/2206.00897v4
- Date: Sat, 5 Nov 2022 09:53:31 GMT
- Title: xView3-SAR: Detecting Dark Fishing Activity Using Synthetic Aperture
Radar Imagery
- Authors: Fernando Paolo, Tsu-ting Tim Lin, Ritwik Gupta, Bryce Goodman, Nirav
Patel, Daniel Kuster, David Kroodsma, Jared Dunnmon
- Abstract summary: Unsustainable fishing practices worldwide pose a major threat to marine resources and ecosystems.
It is now possible to automate detection of dark vessels day or night, under all-weather conditions.
xView3-SAR consists of nearly 1,000 analysis-ready SAR images from the Sentinel-1 mission.
- Score: 52.67592123500567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsustainable fishing practices worldwide pose a major threat to marine
resources and ecosystems. Identifying vessels that do not show up in
conventional monitoring systems -- known as "dark vessels" -- is key to
managing and securing the health of marine environments. With the rise of
satellite-based synthetic aperture radar (SAR) imaging and modern machine
learning (ML), it is now possible to automate detection of dark vessels day or
night, under all-weather conditions. SAR images, however, require a
domain-specific treatment and are not widely accessible to the ML community.
Maritime objects (vessels and offshore infrastructure) are relatively small and
sparse, challenging traditional computer vision approaches. We present the
largest labeled dataset for training ML models to detect and characterize
vessels and ocean structures in SAR imagery. xView3-SAR consists of nearly
1,000 analysis-ready SAR images from the Sentinel-1 mission that are, on
average, 29,400-by-24,400 pixels each. The images are annotated using a
combination of automated and manual analysis. Co-located bathymetry and wind
state rasters accompany every SAR image. We also provide an overview of the
xView3 Computer Vision Challenge, an international competition using xView3-SAR
for ship detection and characterization at large scale. We release the data
(https://iuu.xview.us/) and code
(https://github.com/DIUx-xView) to
support ongoing development and evaluation of ML approaches for this important
application.
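Scenes of roughly 29,400-by-24,400 pixels are far too large to feed to a detector whole, so in practice they are sliced into overlapping chips and detections are mapped back to scene coordinates. The sketch below illustrates that idea with NumPy; the chip and overlap sizes are hypothetical choices, not the xView3 reference pipeline.

```python
import numpy as np

def chip_scene(scene, chip=2560, overlap=256):
    """Slice a large 2-D raster into overlapping square chips.

    Returns a list of (row_off, col_off, chip_array) so that detections
    made in chip coordinates can be shifted back into scene coordinates.
    """
    h, w = scene.shape
    stride = chip - overlap
    chips = []
    for r in range(0, max(h - overlap, 1), stride):
        for c in range(0, max(w - overlap, 1), stride):
            # Clamp the window so edge chips stay inside the scene.
            r0 = min(r, max(h - chip, 0))
            c0 = min(c, max(w - chip, 0))
            chips.append((r0, c0, scene[r0:r0 + chip, c0:c0 + chip]))
    return chips

# Toy example: a small placeholder "scene" instead of a full Sentinel-1 image.
scene = np.zeros((5000, 4000), dtype=np.float32)
tiles = chip_scene(scene, chip=2560, overlap=256)
# A detection at (row, col) inside tiles[i][2] maps to
# (tiles[i][0] + row, tiles[i][1] + col) in the full scene.
```

The overlap exists so that small, sparse objects (vessels are often only tens of pixels across) are not split by a chip boundary; duplicate detections in overlapping regions would then be merged, e.g. with non-maximum suppression.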
Related papers
- GaussNav: Gaussian Splatting for Visual Navigation [92.13664084464514]
Instance ImageGoal Navigation (IIN) requires an agent to locate a specific object depicted in a goal image within an unexplored environment.
Our framework constructs a novel map representation based on 3D Gaussian Splatting (3DGS)
Our framework demonstrates a significant leap in performance, evidenced by an increase in Success weighted by Path Length (SPL) from 0.252 to 0.578 on the challenging Habitat-Matterport 3D (HM3D) dataset.
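Success weighted by Path Length weights each episode's success by how close the agent's path was to the shortest path, so wandering agents score below efficient ones. A minimal computation in the standard form (success indicator times shortest-path length over the longer of taken and shortest path, averaged over episodes), with made-up episode values, looks like:

```python
def spl(episodes):
    """Success weighted by Path Length.

    episodes: list of (success, shortest_path_len, agent_path_len) tuples,
    where success is 1 if the episode reached the goal, else 0.
    """
    total = 0.0
    for success, shortest, taken in episodes:
        total += success * shortest / max(taken, shortest)
    return total / len(episodes)

# Three illustrative episodes: an optimal success, an inefficient
# success that took twice the shortest path, and a failure.
episodes = [(1, 10.0, 10.0), (1, 10.0, 20.0), (0, 8.0, 5.0)]
print(spl(episodes))  # (1.0 + 0.5 + 0.0) / 3 = 0.5
```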
arXiv Detail & Related papers (2024-03-18T09:56:48Z)
- SeaDroneSim: Simulation of Aerial Images for Detection of Objects Above Water [4.625920569634467]
Unmanned Aerial Vehicles (UAVs) are known for their fast and versatile applicability.
We present a new benchmark suite, SeaDroneSim, that can be used to create photo-realistic aerial image datasets.
We obtain 71 mAP on real aerial images for detecting BlueROV as a feasibility study.
arXiv Detail & Related papers (2022-10-26T21:50:50Z)
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper we propose a pose estimation software exploiting neural network architectures.
We show how low power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z)
- EagerMOT: 3D Multi-Object Tracking via Sensor Fusion [68.8204255655161]
Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time.
Existing methods rely on depth sensors (e.g., LiDAR) to detect and track targets in 3D space, but only up to a limited sensing range due to the sparsity of the signal.
We propose EagerMOT, a simple tracking formulation that integrates all available object observations from both sensor modalities to obtain a well-informed interpretation of the scene dynamics.
arXiv Detail & Related papers (2021-04-29T22:30:29Z)
- Automated System for Ship Detection from Medium Resolution Satellite Optical Imagery [3.190574537106449]
We present a ship detection pipeline for low-cost medium resolution satellite optical imagery obtained from ESA Sentinel-2 and Planet Labs Dove constellations.
This optical satellite imagery is readily available for any place on Earth and underutilized in the maritime domain, compared to existing solutions based on synthetic-aperture radar (SAR) imagery.
arXiv Detail & Related papers (2021-04-28T15:06:18Z)
- Visualization of Deep Transfer Learning In SAR Imagery [0.0]
We consider transfer learning to leverage deep features from a network trained on an EO ships dataset.
By exploring the network activations in the form of class-activation maps, we gain insight on how a deep network interprets a new modality.
arXiv Detail & Related papers (2021-03-20T00:16:15Z)
- The QXS-SAROPT Dataset for Deep Learning in SAR-Optical Data Fusion [14.45289690639374]
We publish the QXS-SAROPT dataset to foster deep learning research in SAR-optical data fusion.
We show exemplary results for two representative applications, namely SAR-optical image matching and SAR ship detection boosted by cross-modal information from optical images.
arXiv Detail & Related papers (2021-03-15T10:22:46Z)
- Boosting ship detection in SAR images with complementary pretraining techniques [14.34438598597809]
We propose an optical ship detector (OSD) pretraining technique, which transfers the characteristics of ships in earth observations to SAR images from a large-scale aerial image dataset.
We also propose an optical-SAR matching (OSM) pretraining technique, which transfers plentiful texture features from optical images to SAR images by common representation learning.
The proposed method won sixth place in the ship detection in SAR images track of the 2020 Gaofen challenge.
arXiv Detail & Related papers (2021-03-15T10:03:04Z)
- X-ModalNet: A Semi-Supervised Deep Cross-Modal Network for Classification of Remote Sensing Data [69.37597254841052]
We propose a novel cross-modal deep-learning framework called X-ModalNet.
X-ModalNet generalizes well, owing to propagating labels on an updatable graph constructed by high-level features on the top of the network.
We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement in comparison with several state-of-the-art methods.
arXiv Detail & Related papers (2020-06-24T15:29:41Z)
- Transferable Active Grasping and Real Embodied Dataset [48.887567134129306]
We show how to search for feasible grasping viewpoints using hand-mounted RGB-D cameras.
A practical 3-stage transferable active grasping pipeline is developed, that is adaptive to unseen clutter scenes.
In our pipeline, we propose a novel mask-guided reward to overcome the sparse reward issue in grasping and ensure category-irrelevant behavior.
arXiv Detail & Related papers (2020-04-28T08:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.