Low-power Ship Detection in Satellite Images Using Neuromorphic Hardware
- URL: http://arxiv.org/abs/2406.11319v1
- Date: Mon, 17 Jun 2024 08:36:12 GMT
- Title: Low-power Ship Detection in Satellite Images Using Neuromorphic Hardware
- Authors: Gregor Lenz, Douglas McLelland
- Abstract summary: On-board data processing can identify ships and reduce the amount of data sent to the ground.
Most images captured on board contain only bodies of water or land, with the Airbus Ship Detection dataset showing only 22.1% of images containing ships.
We designed a low-power, two-stage system to optimize performance instead of relying on a single complex model.
- Score: 1.4330085996657045
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Transmitting Earth observation image data from satellites to ground stations incurs significant costs in terms of power and bandwidth. For maritime ship detection, on-board data processing can identify ships and reduce the amount of data sent to the ground. However, most images captured on board contain only bodies of water or land, with the Airbus Ship Detection dataset showing only 22.1% of images containing ships. We designed a low-power, two-stage system to optimize performance instead of relying on a single complex model. The first stage is a lightweight binary classifier that acts as a gating mechanism to detect the presence of ships. This stage runs on BrainChip's Akida 1.0, which leverages activation sparsity to minimize dynamic power consumption. The second stage employs a YOLOv5 object detection model to identify the location and size of ships. This approach achieves a mean Average Precision (mAP) of 76.9%, which increases to 79.3% when evaluated solely on images containing ships, by reducing false positives. Additionally, we calculated that evaluating the full validation set on an NVIDIA Jetson Nano device requires 111.4 kJ of energy. Our two-stage system reduces this energy consumption to 27.3 kJ, which is less than a fourth of that figure, demonstrating the efficiency of a heterogeneous computing system.
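To make the gating idea concrete, here is a minimal sketch of the two-stage pipeline in Python. The helper names (`ship_present`, `detect_ships`), the `gate_model` and `detector` objects, and the 0.5 threshold are illustrative assumptions rather than the authors' implementation; only the energy figures and the ship/no-ship statistics quoted in the comments come from the abstract.

```python
import numpy as np


def ship_present(image: np.ndarray, gate_model, threshold: float = 0.5) -> bool:
    """Stage 1: lightweight binary 'ship present?' gate.

    In the paper this classifier runs on BrainChip's Akida 1.0 and benefits
    from activation sparsity; here `gate_model` is assumed to be any object
    exposing a `predict` method that returns a score in [0, 1].
    """
    score = gate_model.predict(image[np.newaxis, ...])
    return float(np.squeeze(score)) >= threshold


def detect_ships(image: np.ndarray, gate_model, detector):
    """Stage 2: run the heavy detector (YOLOv5 in the paper) only when gated in."""
    if not ship_present(image, gate_model):
        # About 77.9% of Airbus Ship Detection images contain no ships
        # (the abstract reports 22.1% with ships), so most frames stop
        # here and never reach the expensive detector.
        return []
    return detector.predict(image)


# Energy figures from the abstract: 111.4 kJ for the Jetson Nano baseline
# over the full validation set vs. 27.3 kJ for the two-stage system.
# 27.3 / 111.4 ≈ 0.245, i.e. less than a fourth of the baseline energy.
```

The point of the sketch is only the control flow: the cheap gate handles the common no-ship case, so the expensive detector runs on a minority of frames.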
Related papers
- Deep Transformer Network for Monocular Pose Estimation of Ship-Based UAV [0.23408308015481663]
A Transformer Neural Network model is trained to detect 2D keypoints and estimate the 6D pose of each part.
The method has potential applications for ship-based autonomous UAV landing and navigation.
arXiv Detail & Related papers (2024-06-13T16:01:22Z)
- Ultra-low Power Deep Learning-based Monocular Relative Localization Onboard Nano-quadrotors [64.68349896377629]
This work presents a novel autonomous end-to-end system that addresses the monocular relative localization, through deep neural networks (DNNs), of two peer nano-drones.
To cope with the ultra-constrained nano-drone platform, we propose a vertically integrated framework, including dataset augmentation, quantization, and system optimizations.
Experimental results show that our DNN can precisely localize a 10 cm target nano-drone using only low-resolution monochrome images, at distances of up to 2 m.
arXiv Detail & Related papers (2023-03-03T14:14:08Z)
- Weakly-Supervised Semantic Segmentation of Ships Using Thermal Imagery [9.01037793978146]
Unmanned Aerial Vehicles (UAVs) equipped with infrared cameras and deep-learning-based algorithms represent an efficient alternative for identifying and segmenting objects of interest.
Standard approaches to training these algorithms require large-scale datasets of densely labeled infrared maritime images.
In this work we demonstrate that, in the context of segmenting ships in infrared imagery, weakly supervising an algorithm with sparsely labeled data can drastically reduce data labeling costs.
arXiv Detail & Related papers (2022-12-26T14:20:32Z)
- Fewer is More: Efficient Object Detection in Large Aerial Images [59.683235514193505]
This paper presents an Objectness Activation Network (OAN) to help detectors focus on fewer patches but achieve more efficient inference and more accurate results.
Using OAN, all five detectors acquire more than 30.0% speed-up on three large-scale aerial image datasets.
We extend our OAN to driving-scene object detection and 4K video object detection, boosting the detection speed by 112.1% and 75.0%, respectively.
arXiv Detail & Related papers (2022-12-26T12:49:47Z)
- Optimizing ship detection efficiency in SAR images [12.829941550630776]
The speed and compute cost of vessel detection are essential for a timely intervention to prevent illegal fishing.
We trained an object detection model based on a convolutional neural network (CNN) using a dataset of satellite images.
We show that by using a classification model, the average precision of the detection model can be approximated to 99.5% in 44% of the time, or to 92.7% in 25% of the time.
arXiv Detail & Related papers (2022-12-12T12:04:10Z)
- MTU-Net: Multi-level TransUNet for Space-based Infrared Tiny Ship Detection [42.92798053154314]
We develop a space-based infrared tiny ship detection dataset (namely, NUDT-SIRST-Sea) with 48 space-based infrared images and 17598 pixel-level tiny ship annotations.
Considering the extreme characteristics of those tiny ships in such challenging scenes, we propose a multi-level TransUNet (MTU-Net) in this paper.
Experimental results on the NUDT-SIRST-Sea dataset show that our MTU-Net outperforms traditional and existing deep-learning-based SIRST methods in terms of probability of detection, false alarm rate, and intersection over union.
arXiv Detail & Related papers (2022-09-28T00:48:14Z)
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper we propose pose estimation software exploiting neural network architectures.
We show how low-power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z)
- SALISA: Saliency-based Input Sampling for Efficient Video Object Detection [58.22508131162269]
We propose SALISA, a novel non-uniform SALiency-based Input SAmpling technique for video object detection.
We show that SALISA significantly improves the detection of small objects.
arXiv Detail & Related papers (2022-04-05T17:59:51Z)
- Anchor-free Small-scale Multispectral Pedestrian Detection [88.7497134369344]
We propose a method for effective and efficient multispectral fusion of the two modalities in an adapted single-stage anchor-free base architecture.
We aim at learning pedestrian representations based on object center and scale rather than direct bounding box predictions.
Results show our method's effectiveness in detecting small-scale pedestrians.
arXiv Detail & Related papers (2020-08-19T13:13:01Z)
- Real-Time target detection in maritime scenarios based on YOLOv3 model [65.35132992156942]
A novel ship dataset is proposed, consisting of more than 56k images of marine vessels collected by means of web scraping.
A YOLOv3 single-stage detector based on the Keras API is built on top of this dataset.
arXiv Detail & Related papers (2020-02-10T15:25:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.