Optimizing Data Processing in Space for Object Detection in Satellite Imagery
- URL: http://arxiv.org/abs/2107.03774v1
- Date: Thu, 8 Jul 2021 11:37:24 GMT
- Title: Optimizing Data Processing in Space for Object Detection in Satellite Imagery
- Authors: Martina Lofqvist, José Cano
- Abstract summary: We investigate the performance of CNN-based object detectors on constrained devices by applying different image compression techniques to satellite data.
We take a closer look at object detection networks, including the Single Shot MultiBox Detector (SSD) and Region-based Fully Convolutional Network (R-FCN) models.
The results show that by applying image compression techniques, we are able to improve the execution time and memory consumption, achieving a fully runnable dataset.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: There is a proliferation in the number of satellites launched each year,
resulting in downlinking of terabytes of data each day. The data received by
ground stations is often unprocessed, which makes downlinking an expensive process given the large data volumes and the fact that not all of the data is useful. This,
coupled with the increasing demand for real-time data processing, has led to a
growing need for on-orbit processing solutions. In this work, we investigate
the performance of CNN-based object detectors on constrained devices by
applying different image compression techniques to satellite data. We examine
the capabilities of the NVIDIA Jetson Nano and NVIDIA Jetson AGX Xavier: low-power, high-performance computers with integrated GPUs, small enough to fit on board a nanosatellite. We take a closer look at object detection
networks, including the Single Shot MultiBox Detector (SSD) and Region-based
Fully Convolutional Network (R-FCN) models that are pre-trained on DOTA - a
Large Scale Dataset for Object Detection in Aerial Images. The performance is measured in terms of execution time, memory consumption, and accuracy, and is compared against a baseline: a server with two powerful GPUs. The
results show that by applying image compression techniques, we are able to
improve the execution time and memory consumption, achieving a fully runnable
dataset. A lossless compression technique achieves roughly a 10% reduction in
execution time and about a 3% reduction in memory consumption, with no impact
on the accuracy. A lossy compression technique improves the execution time by up to 144% and reduces memory consumption by as much as 97%, but it has a significant impact on accuracy, which varies with the compression ratio. Thus, the application and ratio of these compression
techniques may differ depending on the required level of accuracy for a
particular task.
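
To make the workflow above concrete, here is a minimal sketch of the kind of pipeline the abstract describes: a satellite image tile is round-tripped through lossless (PNG) and lossy (JPEG) compression, and a pre-trained detector is timed on each variant. The torchvision SSD model, the tile.png path, the JPEG quality setting, and the 0.5 score threshold are illustrative stand-ins, not the paper's DOTA-pretrained SSD/R-FCN models or its Jetson deployment.

```python
import io
import time

import torch
from PIL import Image
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights
from torchvision.transforms.functional import to_tensor


def roundtrip(img, fmt, **save_kwargs):
    """Encode the image in memory with the given format, then decode it back."""
    buf = io.BytesIO()
    img.save(buf, format=fmt, **save_kwargs)
    print(f"{fmt} {save_kwargs or ''}: {buf.tell() / 1024:.0f} KiB encoded")
    buf.seek(0)
    return Image.open(buf).convert("RGB")


# Stand-in detector; the paper evaluates SSD and R-FCN models pre-trained on DOTA.
model = ssd300_vgg16(weights=SSD300_VGG16_Weights.DEFAULT).eval()

original = Image.open("tile.png").convert("RGB")  # hypothetical satellite tile
variants = {
    "original": original,
    "lossless_png": roundtrip(original, "PNG"),                 # no information loss
    "lossy_jpeg_q50": roundtrip(original, "JPEG", quality=50),  # smaller, degraded
}

with torch.no_grad():
    for name, img in variants.items():
        start = time.perf_counter()
        detections = model([to_tensor(img)])[0]
        elapsed = time.perf_counter() - start
        kept = int((detections["scores"] > 0.5).sum())
        print(f"{name}: {elapsed:.2f} s inference, {kept} boxes above 0.5 confidence")
```

On a CUDA-capable device such as the Jetson boards, peak memory use could additionally be tracked with torch.cuda.max_memory_allocated(), mirroring the memory-consumption metric reported in the abstract.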
Related papers
- Compressing high-resolution data through latent representation encoding for downscaling large-scale AI weather forecast model [10.634513279883913]
We propose a variational autoencoder framework tailored for compressing high-resolution datasets.
Our framework successfully reduced the storage size of 3 years of HRCLDAS data from 8.61 TB to just 204 GB, while preserving essential information.
arXiv Detail & Related papers (2024-10-10T05:38:03Z)
- Neural-based Compression Scheme for Solar Image Data [8.374518151411612]
We propose a neural network-based lossy compression method to be used in NASA's data-intensive imagery missions.
In this work, we propose an adversarially trained neural network, equipped with local and non-local attention modules to capture both the local and global structure of the image.
As a proof of concept for use of this algorithm in SDO data analysis, we have performed coronal hole (CH) detection using our compressed images.
arXiv Detail & Related papers (2023-11-06T04:13:58Z)
- Dataset Quantization [72.61936019738076]
We present dataset quantization (DQ), a new framework to compress large-scale datasets into small subsets.
DQ is the first method that can successfully distill large-scale datasets such as ImageNet-1k with a state-of-the-art compression ratio.
arXiv Detail & Related papers (2023-08-21T07:24:29Z)
- Object Detection performance variation on compressed satellite image datasets with iquaflow [0.0]
iquaflow is designed to study image quality and model performance variation given an alteration of the image dataset.
We present a showcase study of object detection model adoption on a public image dataset.
arXiv Detail & Related papers (2023-01-14T11:20:27Z)
- Analysis of the Effect of Low-Overhead Lossy Image Compression on the Performance of Visual Crowd Counting for Smart City Applications [78.55896581882595]
Lossy image compression techniques can reduce the quality of the images, leading to accuracy degradation.
In this paper, we analyze the effect of applying low-overhead lossy image compression methods on the accuracy of visual crowd counting.
arXiv Detail & Related papers (2022-07-20T19:20:03Z)
- Feature Compression for Rate Constrained Object Detection on the Edge [20.18227104333772]
An emerging approach to solve this problem is to offload the computation of neural networks to computing resources at an edge server.
In this work, we consider a "split computation" system to offload a part of the computation of the YOLO object detection model.
We train the feature compression and decompression module together with the YOLO model to optimize the object detection accuracy under a rate constraint.
arXiv Detail & Related papers (2022-04-15T03:39:30Z)
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper we propose pose estimation software exploiting neural network architectures.
We show how low power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z)
- SALISA: Saliency-based Input Sampling for Efficient Video Object Detection [58.22508131162269]
We propose SALISA, a novel non-uniform SALiency-based Input SAmpling technique for video object detection.
We show that SALISA significantly improves the detection of small objects.
arXiv Detail & Related papers (2022-04-05T17:59:51Z)
- Memory Replay with Data Compression for Continual Learning [80.95444077825852]
We propose memory replay with data compression to reduce the storage cost of old training samples.
We extensively validate this across several benchmarks of class-incremental learning and in a realistic scenario of object detection for autonomous driving.
arXiv Detail & Related papers (2022-02-14T10:26:23Z)
- You Better Look Twice: a new perspective for designing accurate detectors with reduced computations [56.34005280792013]
BLT-net is a new low-computation two-stage object detection architecture.
It reduces computations by separating objects from background using a very lite first-stage.
Resulting image proposals are then processed in the second-stage by a highly accurate model.
arXiv Detail & Related papers (2021-07-21T12:39:51Z)
- Accelerating Deep Learning Applications in Space [0.0]
We investigate the performance of CNN-based object detectors on constrained devices.
We take a closer look at the Single Shot MultiBox Detector (SSD) and Region-based Fully Convolutional Network (R-FCN)
The performance is measured in terms of inference time, memory consumption, and accuracy.
arXiv Detail & Related papers (2020-07-21T21:06:30Z)