Object Detection performance variation on compressed satellite image datasets with iquaflow
- URL: http://arxiv.org/abs/2301.05892v2
- Date: Wed, 18 Jan 2023 14:21:07 GMT
- Title: Object Detection performance variation on compressed satellite image datasets with iquaflow
- Authors: Pau Gallés, Katalin Takats and Javier Marin
- Abstract summary: iquaflow is designed to study image quality and model performance variation given an alteration of the image dataset.
We present a showcase study on the adoption of object detection models on a public image dataset.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A lot of work has been done to reach the best possible performance of
predictive models on images. There are far fewer studies on the resilience of
these models when they are trained on image datasets whose original quality has
been degraded. Yet this is a common problem that is often encountered in the
industry. A good example is earth observation satellites, which capture many
images: the energy and ground-contact time of an orbiting satellite are limited
and must be used carefully. One approach to mitigate this is to compress the
images on board before downloading them. The compression can be regulated
depending on the intended usage of the image and the requirements of that
application. We present a new software tool, named iquaflow, designed to study
image quality and model performance variation given an alteration of the image
dataset. Furthermore, we present a showcase study on the adoption of oriented
object detection models on the public DOTA dataset (Xia_2018_CVPR) at different
compression levels. The optimal compression point is found and the usefulness
of iquaflow becomes evident.
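As a rough illustration of the kind of study described above, the sketch below (plain Python, not iquaflow's actual API) re-encodes a copy of an image dataset at several JPEG quality levels and evaluates a detector on each variant, so that the quality level with the best trade-off can be read off. The evaluate_detector placeholder, the chosen quality levels and the assumption that the source images are PNG files are illustrative only.

    from pathlib import Path
    from PIL import Image

    def evaluate_detector(dataset_dir: Path) -> float:
        # Hypothetical placeholder: train/evaluate an (oriented) object
        # detector on this dataset variant and return its mAP.
        raise NotImplementedError("plug in your detection pipeline here")

    def compress_dataset(src_dir: Path, dst_dir: Path, quality: int) -> None:
        # Re-encode every image in src_dir as JPEG at the given quality level.
        dst_dir.mkdir(parents=True, exist_ok=True)
        for img_path in src_dir.glob("*.png"):
            img = Image.open(img_path).convert("RGB")
            img.save(dst_dir / (img_path.stem + ".jpg"), "JPEG", quality=quality)

    def compression_sweep(src_dir: Path, work_dir: Path,
                          qualities=(95, 85, 75, 65, 50)) -> dict:
        # Map each compression level to detector performance; the optimal
        # compression point is the level with the best quality/size trade-off.
        results = {}
        for q in qualities:
            variant_dir = work_dir / f"q{q}"
            compress_dataset(src_dir, variant_dir, q)
            results[q] = evaluate_detector(variant_dir)
        return results

In the paper, iquaflow takes the role of this loop, managing the modified dataset variants and collecting the corresponding model performance measurements.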
Related papers
- Community Forensics: Using Thousands of Generators to Train Fake Image Detectors [15.166026536032142]
One of the key challenges of detecting AI-generated images is spotting images that have been created by previously unseen generative models.
We propose a new dataset that is significantly larger and more diverse than prior work.
The resulting dataset contains 2.7M images that have been sampled from 4803 different models.
arXiv Detail & Related papers (2024-11-06T18:59:41Z)
- Deep Image Composition Meets Image Forgery [0.0]
Image forgery has been studied for many years.
Deep learning models require large amounts of labeled data for training.
We use state-of-the-art image composition deep learning models to generate spliced images close to the quality of real-life manipulations.
arXiv Detail & Related papers (2024-04-03T17:54:37Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- DiffusionSat: A Generative Foundation Model for Satellite Imagery [63.2807119794691]
We present DiffusionSat, to date the largest generative foundation model trained on a collection of publicly available large, high-resolution remote sensing datasets.
Our method produces realistic samples and can be used to solve multiple generative tasks including temporal generation, superresolution given multi-spectral inputs and in-painting.
arXiv Detail & Related papers (2023-12-06T16:53:17Z)
- Zero shot framework for satellite image restoration [25.163783640750573]
We propose a distortion disentanglement and knowledge distillation framework for satellite image restoration.
Our algorithm requires only two images: the distorted satellite image to be restored and a reference image with similar semantics.
arXiv Detail & Related papers (2023-06-05T14:34:58Z)
- Masked Transformer for image Anomaly Localization [14.455765147827345]
We propose a new model for image anomaly detection based on Vision Transformer architecture with patch masking.
We show that multi-resolution patches and their collective embeddings provide a large improvement in the model's performance.
The proposed model has been tested on popular anomaly detection datasets such as MVTec and head CT.
arXiv Detail & Related papers (2022-10-27T15:30:48Z)
- IQUAFLOW: A new framework to measure image quality [0.0]
iquaflow provides a set of tools to assess image quality.
The user can add custom metrics that can be easily integrated.
iquaflow makes it possible to measure quality by using the performance of AI models trained on the images as a proxy.
arXiv Detail & Related papers (2022-10-24T14:10:17Z)
- TINYCD: A (Not So) Deep Learning Model For Change Detection [68.8204255655161]
The aim of change detection (CD) is to detect changes that occurred in the same area by comparing two images of that place taken at different times.
Recent developments in the field of deep learning enabled researchers to achieve outstanding performance in this area.
We propose a novel model, called TinyCD, demonstrating to be both lightweight and effective.
arXiv Detail & Related papers (2022-07-26T19:28:48Z)
- Variable-Rate Deep Image Compression through Spatially-Adaptive Feature Transform [58.60004238261117]
We propose a versatile deep image compression network based on Spatial Feature Transform (SFT, arXiv:1804.02815).
Our model covers a wide range of compression rates using a single model, which is controlled by arbitrary pixel-wise quality maps.
The proposed framework allows us to perform task-aware image compressions for various tasks.
arXiv Detail & Related papers (2021-08-21T17:30:06Z)
- Salient Objects in Clutter [130.63976772770368]
This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets.
This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets.
We propose a new high-quality dataset and update the previous saliency benchmark.
arXiv Detail & Related papers (2021-05-07T03:49:26Z)
- Contemplating real-world object classification [53.10151901863263]
We reanalyze the ObjectNet dataset recently proposed by Barbu et al. containing objects in daily life situations.
We find that applying deep models to the isolated objects, rather than the entire scene as is done in the original paper, results in around 20-30% performance improvement.
arXiv Detail & Related papers (2021-03-08T23:29:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.