A Classification-Aware Super-Resolution Framework for Ship Targets in SAR Imagery
- URL: http://arxiv.org/abs/2508.06407v1
- Date: Fri, 08 Aug 2025 15:50:40 GMT
- Title: A Classification-Aware Super-Resolution Framework for Ship Targets in SAR Imagery
- Authors: Ch Muhammad Awais, Marco Reggiannini, Davide Moroni, Oktay Karakus
- Abstract summary: High-resolution imagery plays a critical role in improving the performance of visual recognition tasks such as classification, detection, and segmentation. To address this, super-resolution (SR) techniques have been widely adopted to attempt to reconstruct high-resolution images from low-resolution inputs. We propose a novel methodology that increases the resolution of synthetic aperture radar imagery by optimising loss functions that account for both image quality and classification performance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-resolution imagery plays a critical role in improving the performance of visual recognition tasks such as classification, detection, and segmentation. In many domains, including remote sensing and surveillance, low-resolution images can limit the accuracy of automated analysis. To address this, super-resolution (SR) techniques have been widely adopted to reconstruct high-resolution images from low-resolution inputs. However, traditional approaches focus solely on enhancing image quality according to pixel-level metrics, leaving the relationship between super-resolved image fidelity and downstream classification performance largely underexplored. This raises a key question: can integrating classification objectives directly into the super-resolution process further improve classification accuracy? In this paper, we address this question by investigating the relationship between super-resolution and classification through a specialised algorithmic strategy. We propose a novel methodology that increases the resolution of synthetic aperture radar (SAR) imagery by optimising loss functions that account for both image quality and classification performance. Our approach improves image quality, as measured by established image quality metrics, while also enhancing classification accuracy.
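The core idea in the abstract can be sketched as a weighted combination of a pixel-fidelity term and a classification term. This is an illustrative sketch only: the weighting factor `alpha` and all function names below are assumptions for exposition, not the authors' actual loss formulation.

```python
import math

def mse_loss(sr_image, hr_image):
    """Pixel-level reconstruction loss (image-quality term), on flattened pixels."""
    n = len(sr_image)
    return sum((s - h) ** 2 for s, h in zip(sr_image, hr_image)) / n

def cross_entropy_loss(logits, label):
    """Classification loss on logits predicted from the super-resolved image."""
    m = max(logits)  # subtract max for numerical stability
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[label]  # -log softmax(logits)[label]

def combined_loss(sr_image, hr_image, logits, label, alpha=0.7):
    """Weighted sum: alpha trades off image quality against classification performance."""
    return (alpha * mse_loss(sr_image, hr_image)
            + (1 - alpha) * cross_entropy_loss(logits, label))
```

With `alpha = 1` the objective reduces to a conventional pixel-metric SR loss; decreasing `alpha` shifts the optimisation toward features that help the downstream ship classifier.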
Related papers
- HRSeg: High-Resolution Visual Perception and Enhancement for Reasoning Segmentation [74.1872891313184]
HRSeg is an efficient model with high-resolution fine-grained perception. It features two key innovations: High-Resolution Perception (HRP) and High-Resolution Enhancement (HRE).
arXiv Detail & Related papers (2025-07-17T08:09:31Z)
- Adaptive Object Detection with ESRGAN-Enhanced Resolution & Faster R-CNN [1.3107174618549584]
Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN) and Faster Region-based Convolutional Neural Networks (Faster R-CNN) are proposed. ESRGAN enhances low-quality images, restoring details and improving clarity. Faster R-CNN then performs accurate object detection on the enhanced images.
arXiv Detail & Related papers (2025-06-10T05:49:54Z)
- Exploring Resolution and Degradation Clues as Self-supervised Signal for Low Quality Object Detection [77.3530907443279]
We propose a novel self-supervised framework to detect objects in degraded low resolution images.
Our method achieves superior performance compared with existing methods under varying degradation conditions.
arXiv Detail & Related papers (2022-08-05T09:36:13Z)
- Hierarchical Similarity Learning for Aliasing Suppression Image Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works, and suppresses aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z)
- Textural-Structural Joint Learning for No-Reference Super-Resolution Image Quality Assessment [59.91741119995321]
We develop a dual stream network to jointly explore the textural and structural information for quality prediction, dubbed TSNet.
By mimicking the human vision system (HVS), which pays more attention to the significant areas of an image, we develop a spatial attention mechanism to make visually sensitive areas more distinguishable.
Experimental results show the proposed TSNet predicts visual quality more accurately than state-of-the-art IQA methods, and demonstrates better consistency with human perception.
arXiv Detail & Related papers (2022-05-27T09:20:06Z)
- Semantically Accurate Super-Resolution Generative Adversarial Networks [2.0454959820861727]
We propose a novel architecture and domain-specific feature loss to increase the performance of semantic segmentation.
We show the proposed approach improves perceived image quality as well as quantitative segmentation accuracy across all prediction classes.
This work demonstrates that jointly considering image-based and task-specific losses can improve the performance of both, and advances the state-of-the-art in semantic-aware super-resolution of aerial imagery.
arXiv Detail & Related papers (2022-05-17T23:05:27Z)
- High Quality Segmentation for Ultra High-resolution Images [72.97958314291648]
We propose the Continuous Refinement Model for the ultra high-resolution segmentation refinement task.
Our proposed method is fast and effective on image segmentation refinement.
arXiv Detail & Related papers (2021-11-29T11:53:06Z)
- Analysis and evaluation of Deep Learning based Super-Resolution algorithms to improve performance in Low-Resolution Face Recognition [0.0]
Super-resolution algorithms may be able to recover the discriminative properties of the subjects involved.
This project aimed at evaluating and adapting different deep neural network architectures for the task of face super-resolution.
Experiments showed that general super-resolution architectures might enhance face verification performance of deep neural networks trained on high-resolution faces.
arXiv Detail & Related papers (2021-01-19T02:41:57Z)
- Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process.
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.