Underwater object classification combining SAS and transferred
optical-to-SAS Imagery
- URL: http://arxiv.org/abs/2304.11875v1
- Date: Mon, 24 Apr 2023 07:42:16 GMT
- Title: Underwater object classification combining SAS and transferred
optical-to-SAS Imagery
- Authors: Avi Abu and Roee Diamant
- Abstract summary: We propose a multi-modal combination to discriminate between man-made targets and objects such as rocks or litter.
We offer a novel classification algorithm that overcomes the problem of intensity and object formation differences between the two modalities.
Results from 7,052 pairs of SAS and optical images collected during sea experiments show improved classification performance.
- Score: 12.607649347048442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combining synthetic aperture sonar (SAS) imagery with optical images for
underwater object classification has the potential to overcome challenges such
as water clarity, the stability of the optical image analysis platform, and
strong reflections from the seabed for sonar-based classification. In this
work, we propose this type of multi-modal combination to discriminate between
man-made targets and objects such as rocks or litter. We offer a novel
classification algorithm that overcomes the problem of intensity and object
formation differences between the two modalities. To this end, we develop a
novel set of geometrical shape descriptors that takes into account the
geometrical relation between the object's shadow and highlight. Results from
7,052 pairs of SAS and optical images collected during several sea experiments
show improved classification performance over the state of the art in
discriminating between different types of underwater objects. For
reproducibility, we share our database.
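The abstract's shadow-highlight descriptors can be illustrated with a toy sketch. The function below is an assumption of what such descriptors might look like (region areas, centroid offset, and axis orientation), not the authors' actual feature set:

```python
import numpy as np

def shadow_highlight_descriptors(highlight_mask, shadow_mask):
    """Toy geometric descriptors relating an object's highlight and
    shadow regions in a segmented sonar image.  Both inputs are
    boolean 2-D masks.  Illustrative only, not the paper's features."""
    hy, hx = np.nonzero(highlight_mask)
    sy, sx = np.nonzero(shadow_mask)
    # Area ratio: a man-made target tends to cast a shadow whose size
    # is consistent with its highlight extent.
    area_ratio = len(sy) / max(len(hy), 1)
    # Centroid offset between the two regions (rows, columns).
    dy = sy.mean() - hy.mean()
    dx = sx.mean() - hx.mean()
    # Orientation of the highlight-to-shadow axis, in radians.
    angle = np.arctan2(dy, dx)
    return np.array([area_ratio, dy, dx, angle])
```

Such descriptors are intensity-free, which is one plausible way to sidestep the intensity differences between SAS and optical modalities that the abstract mentions.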
Related papers
- UW-SDF: Exploiting Hybrid Geometric Priors for Neural SDF Reconstruction from Underwater Multi-view Monocular Images [63.32490897641344]
We propose a framework for reconstructing target objects from multi-view underwater images based on neural SDF.
We introduce hybrid geometric priors to optimize the reconstruction process, markedly enhancing the quality and efficiency of neural SDF reconstruction.
arXiv Detail & Related papers (2024-10-10T16:33:56Z) - Separated Attention: An Improved Cycle GAN Based Under Water Image Enhancement Method [0.0]
We have utilized the cycle consistent learning technique of the state-of-the-art Cycle GAN model with modification in the loss function.
We trained the Cycle GAN model with the modified loss functions on the benchmarked Enhancing Underwater Visual Perception dataset.
The enhanced images outperform those from conventional models and further benefit underwater navigation, pose estimation, saliency prediction, object detection, and tracking.
arXiv Detail & Related papers (2024-04-11T11:12:06Z) - Dual Adversarial Resilience for Collaborating Robust Underwater Image
Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves on average 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z) - PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with
Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing underwater images is a widespread concern, and the task of underwater image enhancement (UIE) has emerged to meet it.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z) - SUCRe: Leveraging Scene Structure for Underwater Color Restoration [1.9490160607392462]
We introduce SUCRe, a novel method that exploits the scene's 3D structure for underwater color restoration.
We conduct extensive quantitative and qualitative analyses of our approach in a variety of scenarios ranging from natural light to deep-sea environments.
arXiv Detail & Related papers (2022-12-18T16:53:13Z) - UIF: An Objective Quality Assessment for Underwater Image Enhancement [17.145844358253164]
We propose an Underwater Image Fidelity (UIF) metric for objective evaluation of enhanced underwater images.
By exploiting the statistical properties of these images, we extract naturalness-related, sharpness-related, and structure-related features.
Experimental results confirm that the proposed UIF outperforms a variety of underwater and general-purpose image quality metrics.
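A sharpness-related statistic of the kind such metrics build on can be sketched as follows; this is a generic mean-gradient-magnitude measure, not the UIF metric's actual feature:

```python
import numpy as np

def sharpness_feature(gray):
    """Illustrative sharpness-related statistic (not the exact UIF
    feature): mean gradient magnitude of a grayscale image, which
    rises as edges become crisper after enhancement."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.hypot(gx, gy).mean())
```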
arXiv Detail & Related papers (2022-05-19T08:43:47Z) - Comparison of convolutional neural networks for cloudy optical images
reconstruction from single or multitemporal joint SAR and optical images [0.21079694661943604]
We focus on evaluating convolutional neural networks that jointly use SAR and optical images to retrieve the missing content in a single cloud-contaminated optical image.
We propose a simple framework that eases the creation of datasets for the training of deep nets targeting optical image reconstruction.
We show how space partitioning data structures help to query samples in terms of cloud coverage, relative acquisition date, pixel validity and relative proximity between SAR and optical images.
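The idea of using a partitioning structure to query samples by attributes such as cloud coverage and acquisition gap can be sketched with a minimal bucket index; the class and attribute names below are illustrative assumptions, not the paper's implementation:

```python
from collections import defaultdict

class SampleIndex:
    """Minimal bucket (grid) index sketching how space-partitioning
    structures speed up attribute queries over SAR/optical sample
    pairs.  Illustrative, not the paper's actual data structure."""

    def __init__(self, cloud_step=0.1, day_step=5):
        self.cloud_step = cloud_step
        self.day_step = day_step
        self.buckets = defaultdict(list)

    def _key(self, cloud, days):
        # Quantize attributes into bucket coordinates.
        return (int(cloud / self.cloud_step), int(days / self.day_step))

    def add(self, sample_id, cloud, days):
        self.buckets[self._key(cloud, days)].append((sample_id, cloud, days))

    def query(self, max_cloud, max_days):
        """Return sample ids with cloud coverage <= max_cloud and an
        acquisition gap <= max_days, visiting only buckets that can
        contain matches instead of scanning every sample."""
        out = []
        for ci in range(int(max_cloud / self.cloud_step) + 1):
            for di in range(int(max_days / self.day_step) + 1):
                for sid, c, d in self.buckets.get((ci, di), []):
                    if c <= max_cloud and d <= max_days:
                        out.append(sid)
        return out
```

The payoff is that a query touches only the buckets overlapping the requested attribute ranges, which is the same pruning principle k-d trees and similar structures exploit.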
arXiv Detail & Related papers (2022-04-01T13:31:23Z) - Image-to-Height Domain Translation for Synthetic Aperture Sonar [3.2662392450935416]
In this work, we focus on collection geometry with respect to isotropic and anisotropic textures.
The low grazing angle of the collection geometry, combined with orientation of the sonar path relative to anisotropic texture, poses a significant challenge for image-alignment and other multi-view scene understanding frameworks.
arXiv Detail & Related papers (2021-12-12T19:53:14Z) - Underwater Image Restoration via Contrastive Learning and a Real-world
Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
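Contrastive mutual-information maximization of this kind is commonly instantiated as an InfoNCE-style loss; the NumPy sketch below shows that general form and is an assumption, not the paper's exact objective:

```python
import numpy as np

def info_nce(z_raw, z_restored, temperature=0.1):
    """Illustrative InfoNCE loss: each raw-image embedding should be
    closest to the embedding of its own restored image (positives on
    the diagonal), which maximizes a lower bound on their mutual
    information.  A sketch of the technique, not the paper's loss."""
    # L2-normalize embeddings so dot products are cosine similarities.
    a = z_raw / np.linalg.norm(z_raw, axis=1, keepdims=True)
    b = z_restored / np.linalg.norm(z_restored, axis=1, keepdims=True)
    logits = a @ b.T / temperature               # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the matching pairs (diagonal) as targets.
    return float(-np.mean(np.diag(log_probs)))
```

Correctly matched raw/restored pairs drive the loss toward zero, while mismatched pairs yield a large loss, which is how the objective pushes the restorer to preserve content.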
arXiv Detail & Related papers (2021-06-20T16:06:26Z) - A Parallel Down-Up Fusion Network for Salient Object Detection in
Optical Remote Sensing Images [82.87122287748791]
We propose a novel Parallel Down-up Fusion network (PDF-Net) for salient object detection in optical remote sensing images (RSIs)
It takes full advantage of the in-path low- and high-level features and cross-path multi-resolution features to distinguish diversely scaled salient objects and suppress the cluttered backgrounds.
Experiments on the ORSSD dataset demonstrate that the proposed network is superior to the state-of-the-art approaches both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-10-02T05:27:57Z) - Single-shot Hyperspectral-Depth Imaging with Learned Diffractive Optics [72.9038524082252]
We propose a compact single-shot monocular hyperspectral-depth (HS-D) imaging method.
Our method uses a diffractive optical element (DOE), the point spread function of which changes with respect to both depth and spectrum.
To facilitate learning the DOE, we present a first HS-D dataset by building a benchtop HS-D imager.
arXiv Detail & Related papers (2020-09-01T14:19:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.