SRCNet: Seminal Representation Collaborative Network for Marine Oil
Spill Segmentation
- URL: http://arxiv.org/abs/2304.14500v1
- Date: Mon, 17 Apr 2023 13:23:03 GMT
- Title: SRCNet: Seminal Representation Collaborative Network for Marine Oil
Spill Segmentation
- Authors: Fang Chen, Heiko Balzter, Peng Ren and Huiyu Zhou
- Abstract summary: We propose an effective oil spill image segmentation network named SRCNet.
It is constructed with a pair of deep neural nets with the collaboration of the seminal representation that describes SAR images.
Our proposed SRCNet performs effective oil spill segmentation in an economical and efficient manner.
- Score: 18.96012241344086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effective oil spill segmentation in Synthetic Aperture Radar (SAR) images is
critical for marine oil pollution cleanup, and proper image representation is
helpful for accurate image segmentation. In this paper, we propose an effective
oil spill image segmentation network named SRCNet by leveraging SAR image
representation and the training for oil spill segmentation simultaneously.
Specifically, our proposed segmentation network is constructed with a pair of
deep neural nets with the collaboration of the seminal representation that
describes SAR images, where one deep neural net is the generative net which
strives to produce oil spill segmentation maps, and the other is the
discriminative net which tries its best to distinguish between the produced and
the true segmentations; the two nets thus form a two-player game. Particularly,
the seminal representation exploited in our proposed SRCNet originates from SAR
imagery, modelling the internal characteristics of SAR images. Thus, in
the training process, the collaborated seminal representation empowers the
mapped generative net to produce accurate oil spill segmentation maps
efficiently with a small amount of training data, helping the discriminative
net reach its optimal solution quickly. Therefore, our proposed
SRCNet performs effective oil spill segmentation in an economical and efficient
manner. Additionally, to increase the segmentation capability of the proposed
segmentation network in terms of accurately delineating oil spill details in
SAR images, a regularisation term that penalises the segmentation loss is
devised. This encourages our proposed SRCNet to segment oil spill areas
accurately in SAR images. Empirical evaluations with different metrics
validate the effectiveness of our proposed SRCNet for oil spill image
segmentation.
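The abstract describes a GAN-style setup: a generative net produces segmentation maps, a discriminative net scores them against true masks, and a regularisation term weights the pixel-wise segmentation loss. A minimal sketch of such a combined objective is below; the function names, the toy 8x8 data, and the weighting factor `lam` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over all elements, with clipping for stability."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def generator_loss(fake_score, pred_mask, true_mask, lam=0.1):
    """Generator objective: an adversarial term (fool the discriminator into
    outputting 1 on generated masks) plus a pixel-wise segmentation loss,
    weighted by `lam` as a stand-in for the paper's regularisation term."""
    adv = bce(fake_score, np.ones_like(fake_score))
    seg = bce(pred_mask, true_mask)
    return adv + lam * seg

def discriminator_loss(real_score, fake_score):
    """Discriminator objective: label true masks as 1, generated masks as 0."""
    return bce(real_score, np.ones_like(real_score)) + \
           bce(fake_score, np.zeros_like(fake_score))

# Toy example standing in for an 8x8 SAR patch and its oil spill mask.
rng = np.random.default_rng(0)
true_mask = (rng.random((8, 8)) > 0.5).astype(float)  # ground-truth binary mask
pred_mask = rng.random((8, 8))          # stand-in for the generator's output
real_score = np.array([0.9])            # stand-in discriminator score on a true mask
fake_score = np.array([0.2])            # stand-in discriminator score on a fake mask

g_loss = generator_loss(fake_score, pred_mask, true_mask)
d_loss = discriminator_loss(real_score, fake_score)
```

In the two-player game, the generator descends on `g_loss` while the discriminator descends on `d_loss`; the segmentation term inside the generator loss is what ties the adversarial training to pixel-accurate oil spill delineation.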
Related papers
- Deep Attention Unet: A Network Model with Global Feature Perception
Ability [12.087640144194246]
This paper proposes a new UNet-based image segmentation algorithm built on a channel self-attention mechanism and residual connections.
In experiments, the new network improved mIoU by 2.48% compared to the traditional UNet on the FoodNet dataset.
arXiv Detail & Related papers (2023-04-21T09:12:29Z)
- DGNet: Distribution Guided Efficient Learning for Oil Spill Image
Segmentation [18.43215454505496]
Successful implementation of oil spill segmentation in Synthetic Aperture Radar (SAR) images is vital for marine environmental protection.
We develop an effective segmentation framework named DGNet, which performs oil spill segmentation by incorporating the intrinsic distribution of backscatter values in SAR images.
We evaluate the segmentation performance of our proposed DGNet with different metrics, and experimental evaluations demonstrate its effective segmentations.
arXiv Detail & Related papers (2022-12-19T18:23:50Z)
- CRCNet: Few-shot Segmentation with Cross-Reference and Region-Global
Conditional Networks [59.85183776573642]
Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images.
We propose a Cross-Reference and Local-Global Networks (CRCNet) for few-shot segmentation.
Our network can better find the co-occurrent objects in the two images with a cross-reference mechanism.
arXiv Detail & Related papers (2022-08-23T06:46:18Z)
- A Dual-fusion Semantic Segmentation Framework With GAN For SAR Images [10.147351262526282]
A network based on the widely used encoder-decoder architecture is proposed to accomplish synthetic aperture radar (SAR) image segmentation.
Given the better representation capability of optical images, we propose to enrich SAR images with generated optical images via a generative adversarial network (GAN) trained on numerous SAR and optical images.
arXiv Detail & Related papers (2022-06-02T15:22:29Z)
- Transformer-based SAR Image Despeckling [53.99620005035804]
We introduce a transformer-based network for SAR image despeckling.
The proposed despeckling network comprises a transformer-based encoder which allows the network to learn global dependencies between different image regions.
Experiments show that the proposed method achieves significant improvements over traditional and convolutional neural network-based despeckling methods.
arXiv Detail & Related papers (2022-01-23T20:09:01Z)
- Oil Spill SAR Image Segmentation via Probability Distribution Modelling [18.72207562693259]
This work aims to develop an effective segmentation method which addresses marine oil spill identification in SAR images.
We revisit the SAR imaging mechanism in order to attain the probability distribution representation of oil spill SAR images.
We then exploit the distribution representation to formulate the segmentation energy functional, by which oil spill characteristics are incorporated.
arXiv Detail & Related papers (2021-12-17T17:22:29Z)
- Two-Stage Self-Supervised Cycle-Consistency Network for Reconstruction
of Thin-Slice MR Images [62.4428833931443]
The thick-slice magnetic resonance (MR) images are often structurally blurred in coronal and sagittal views.
Deep learning has shown great potential to reconstruct the high-resolution (HR) thin-slice MR images from those low-resolution (LR) cases.
We propose a novel Two-stage Self-supervised Cycle-consistency Network (TSCNet) for MR slice reconstruction.
arXiv Detail & Related papers (2021-06-29T13:29:18Z)
- Pairwise Relation Learning for Semi-supervised Gland Segmentation [90.45303394358493]
We propose a pairwise relation-based semi-supervised (PRS2) model for gland segmentation on histology images.
This model consists of a segmentation network (S-Net) and a pairwise relation network (PR-Net).
We evaluate our model against five recent methods on the GlaS dataset and three recent methods on the CRAG dataset.
arXiv Detail & Related papers (2020-08-06T15:02:38Z)
- Image Segmentation Using Hybrid Representations [2.414172101538764]
We introduce an end-to-end U-Net based network called DU-Net for medical image segmentation.
SC are translation invariant and Lipschitz continuous to deformations, which helps DU-Net outperform conventional CNN counterparts.
The proposed method shows remarkable improvement over the basic U-Net with performance competitive to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-15T13:07:35Z) - CRNet: Cross-Reference Networks for Few-Shot Segmentation [59.85183776573642]
Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images.
With a cross-reference mechanism, our network can better find the co-occurrent objects in the two images.
Experiments on the PASCAL VOC 2012 dataset show that our network achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-03-24T04:55:43Z) - Weakly-Supervised Semantic Segmentation by Iterative Affinity Learning [86.45526827323954]
Weakly-supervised semantic segmentation is a challenging task as no pixel-wise label information is provided for training.
We propose an iterative algorithm to learn such pairwise relations.
We show that the proposed algorithm performs favorably against the state-of-the-art methods.
arXiv Detail & Related papers (2020-02-19T10:32:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.