An Underwater Image Semantic Segmentation Method Focusing on Boundaries
and a Real Underwater Scene Semantic Segmentation Dataset
- URL: http://arxiv.org/abs/2108.11727v1
- Date: Thu, 26 Aug 2021 12:05:08 GMT
- Title: An Underwater Image Semantic Segmentation Method Focusing on Boundaries
and a Real Underwater Scene Semantic Segmentation Dataset
- Authors: Zhiwei Ma, Haojie Li, Zhihui Wang, Dan Yu, Tianyi Wang, Yingshuang Gu,
Xin Fan, and Zhongxuan Luo
- Abstract summary: We label and establish the first underwater semantic segmentation dataset of real scenes (DUT-USEG: DUT Underwater Segmentation Dataset).
We propose a semi-supervised underwater semantic segmentation network focusing on boundaries (US-Net: Underwater Segmentation Network).
Experiments show that the proposed method improves performance by 6.7% on three categories (holothurian, echinus, starfish) in the DUT-USEG dataset and achieves state-of-the-art results.
- Score: 41.842352295729555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of underwater object grabbing technology,
high-accuracy underwater object recognition and segmentation have become a
challenge. Existing underwater object detection technology can only give the
general position of an object and cannot provide more detailed information,
such as the outline of the object, which seriously limits grabbing efficiency.
To address this problem, we label and establish the first underwater semantic
segmentation dataset of real scenes (DUT-USEG: DUT Underwater Segmentation
Dataset). The DUT-USEG dataset includes 6617 images, 1487 of which have
semantic segmentation and instance segmentation annotations, and the remaining
5130 images have object detection box annotations. Based on this dataset, we
propose a semi-supervised underwater semantic segmentation network focusing on
boundaries (US-Net: Underwater Segmentation Network). By designing a pseudo
label generator and a boundary detection subnetwork, this network learns fine
boundaries between underwater objects and the background, and improves
segmentation quality in boundary areas. Experiments show that the proposed
method improves performance by 6.7% on three categories (holothurian, echinus,
starfish) in the DUT-USEG dataset and achieves state-of-the-art results. The
DUT-USEG dataset will be released at https://github.com/baxiyi/DUT-USEG.
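The abstract describes a boundary detection subnetwork that learns the boundaries between objects and the background. The paper's exact formulation is not given here, but a common way to derive boundary supervision from a semantic mask is a neighbour-difference map; the sketch below (function name and the 4-neighbour rule are illustrative assumptions, not the paper's code) marks every pixel whose horizontal or vertical neighbour carries a different class label:

```python
import numpy as np

def boundary_map(mask: np.ndarray) -> np.ndarray:
    """Mark pixels whose 4-neighbourhood contains a different class label."""
    boundary = np.zeros_like(mask, dtype=bool)
    # Compare each pixel with its right neighbour; mark both sides of a change.
    diff_h = mask[:, 1:] != mask[:, :-1]
    boundary[:, 1:] |= diff_h
    boundary[:, :-1] |= diff_h
    # Same comparison with the bottom neighbour.
    diff_v = mask[1:, :] != mask[:-1, :]
    boundary[1:, :] |= diff_v
    boundary[:-1, :] |= diff_v
    return boundary

# Toy 2-class mask: background 0, a 3x3 object of class 1.
mask = np.zeros((5, 5), dtype=np.int64)
mask[1:4, 1:4] = 1
print(boundary_map(mask).astype(int))
```

A map like this can serve as the target for a boundary branch, or as a weight map that emphasizes boundary pixels in the segmentation loss.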
Related papers
- Diving into Underwater: Segment Anything Model Guided Underwater Salient Instance Segmentation and A Large-scale Dataset [60.14089302022989]
Underwater vision tasks often suffer from low segmentation accuracy due to complex underwater conditions.
We construct the first large-scale underwater salient instance segmentation dataset (USIS10K).
We propose an Underwater Salient Instance architecture based on Segment Anything Model (USIS-SAM) specifically for the underwater domain.
arXiv Detail & Related papers (2024-06-10T06:17:33Z)
- Navya3DSeg -- Navya 3D Semantic Segmentation Dataset & split generation for autonomous vehicles [63.20765930558542]
3D semantic data are useful for core perception tasks such as obstacle detection and ego-vehicle localization.
We propose a new dataset, Navya 3D Segmentation (Navya3DSeg), with a diverse label space corresponding to a large-scale, production-grade operational domain.
It contains 23 labeled sequences and 25 supplementary sequences without labels, designed to explore self-supervised and semi-supervised semantic segmentation benchmarks on point clouds.
arXiv Detail & Related papers (2023-02-16T13:41:19Z)
- A Dataset with Multibeam Forward-Looking Sonar for Underwater Object Detection [0.0]
Multibeam forward-looking sonar (MFLS) plays an important role in underwater detection.
There are several challenges to the research on underwater object detection with MFLS.
We present a novel dataset, consisting of over 9000 MFLS images captured using Tritech Gemini 1200ik sonar.
arXiv Detail & Related papers (2022-12-01T08:26:03Z)
- An Interpretable Deep Semantic Segmentation Method for Earth Observation [0.7499722271664145]
We introduce a prototype-based interpretable deep semantic segmentation (IDSS) method.
Its parameters are in orders of magnitude less than the number of parameters used by deep networks such as U-Net and are clearly interpretable by humans.
Results have demonstrated that IDSS could surpass other algorithms, including U-Net, in terms of IoU (Intersection over Union) total water and Recall total water.
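IoU here is the standard Intersection over Union metric for segmentation. For a single binary class (such as the water class mentioned above), it can be computed as in the hypothetical helper below (not the paper's code):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union for binary masks (1 = class, 0 = background)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0  # both empty: perfect agreement

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(iou(pred, target))  # 2 shared pixels / 4 in the union -> 0.5
```

Multi-class benchmarks typically report this per class and average the results (mean IoU).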
arXiv Detail & Related papers (2022-10-23T18:46:44Z)
- ATLANTIS: A Benchmark for Semantic Segmentation of Waterbody Images [11.694400268453366]
We present ATLANTIS, a new benchmark for semantic segmentation of waterbodies and related objects.
ATLANTIS consists of 5,195 images of waterbodies, as well as high quality pixel-level manual annotations of 56 classes of objects.
A novel deep neural network, AQUANet, is developed for waterbody semantic segmentation by processing the aquatic and non-aquatic regions in two different paths.
arXiv Detail & Related papers (2021-11-22T22:56:14Z)
- Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with Self-Supervised Depth Estimation [94.16816278191477]
We present a framework for semi-supervised and domain-adaptive semantic segmentation.
It is enhanced by self-supervised monocular depth estimation trained only on unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset.
arXiv Detail & Related papers (2021-08-28T01:33:38Z)
- SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation [111.61261419566908]
Deep neural networks (DNNs) are usually trained on a closed set of semantic classes.
They are ill-equipped to handle previously-unseen objects.
Detecting and localizing such objects is crucial for safety-critical applications such as perception for automated driving.
arXiv Detail & Related papers (2021-04-30T07:58:19Z)
- Saliency Enhancement using Gradient Domain Edges Merging [65.90255950853674]
We develop a method to merge the edges with the saliency maps to improve the performance of the saliency.
This leads to our proposed saliency enhancement using edges (SEE), with an average improvement of at least 3.4 times on the DUT-OMRON dataset.
The SEE algorithm is split into two parts: SEE-Pre for preprocessing and SEE-Post for postprocessing.
arXiv Detail & Related papers (2020-02-11T14:04:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.