Weakly-Supervised Semantic Segmentation of Circular-Scan,
Synthetic-Aperture-Sonar Imagery
- URL: http://arxiv.org/abs/2401.11313v1
- Date: Sat, 20 Jan 2024 19:55:36 GMT
- Title: Weakly-Supervised Semantic Segmentation of Circular-Scan,
Synthetic-Aperture-Sonar Imagery
- Authors: Isaac J. Sledge, Dominic M. Byrne, Jonathan L. King, Steven H.
Ostertag, Denton L. Woods, James L. Prater, Jermaine L. Kennedy, Timothy M.
Marston, Jose C. Principe
- Abstract summary: We propose a weakly-supervised framework for the semantic segmentation of circular-scan synthetic-aperture-sonar (CSAS) imagery.
We show that our framework performs comparably to nine fully-supervised deep networks.
We achieve state-of-the-art performance when pre-training on natural imagery.
- Score: 3.5534342430133514
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a weakly-supervised framework for the semantic segmentation of
circular-scan synthetic-aperture-sonar (CSAS) imagery. The first part of our
framework is trained in a supervised manner, on image-level labels, to uncover
a set of semi-sparse, spatially-discriminative regions in each image. The
classification uncertainty of each region is then evaluated. Those areas with
the lowest uncertainties are then chosen to be weakly labeled segmentation
seeds, at the pixel level, for the second part of the framework. Each seed
extent is progressively resized according to an unsupervised,
information-theoretic loss with structured-prediction regularizers. This
reshaping process uses multi-scale, adaptively-weighted features to delineate
class-specific transitions in local image content. Content-addressable memories
are inserted at various parts of our framework so that it can leverage features
from previously seen images to improve segmentation performance for related
images.
We evaluate our weakly-supervised framework using real-world CSAS imagery
that contains over ten seafloor classes and ten target classes. We show that
our framework performs comparably to nine fully-supervised deep networks. Our
framework also outperforms eleven of the best weakly-supervised deep networks.
We achieve state-of-the-art performance when pre-training on natural imagery.
The average absolute performance gap to the next-best weakly-supervised network
is well over ten percent for both natural imagery and sonar imagery. This gap
is found to be statistically significant.
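The seed-selection step described above, keeping only the lowest-uncertainty discriminative regions as pixel-level segmentation seeds, can be illustrated with a minimal NumPy sketch. The entropy-based uncertainty measure, the threshold value, and the function names below are illustrative assumptions, not the paper's actual criterion.

```python
import numpy as np

def select_seeds(class_probs, region_masks, max_entropy=0.3):
    """Pick low-uncertainty regions as weakly-labeled segmentation seeds.

    class_probs: (R, C) softmax class probabilities for R candidate regions.
    region_masks: (R, H, W) boolean masks locating each region in the image.
    Returns an (H, W) int map: class index at seed pixels, -1 elsewhere.
    """
    eps = 1e-12
    # Shannon entropy of each region's class distribution (uncertainty proxy).
    entropy = -np.sum(class_probs * np.log(class_probs + eps), axis=1)

    seed_map = np.full(region_masks.shape[1:], -1, dtype=int)
    for r in np.argsort(entropy):      # most confident regions first
        if entropy[r] > max_entropy:   # remaining regions are too uncertain
            break
        seed_map[region_masks[r]] = int(np.argmax(class_probs[r]))
    return seed_map
```

A confident region (peaked class distribution) is seeded with its predicted class, while a near-uniform one is left unlabeled for the second stage to resolve.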
Related papers
- Progressive Feature Self-reinforcement for Weakly Supervised Semantic
Segmentation [55.69128107473125]
We propose a single-stage approach for Weakly Supervised Semantic Segmentation (WSSS) with image-level labels.
We adaptively partition the image content into deterministic regions (e.g., confident foreground and background) and uncertain regions (e.g., object boundaries and misclassified categories) for separate processing.
Building upon this, we introduce a complementary self-enhancement method that constrains the semantic consistency between these confident regions and an augmented image with the same class labels.
arXiv Detail & Related papers (2023-12-14T13:21:52Z)
- A Lightweight Clustering Framework for Unsupervised Semantic Segmentation [28.907274978550493]
Unsupervised semantic segmentation aims to categorize each pixel in an image into a corresponding class without the use of annotated data.
We propose a lightweight clustering framework for unsupervised semantic segmentation.
Our framework achieves state-of-the-art results on PASCAL VOC and MS COCO datasets.
arXiv Detail & Related papers (2023-11-30T15:33:42Z)
- ISLE: A Framework for Image Level Semantic Segmentation Ensemble [5.137284292672375]
Conventional semantic segmentation networks require massive pixel-wise annotated labels to reach state-of-the-art prediction quality.
We propose ISLE, which employs an ensemble of the "pseudo-labels" for a given set of different semantic segmentation techniques on a class-wise level.
We reach up to 2.4% improvement over ISLE's individual components.
arXiv Detail & Related papers (2023-03-14T13:36:36Z)
- Semi-supervised domain adaptation with CycleGAN guided by a downstream task loss [4.941630596191806]
Domain adaptation is of huge interest as labeling is an expensive and error-prone task.
Image-to-image approaches can be used to mitigate the shift in the input.
We propose a "task aware" version of a GAN in an image-to-image domain adaptation approach.
arXiv Detail & Related papers (2022-08-18T13:13:30Z)
- Dense Siamese Network [86.23741104851383]
We present Dense Siamese Network (DenseSiam), a simple unsupervised learning framework for dense prediction tasks.
It learns visual representations by maximizing the similarity between two views of one image with two types of consistency, i.e., pixel consistency and region consistency.
It surpasses state-of-the-art segmentation methods by 2.1 mIoU with 28% training costs.
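The pixel-consistency idea, maximizing the similarity between spatially corresponding embeddings from two augmented views, can be sketched as a negative-cosine-similarity loss. This is a generic illustration under that assumption, not DenseSiam's exact implementation, and the region-consistency term is omitted.

```python
import numpy as np

def pixel_consistency_loss(view_a, view_b):
    """Negative mean cosine similarity between per-pixel embeddings.

    view_a, view_b: (H, W, D) embedding maps from two augmented views,
    spatially aligned so that pixel (i, j) corresponds in both.
    Lower loss means the two views agree pixel-by-pixel.
    """
    eps = 1e-12
    a = view_a / (np.linalg.norm(view_a, axis=-1, keepdims=True) + eps)
    b = view_b / (np.linalg.norm(view_b, axis=-1, keepdims=True) + eps)
    cosine = np.sum(a * b, axis=-1)  # (H, W) per-pixel similarity
    return -float(cosine.mean())     # minimized when the views match
```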
arXiv Detail & Related papers (2022-03-21T15:55:23Z)
- Learning Self-Supervised Low-Rank Network for Single-Stage Weakly and Semi-Supervised Semantic Segmentation [119.009033745244]
This paper presents a Self-supervised Low-Rank Network (SLRNet) for single-stage weakly supervised semantic segmentation (WSSS) and semi-supervised semantic segmentation (SSSS).
SLRNet uses cross-view self-supervision, that is, it simultaneously predicts several attentive LR representations from different views of an image to learn precise pseudo-labels.
Experiments on the Pascal VOC 2012, COCO, and L2ID datasets demonstrate that our SLRNet outperforms both state-of-the-art WSSS and SSSS methods with a variety of different settings.
arXiv Detail & Related papers (2022-03-19T09:19:55Z)
- A Simple Baseline for Zero-shot Semantic Segmentation with Pre-trained Vision-language Model [61.58071099082296]
It is unclear how to make zero-shot recognition work well on broader vision problems, such as object detection and semantic segmentation.
In this paper, we target for zero-shot semantic segmentation, by building it on an off-the-shelf pre-trained vision-language model, i.e., CLIP.
Our experimental results show that this simple framework surpasses previous state-of-the-arts by a large margin.
arXiv Detail & Related papers (2021-12-29T18:56:18Z)
- Unsupervised Image Segmentation by Mutual Information Maximization and Adversarial Regularization [7.165364364478119]
We propose a novel, fully unsupervised semantic segmentation method, called Information Maximization and Adversarial Regularization (InMARS).
Inspired by human perception, which parses a scene into perceptual groups, our proposed approach first partitions an input image into meaningful regions (also known as superpixels).
Next, it utilizes Mutual-Information-Maximization followed by an adversarial training strategy to cluster these regions into semantically meaningful classes.
Our experiments demonstrate that our method achieves state-of-the-art performance on two commonly used unsupervised semantic segmentation datasets.
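The Mutual-Information-Maximization step, clustering paired region views so that both receive the same, evenly used cluster assignment, can be sketched with an IIC-style objective over soft assignments. This is a generic illustration, not the InMARS implementation, and the adversarial training term is omitted.

```python
import numpy as np

def mutual_information(p_a, p_b):
    """Mutual information between paired soft cluster assignments.

    p_a, p_b: (N, K) softmax cluster probabilities for N region pairs
    (e.g., a superpixel and a perturbed view of it). Maximizing this
    value pushes paired regions toward the same clusters while keeping
    the cluster marginals close to uniform.
    """
    eps = 1e-12
    joint = p_a.T @ p_b / p_a.shape[0]     # (K, K) joint distribution
    joint = (joint + joint.T) / 2.0        # symmetrize the estimate
    pi = joint.sum(axis=1, keepdims=True)  # marginal over view a
    pj = joint.sum(axis=0, keepdims=True)  # marginal over view b
    return float(np.sum(joint * (np.log(joint + eps)
                                 - np.log(pi + eps)
                                 - np.log(pj + eps))))
```

For perfectly correlated one-hot assignments over K clusters, this returns ln K, its maximum for that cluster count.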
arXiv Detail & Related papers (2021-07-01T18:36:27Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Phase Consistent Ecological Domain Adaptation [76.75730500201536]
We focus on the task of semantic segmentation, where annotated synthetic data are plentiful but annotating real data is laborious.
The first criterion, inspired by visual psychophysics, is that the map between the two image domains be phase-preserving.
The second criterion aims to leverage ecological statistics, or regularities in the scene which are manifest in any image of it, regardless of the characteristics of the illuminant or the imaging sensor.
arXiv Detail & Related papers (2020-04-10T06:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.