Learning Ordinality in Semantic Segmentation
- URL: http://arxiv.org/abs/2407.20959v1
- Date: Tue, 30 Jul 2024 16:36:15 GMT
- Title: Learning Ordinality in Semantic Segmentation
- Authors: Rafael Cristino, Ricardo P. M. Cruz, Jaime S. Cardoso
- Abstract summary: Conventional deep learning models do not take advantage of ordinal relations that might exist in the domain at hand; for example, it is known that the pupil is inside the iris, and the lane markings are inside the road.
This paper proposes spatial ordinal segmentation methods, which treat each pixel as an observation dependent on its neighborhood context and promote ordinal spatial consistency.
- Score: 3.017721041662511
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semantic segmentation consists of predicting a semantic label for each image pixel. Conventional deep learning models do not take advantage of ordinal relations that might exist in the domain at hand. For example, it is known that the pupil is inside the iris, and the lane markings are inside the road. Such domain knowledge can be employed as constraints to make the model more robust. The current literature on this topic has explored pixel-wise ordinal segmentation methods, which treat each pixel as an independent observation and promote ordinality in its representation. This paper proposes novel spatial ordinal segmentation methods, which take advantage of the structured image space by considering each pixel as an observation dependent on its neighborhood context to also promote ordinal spatial consistency. When evaluated with five biomedical datasets and multiple configurations of autonomous driving datasets, ordinal methods resulted in more ordinally-consistent models, with substantial improvements in ordinal metrics and some increase in the Dice coefficient. It was also shown that the incorporation of ordinal consistency results in models with better generalization abilities.
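As a concrete illustration of the two families of methods the abstract contrasts, below is a minimal, hypothetical PyTorch sketch (not the authors' code): a pixel-wise ordinal loss that encodes K ordered classes as K-1 cumulative "rank greater than r" probabilities, plus a simple spatial penalty that discourages neighbouring pixels from differing by more than one ordinal level. The function names, the soft-rank formulation, and the 0.1 weighting are illustrative assumptions.

```python
# Hypothetical sketch, NOT the paper's implementation: pixel-wise ordinal loss
# via K-1 cumulative "rank > r" targets, plus a simple spatial penalty that
# discourages neighbouring pixels from jumping more than one ordinal level.
import torch
import torch.nn.functional as F

def cumulative_targets(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """labels: (B, H, W) integer ranks in [0, K-1] -> (B, K-1, H, W) binary targets."""
    ranks = torch.arange(1, num_classes, device=labels.device).view(1, -1, 1, 1)
    return (labels.unsqueeze(1) >= ranks).float()

def pixelwise_ordinal_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """logits: (B, K-1, H, W) cumulative logits from a segmentation backbone."""
    targets = cumulative_targets(labels, logits.shape[1] + 1)
    return F.binary_cross_entropy_with_logits(logits, targets)

def spatial_ordinal_penalty(logits: torch.Tensor) -> torch.Tensor:
    """Penalise ordinal jumps larger than one level between 4-neighbours."""
    expected_rank = torch.sigmoid(logits).sum(dim=1)   # soft rank in [0, K-1]
    dh = (expected_rank[:, 1:, :] - expected_rank[:, :-1, :]).abs()
    dw = (expected_rank[:, :, 1:] - expected_rank[:, :, :-1]).abs()
    return F.relu(dh - 1).mean() + F.relu(dw - 1).mean()

if __name__ == "__main__":
    # Random tensors stand in for backbone outputs: B=2, K=4 classes, 64x64 images.
    logits = torch.randn(2, 3, 64, 64)                 # K-1 = 3 cumulative maps
    labels = torch.randint(0, 4, (2, 64, 64))
    loss = pixelwise_ordinal_loss(logits, labels) + 0.1 * spatial_ordinal_penalty(logits)
    print(loss.item())
```

In a real pipeline the cumulative logits would come from a segmentation network with K-1 output channels; random tensors and the 0.1 weight are used here only to keep the sketch self-contained.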
Related papers
- EmerDiff: Emerging Pixel-level Semantic Knowledge in Diffusion Models [52.3015009878545]
We develop an image segmentor capable of generating fine-grained segmentation maps without any additional training.
Our framework identifies semantic correspondences between image pixels and spatial locations of low-dimensional feature maps.
In extensive experiments, the produced segmentation maps are demonstrated to be well delineated and capture detailed parts of the images.
arXiv Detail & Related papers (2024-01-22T07:34:06Z) - Bayesian Unsupervised Disentanglement of Anatomy and Geometry for Deep Groupwise Image Registration [50.62725807357586]
This article presents a general Bayesian learning framework for multi-modal groupwise image registration.
We propose a novel hierarchical variational auto-encoding architecture to realise the inference procedure of the latent variables.
Experiments were conducted to validate the proposed framework, including four different datasets from cardiac, brain, and abdominal medical images.
arXiv Detail & Related papers (2024-01-04T08:46:39Z) - Learning Semantic Segmentation with Query Points Supervision on Aerial Images [57.09251327650334]
We present a weakly supervised learning algorithm for training semantic segmentation models.
Our proposed approach performs accurate semantic segmentation and improves efficiency by significantly reducing the cost and time required for manual annotation.
arXiv Detail & Related papers (2023-09-11T14:32:04Z) - Unsupervised Domain Adaptation for Medical Image Segmentation via
Feature-space Density Matching [0.0]
This paper presents an unsupervised domain adaptation approach for semantic segmentation.
We match the target data distribution to the source in the feature space, particularly when the number of target samples is limited.
We demonstrate the efficacy of our proposed approach on two datasets: multisite prostate MRI and histopathology images.
arXiv Detail & Related papers (2023-05-09T22:24:46Z) - Stochastic Segmentation with Conditional Categorical Diffusion Models [3.8168879948759953]
We propose a conditional categorical diffusion model (CCDM) for semantic segmentation based on Denoising Diffusion Probabilistic Models.
Our results show that CCDM achieves state-of-the-art performance on LIDC, and outperforms established baselines on the classical segmentation dataset Cityscapes.
arXiv Detail & Related papers (2023-03-15T19:16:47Z) - CORE: Learning Consistent Ordinal REpresentations for Image Ordinal
Estimation [35.39143939072549]
This paper proposes learning intrinsic Consistent Ordinal REpresentations (CORE) from ordinal relations residing in ground-truth labels.
CORE can accurately construct an ordinal latent space and significantly enhance existing deep ordinal regression methods to achieve better results.
arXiv Detail & Related papers (2023-01-15T15:42:26Z) - SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic
Segmentation [52.62441404064957]
Domain adaptive semantic segmentation attempts to make satisfactory dense predictions on an unlabeled target domain by utilizing the model trained on a labeled source domain.
Many methods tend to alleviate noisy pseudo labels; however, they ignore intrinsic connections among cross-domain pixels with similar semantic concepts.
We propose Semantic-Guided Pixel Contrast (SePiCo), a novel one-stage adaptation framework that highlights the semantic concepts of individual pixels.
arXiv Detail & Related papers (2022-04-19T11:16:29Z) - BI-GCN: Boundary-Aware Input-Dependent Graph Convolution Network for
Biomedical Image Segmentation [21.912509900254364]
We apply graph convolution to the segmentation task and propose an improved Laplacian.
Our method outperforms the state-of-the-art approaches on the segmentation of polyps in colonoscopy images and of the optic disc and optic cup in colour fundus images.
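For readers unfamiliar with the Laplacian used in graph convolution, here is a hedged background sketch in PyTorch of the standard symmetric normalised Laplacian, L = I - D^{-1/2} A D^{-1/2}; the paper's improved, input-dependent variant is not reproduced, and the function name is an assumption.

```python
# Background sketch (assumed, generic): the symmetric normalised graph Laplacian
# commonly used by graph-convolution layers, L = I - D^{-1/2} A D^{-1/2}.
# The paper's improved, input-dependent variant is not reproduced here.
import torch

def normalized_laplacian(adj: torch.Tensor) -> torch.Tensor:
    """adj: (N, N) non-negative adjacency matrix -> (N, N) normalised Laplacian."""
    deg = adj.sum(dim=1)
    d_inv_sqrt = torch.where(deg > 0, deg.pow(-0.5), torch.zeros_like(deg))
    norm_adj = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
    return torch.eye(adj.shape[0], device=adj.device) - norm_adj
```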
arXiv Detail & Related papers (2021-10-27T21:12:27Z) - Unsupervised Image Segmentation by Mutual Information Maximization and
Adversarial Regularization [7.165364364478119]
We propose a novel fully unsupervised semantic segmentation method, the so-called Information Maximization and Adversarial Regularization (InMARS).
Inspired by human perception, which parses a scene into perceptual groups, our proposed approach first partitions an input image into meaningful regions (also known as superpixels).
Next, it utilizes Mutual-Information-Maximization followed by an adversarial training strategy to cluster these regions into semantically meaningful classes.
Our experiments demonstrate that our method achieves the state-of-the-art performance on two commonly used unsupervised semantic segmentation datasets.
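To illustrate the mutual-information-maximization step described above, the following is a hypothetical sketch of an IIC-style objective over soft cluster assignments of two augmented views of the same regions (superpixels); the adversarial regularization term is omitted, and every name here is an assumption rather than the authors' implementation.

```python
# Hypothetical sketch (not the InMARS code): an IIC-style mutual-information
# objective that pushes two augmented views of the same regions (superpixels)
# towards the same cluster assignment. The adversarial term is omitted.
import torch

def mutual_information_loss(p1: torch.Tensor, p2: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """p1, p2: (N, C) soft cluster assignments (rows sum to 1) for two views."""
    joint = p1.t() @ p2 / p1.shape[0]        # (C, C) empirical joint distribution
    joint = (joint + joint.t()) / 2          # symmetrise
    pi = joint.sum(dim=1, keepdim=True)      # marginal over view 1
    pj = joint.sum(dim=0, keepdim=True)      # marginal over view 2
    mi = (joint * (torch.log(joint + eps) - torch.log(pi + eps) - torch.log(pj + eps))).sum()
    return -mi                               # maximise MI by minimising its negative

# Example: 128 superpixels softly assigned to 10 clusters under two augmentations.
p1 = torch.softmax(torch.randn(128, 10), dim=1)
p2 = torch.softmax(torch.randn(128, 10), dim=1)
print(mutual_information_loss(p1, p2).item())
```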
arXiv Detail & Related papers (2021-07-01T18:36:27Z) - Attention-Guided Supervised Contrastive Learning for Semantic
Segmentation [16.729068267453897]
In per-pixel prediction tasks such as segmentation, more than one label can exist in a single image.
We propose an attention-guided supervised contrastive learning approach to highlight a single semantic object every time as the target.
arXiv Detail & Related papers (2021-06-03T05:01:11Z) - Semantic Distribution-aware Contrastive Adaptation for Semantic
Segmentation [50.621269117524925]
Domain adaptive semantic segmentation refers to making predictions on a certain target domain with only annotations of a specific source domain.
We present a semantic distribution-aware contrastive adaptation algorithm that enables pixel-wise representation alignment.
We evaluate SDCA on multiple benchmarks, achieving considerable improvements over existing algorithms.
arXiv Detail & Related papers (2021-05-11T13:21:25Z) - Rethinking Semantic Segmentation Evaluation for Explainability and Model
Selection [12.786648212233116]
We introduce a new metric to assess region-based over- and under-segmentation.
We analyze and compare it to other metrics, demonstrating that the use of our metric lends greater explainability to semantic segmentation model performance in real-world applications.
arXiv Detail & Related papers (2021-01-21T03:12:43Z) - Unsupervised Learning of Image Segmentation Based on Differentiable
Feature Clustering [14.074732867392008]
This study investigates the use of convolutional neural networks (CNNs) for unsupervised image segmentation.
We present a novel end-to-end network for unsupervised image segmentation that consists of normalization and an argmax function for differentiable clustering.
We also present an extension of the proposed method for segmentation with scribbles as user input, which showed better accuracy than existing methods.
arXiv Detail & Related papers (2020-07-20T10:28:36Z) - Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation [128.03739769844736]
Two neural co-attentions are incorporated into the classifier to capture cross-image semantic similarities and differences.
In addition to boosting object pattern learning, the co-attention can leverage context from other related images to improve localization map inference.
Our algorithm sets new state-of-the-art results in all these settings, demonstrating its efficacy and generalizability.
arXiv Detail & Related papers (2020-07-03T21:53:46Z) - Phase Consistent Ecological Domain Adaptation [76.75730500201536]
We focus on the task of semantic segmentation, where annotated synthetic data are plentiful but annotating real data is laborious.
The first criterion, inspired by visual psychophysics, is that the map between the two image domains be phase-preserving.
The second criterion aims to leverage ecological statistics, or regularities in the scene which are manifest in any image of it, regardless of the characteristics of the illuminant or the imaging sensor.
arXiv Detail & Related papers (2020-04-10T06:58:03Z) - Hierarchical Image Classification using Entailment Cone Embeddings [68.82490011036263]
We first inject label-hierarchy knowledge into an arbitrary CNN-based classifier.
We empirically show that the availability of such external semantic information, in conjunction with the visual semantics from images, boosts overall performance.
arXiv Detail & Related papers (2020-04-02T10:22:02Z) - Weakly-Supervised Semantic Segmentation by Iterative Affinity Learning [86.45526827323954]
Weakly-supervised semantic segmentation is a challenging task as no pixel-wise label information is provided for training.
We propose an iterative algorithm to learn such pairwise relations.
We show that the proposed algorithm performs favorably against the state-of-the-art methods.
arXiv Detail & Related papers (2020-02-19T10:32:03Z)