PNM: Pixel Null Model for General Image Segmentation
- URL: http://arxiv.org/abs/2203.06677v1
- Date: Sun, 13 Mar 2022 15:17:41 GMT
- Title: PNM: Pixel Null Model for General Image Segmentation
- Authors: Han Zhang, Zihao Zhang, Wenhao Zheng, Wei Xu
- Abstract summary: We present a prior model that weights each pixel according to its probability of being correctly classified by a random segmenter.
Experiments on semantic, instance, and panoptic segmentation tasks over three datasets confirm that PNM consistently improves the segmentation quality.
We propose a new metric, PNM IoU, which perceives the boundary sharpness and better reflects the model segmentation performance in error-prone regions.
- Score: 17.971090313814447
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A major challenge in image segmentation is classifying object boundaries.
Recent efforts propose to refine the segmentation result with boundary masks.
However, models are still prone to misclassifying boundary pixels even when
they correctly capture the object contours. In such cases, even a perfect
boundary map is unhelpful for segmentation refinement. In this paper, we argue
that assigning proper prior weights to error-prone pixels such as object
boundaries can significantly improve the segmentation quality. Specifically, we
present the \textit{pixel null model} (PNM), a prior model that weights each
pixel according to its probability of being correctly classified by a random
segmenter. Empirical analysis shows that PNM captures the misclassification
distribution of different state-of-the-art (SOTA) segmenters. Extensive
experiments on semantic, instance, and panoptic segmentation tasks over three
datasets (Cityscapes, ADE20K, MS COCO) confirm that PNM consistently improves
the segmentation quality of most SOTA methods (including the vision
transformers) and outperforms boundary-based methods by a large margin. We also
observe that the widely-used mean IoU (mIoU) metric is insensitive to
boundaries of different sharpness. As a byproduct, we propose a new metric,
\textit{PNM IoU}, which perceives the boundary sharpness and better reflects
the model segmentation performance in error-prone regions.
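The page does not include code, so the following is a minimal sketch of the idea in the abstract: weight each pixel by a prior that is larger where segmenters tend to err, both in the training loss and in an IoU-style metric. It assumes a PyTorch setting, and `pnm_weight` is a hypothetical precomputed map standing in for the pixel null model; the paper's actual construction of PNM and the exact definition of PNM IoU may differ.

```python
import torch
import torch.nn.functional as F


def pnm_weighted_ce(logits, target, pnm_weight, ignore_index=255):
    """Cross-entropy averaged with per-pixel prior weights.

    logits:     (B, C, H, W) raw class scores from any segmenter.
    target:     (B, H, W) integer labels.
    pnm_weight: (B, H, W) prior weight per pixel, assumed larger for pixels
                a random segmenter is likely to misclassify (a hypothetical
                stand-in for the paper's pixel null model).
    """
    ce = F.cross_entropy(logits, target, reduction="none",
                         ignore_index=ignore_index)            # (B, H, W)
    valid = (target != ignore_index).float()
    w = pnm_weight * valid
    return (w * ce).sum() / w.sum().clamp(min=1.0)


def weighted_iou(pred, target, pnm_weight, num_classes, ignore_index=255):
    """A weighted IoU in the spirit of the proposed PNM IoU: intersection
    and union are accumulated with per-pixel weights, so error-prone
    (e.g. boundary) pixels contribute more to the score."""
    valid = target != ignore_index
    ious = []
    for c in range(num_classes):
        p = (pred == c) & valid
        t = (target == c) & valid
        inter = (pnm_weight * (p & t).float()).sum()
        union = (pnm_weight * (p | t).float()).sum()
        if union > 0:
            ious.append(inter / union)
    return torch.stack(ious).mean() if ious else torch.tensor(0.0)
```

The weighted IoU above is only meant to convey the flavor of a boundary-sensitive metric such as the proposed PNM IoU, in contrast to plain mIoU, which the abstract notes is insensitive to boundary sharpness.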
Related papers
- View-Consistent Hierarchical 3D Segmentation Using Ultrametric Feature Fields [52.08335264414515]
We learn a novel feature field within a Neural Radiance Field (NeRF) representing a 3D scene.
Our method takes view-inconsistent multi-granularity 2D segmentations as input and produces a hierarchy of 3D-consistent segmentations as output.
We evaluate our method and several baselines on synthetic datasets with multi-view images and multi-granular segmentation, showcasing improved accuracy and viewpoint-consistency.
arXiv Detail & Related papers (2024-05-30T04:14:58Z)
- Enhancing Boundary Segmentation for Topological Accuracy with Skeleton-based Methods [7.646983689651424]
Topological consistency plays a crucial role in the task of boundary segmentation for reticular images.
We propose the Skea-Topo Aware loss, a novel loss function that takes into account the shape of each object and the topological significance of its pixels.
Experiments show that our method improves topological consistency by up to 7 points in VI compared with 13 state-of-the-art methods.
arXiv Detail & Related papers (2024-04-29T09:27:31Z)
- BoundarySqueeze: Image Segmentation as Boundary Squeezing [104.43159799559464]
We propose a novel method for fine-grained high-quality image segmentation of both objects and scenes.
Inspired by dilation and erosion from morphological image processing, we treat pixel-level segmentation as squeezing the object boundary.
Our method yields large gains on COCO and Cityscapes for both instance and semantic segmentation, and outperforms the previous state-of-the-art PointRend in both accuracy and speed under the same setting.
arXiv Detail & Related papers (2021-05-25T04:58:51Z)
- Look Closer to Segment Better: Boundary Patch Refinement for Instance Segmentation [51.59290734837372]
We propose a conceptually simple yet effective post-processing refinement framework to improve the boundary quality.
The proposed BPR framework yields significant improvements over the Mask R-CNN baseline on the Cityscapes benchmark.
By applying the BPR framework to the PolyTransform + SegFix baseline, we reached 1st place on the Cityscapes leaderboard.
arXiv Detail & Related papers (2021-04-12T07:10:48Z)
- Superpixel Segmentation Based on Spatially Constrained Subspace Clustering [57.76302397774641]
We consider each representative region with independent semantic information as a subspace, and formulate superpixel segmentation as a subspace clustering problem.
We show that a simple integration of superpixel segmentation with the conventional subspace clustering does not effectively work due to the spatial correlation of the pixels.
We propose a novel convex locality-constrained subspace clustering model that constrains spatially adjacent pixels with similar attributes to be clustered into the same superpixel.
arXiv Detail & Related papers (2020-12-11T06:18:36Z)
- AinnoSeg: Panoramic Segmentation with High Perfomance [4.867465475957119]
Current panoramic segmentation algorithms focus on contextual semantics, but image details are not handled well.
To address these issues, this paper presents several useful tricks.
These operations, collectively named AinnoSeg, achieve state-of-the-art performance on the well-known ADE20K dataset.
arXiv Detail & Related papers (2020-07-21T04:16:46Z)
- Improving Semantic Segmentation via Decoupled Body and Edge Supervision [89.57847958016981]
Existing semantic segmentation approaches either aim to improve an object's inner consistency by modeling the global context, or refine object details along boundaries by multi-scale feature fusion.
In this paper, a new paradigm for semantic segmentation is proposed.
Our insight is that appealing performance of semantic segmentation requires explicitly modeling the object body and edge, which correspond to the low- and high-frequency components of the image, respectively.
We show that the proposed framework with various baselines or backbone networks leads to better object inner consistency and object boundaries.
arXiv Detail & Related papers (2020-07-20T12:11:22Z)
- SegFix: Model-Agnostic Boundary Refinement for Segmentation [75.58050758615316]
We present a model-agnostic post-processing scheme that improves the boundary quality of segmentation results generated by any existing segmentation model.
Motivated by the empirical observation that label predictions of interior pixels are more reliable, we propose to replace the originally unreliable predictions of boundary pixels with the predictions of interior pixels; a simplified version of this idea is sketched after this entry.
arXiv Detail & Related papers (2020-07-08T17:08:08Z)
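The SegFix entry above describes replacing unreliable boundary predictions with more reliable interior predictions. As a rough illustration only, and not the paper's offset-prediction network, the sketch below detects a boundary band directly from the label map and copies in the nearest interior label; `boundary_width` and the nearest-neighbour lookup are simplifying assumptions.

```python
import numpy as np
from scipy import ndimage


def refine_with_interior(pred, boundary_width=2):
    """Replace the labels of boundary-band pixels with the label of the
    nearest interior pixel (a simplified, SegFix-like refinement).

    pred: (H, W) integer label map produced by any segmentation model.
    boundary_width: hypothetical width (in pixels) of the unreliable band.
    """
    # Pixels whose label differs from a left/top neighbour sit on a label
    # transition; dilate that transition into a band of unreliable pixels.
    edges = np.zeros(pred.shape, dtype=bool)
    edges[:, 1:] |= pred[:, 1:] != pred[:, :-1]
    edges[1:, :] |= pred[1:, :] != pred[:-1, :]
    band = ndimage.binary_dilation(edges, iterations=boundary_width)

    # For every band pixel, look up the closest interior (non-band) pixel
    # and copy its presumably more reliable label.
    _, indices = ndimage.distance_transform_edt(band, return_indices=True)
    iy, ix = indices
    refined = pred.copy()
    refined[band] = pred[iy[band], ix[band]]
    return refined
```

In SegFix itself the boundary map and the boundary-to-interior offsets are predicted by a learned model; the morphological stand-ins here only convey the post-processing idea.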
This list is automatically generated from the titles and abstracts of the papers on this site.