Boundary IoU: Improving Object-Centric Image Segmentation Evaluation
- URL: http://arxiv.org/abs/2103.16562v1
- Date: Tue, 30 Mar 2021 17:59:20 GMT
- Title: Boundary IoU: Improving Object-Centric Image Segmentation Evaluation
- Authors: Bowen Cheng, Ross Girshick, Piotr Dollár, Alexander C. Berg, and Alexander Kirillov
- Abstract summary: We present Boundary IoU, a new segmentation evaluation measure focused on boundary quality.
Boundary IoU is significantly more sensitive than the standard Mask IoU measure to boundary errors for large objects and does not over-penalize errors on smaller objects.
- Score: 125.20898025044804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Boundary IoU (Intersection-over-Union), a new segmentation
evaluation measure focused on boundary quality. We perform an extensive
analysis across different error types and object sizes and show that Boundary
IoU is significantly more sensitive than the standard Mask IoU measure to
boundary errors for large objects and does not over-penalize errors on smaller
objects. The new quality measure displays several desirable characteristics
like symmetry w.r.t. prediction/ground truth pairs and balanced responsiveness
across scales, which makes it more suitable for segmentation evaluation than
other boundary-focused measures like Trimap IoU and F-measure. Based on
Boundary IoU, we update the standard evaluation protocols for instance and
panoptic segmentation tasks by proposing the Boundary AP (Average Precision)
and Boundary PQ (Panoptic Quality) metrics, respectively. Our experiments show
that the new evaluation metrics track boundary quality improvements that are
generally overlooked by current Mask IoU-based evaluation metrics. We hope that
the adoption of the new boundary-sensitive evaluation metrics will lead to
rapid progress in segmentation methods that improve boundary quality.
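The measure is straightforward to compute: the boundary region of a mask is the set of pixels within distance d of its contour (obtained by subtracting an eroded copy of the mask from the mask itself), and Boundary IoU is the plain IoU of the two boundary regions. Below is a minimal Python sketch following this definition; the default of d = 2% of the image diagonal matches the paper, while the function names are our own:

```python
import cv2
import numpy as np

def mask_to_boundary(mask: np.ndarray, dilation_ratio: float = 0.02) -> np.ndarray:
    """Boundary region of a binary uint8 mask: pixels inside the mask that lie
    within distance d of its contour, with d = dilation_ratio * image diagonal."""
    h, w = mask.shape
    d = max(1, int(round(dilation_ratio * np.sqrt(h ** 2 + w ** 2))))
    # Pad so that objects touching the image border still have a boundary there.
    padded = cv2.copyMakeBorder(mask, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0)
    eroded = cv2.erode(padded, np.ones((3, 3), np.uint8), iterations=d)
    # Boundary region = mask minus its eroded interior.
    return mask - eroded[1:-1, 1:-1]

def boundary_iou(gt: np.ndarray, pred: np.ndarray, dilation_ratio: float = 0.02) -> float:
    """Boundary IoU = IoU of the two boundary regions."""
    gt_b = mask_to_boundary(gt.astype(np.uint8), dilation_ratio)
    pred_b = mask_to_boundary(pred.astype(np.uint8), dilation_ratio)
    inter = np.logical_and(gt_b, pred_b).sum()
    union = np.logical_or(gt_b, pred_b).sum()
    return float(inter) / union if union > 0 else 0.0
```

For small objects the boundary region covers essentially the whole mask, so Boundary IoU degenerates to Mask IoU there; for large objects it ignores interior pixels, which is exactly what makes it sensitive to boundary errors.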
Related papers
- Size-invariance Matters: Rethinking Metrics and Losses for Imbalanced Multi-object Salient Object Detection [133.66006666465447]
Current metrics are size-sensitive: larger objects dominate the score while smaller ones tend to be ignored.
We argue that the evaluation should be size-invariant because bias based on size is unjustified without additional semantic information.
We develop an optimization framework tailored to this goal, achieving considerable improvements in detecting objects of different sizes.
arXiv Detail & Related papers (2024-05-16T03:01:06Z)
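The summary above does not spell out the construction, but the core complaint (pixel pooling lets large objects dominate) can be illustrated by contrasting pooled IoU with a per-object average. This is a hypothetical reading, not the paper's actual formulation:

```python
import numpy as np

def pooled_iou(gt_masks, pred_masks):
    """Pixel-pooled IoU over all objects: large objects dominate the score."""
    gt = np.any(gt_masks, axis=0)
    pred = np.any(pred_masks, axis=0)
    union = np.logical_or(gt, pred).sum()
    return np.logical_and(gt, pred).sum() / union if union else 0.0

def size_invariant_iou(gt_masks, pred_masks):
    """Per-object IoU averaged with equal weight, so object size cannot
    bias the score (illustrative reading of size-invariant evaluation)."""
    scores = []
    for g, p in zip(gt_masks, pred_masks):
        union = np.logical_or(g, p).sum()
        scores.append(np.logical_and(g, p).sum() / union if union else 1.0)
    return float(np.mean(scores))
```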
- Revisiting Evaluation Metrics for Semantic Segmentation: Optimization and Evaluation of Fine-grained Intersection over Union [113.20223082664681]
We propose the use of fine-grained mIoUs along with corresponding worst-case metrics.
These fine-grained metrics exhibit less bias towards large objects, offer richer statistical information, and provide valuable insights for model and dataset auditing.
Our benchmark study highlights the necessity of not basing evaluations on a single metric and confirms that fine-grained mIoUs reduce the bias towards large objects.
arXiv Detail & Related papers (2023-10-30T03:45:15Z)
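One plausible realization of the fine-grained and worst-case variants mentioned above is to score ground-truth instances individually, bucket them by area, and report both the mean and the minimum per bucket. The bucket edges below follow COCO's small/medium/large convention, and all names are illustrative:

```python
import numpy as np

def fine_grained_ious(gt_instances, pred, size_edges=(0, 32**2, 96**2, float("inf"))):
    """Bucket ground-truth instances by area; report (mean, worst-case) IoU
    per size bucket. Illustrative sketch, not the paper's exact definition."""
    buckets = [[] for _ in range(len(size_edges) - 1)]
    for g in gt_instances:
        area = g.sum()
        union = np.logical_or(g, pred).sum()
        iou = np.logical_and(g, pred).sum() / union if union else 1.0
        for i in range(len(size_edges) - 1):
            if size_edges[i] <= area < size_edges[i + 1]:
                buckets[i].append(iou)
    return [(float(np.mean(b)), float(np.min(b))) if b else None for b in buckets]
```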
- SortedAP: Rethinking evaluation metrics for instance segmentation [8.079566596963632]
We show that most existing metrics have a limited resolution of segmentation quality.
We propose a new metric called sortedAP, which strictly decreases with both object- and pixel-level imperfections.
arXiv Detail & Related papers (2023-09-09T22:50:35Z)
- Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning [52.06176253457522]
We propose CFINet, a two-stage framework tailored for small object detection, built on a coarse-to-fine pipeline and feature imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z)
- PNM: Pixel Null Model for General Image Segmentation [17.971090313814447]
We present a prior model that weights each pixel according to its probability of being correctly classified by a random segmenter.
Experiments on semantic, instance, and panoptic segmentation tasks over three datasets confirm that PNM consistently improves the segmentation quality.
We also propose a new metric, PNM IoU, which is sensitive to boundary sharpness and better reflects model segmentation performance in error-prone regions.
arXiv Detail & Related papers (2022-03-13T15:17:41Z)
- Weighted Intersection over Union (wIoU) for Evaluating Image Segmentation [0.9790236766474198]
We propose a new evaluation measure called weighted Intersection over Union (wIoU) for semantic segmentation.
It builds a weight map from a boundary distance map, allowing each pixel to be weighted by a boundary importance factor during evaluation.
We validated the effectiveness of wIoU on a dataset of 33 scenes and demonstrated its flexibility.
arXiv Detail & Related papers (2021-07-21T02:59:59Z)
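The recipe in the summary (boundary distance map, then a weight map, then a weighted IoU) can be sketched directly with a Euclidean distance transform. The exponential falloff and its rate alpha are assumptions, since the paper's exact boundary importance factor is not given here:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_distance(gt: np.ndarray) -> np.ndarray:
    """Distance of every pixel to the ground-truth mask boundary."""
    inside = distance_transform_edt(gt)       # distance to nearest background
    outside = distance_transform_edt(1 - gt)  # distance to nearest foreground
    return np.where(gt > 0, inside, outside)

def weighted_iou(gt: np.ndarray, pred: np.ndarray, alpha: float = 0.1) -> float:
    """IoU with per-pixel weights that decay away from the boundary.
    The exponential decay and rate alpha are illustrative assumptions."""
    w = np.exp(-alpha * boundary_distance(gt))
    inter = (w * np.logical_and(gt, pred)).sum()
    union = (w * np.logical_or(gt, pred)).sum()
    return float(inter / union) if union > 0 else 0.0
```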
- Multiscale IoU: A Metric for Evaluation of Salient Object Detection with Fine Structures [5.098461305284216]
General-purpose object-detection algorithms often ignore the fine structure of detected objects.
We present Multiscale IoU (MIoU), a new metric that combines the popular Intersection over Union (IoU) measure with the geometrical concept of fractal dimension.
We show that MIoU is indeed sensitive to fine boundary structures that are completely overlooked by IoU and the F1-score.
arXiv Detail & Related papers (2021-05-30T15:31:42Z)
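The summary only names the ingredients (IoU plus fractal dimension), so the following is a loose, box-counting-flavored sketch: compare grid occupancies of the two masks at several cell sizes and average the resulting IoUs. The actual MIoU definition may differ in its aggregation:

```python
import numpy as np

def grid_occupancy(mask: np.ndarray, cell: int) -> np.ndarray:
    """Box-counting step: which cells of a cell-by-cell grid the mask touches."""
    h, w = mask.shape
    return np.array([[mask[i:i + cell, j:j + cell].any()
                      for j in range(0, w, cell)]
                     for i in range(0, h, cell)])

def multiscale_iou(gt: np.ndarray, pred: np.ndarray, cells=(1, 2, 4, 8, 16)) -> float:
    """Average IoU of grid occupancies across scales (illustrative sketch)."""
    ious = []
    for c in cells:
        g, p = grid_occupancy(gt, c), grid_occupancy(pred, c)
        union = np.logical_or(g, p).sum()
        ious.append(np.logical_and(g, p).sum() / union if union else 1.0)
    return float(np.mean(ious))
```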
- Look Closer to Segment Better: Boundary Patch Refinement for Instance Segmentation [51.59290734837372]
We propose BPR, a conceptually simple yet effective post-processing refinement framework that improves boundary quality.
The proposed BPR framework yields significant improvements over the Mask R-CNN baseline on the Cityscapes benchmark.
By applying the BPR framework to the PolyTransform + SegFix baseline, we reached 1st place on the Cityscapes leaderboard.
arXiv Detail & Related papers (2021-04-12T07:10:48Z)
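The summary stays high level; a plausible reading of boundary patch refinement is to crop small patches centered on points along the predicted mask contour and re-segment each patch with a dedicated network. Only the (assumed) extraction step is sketched here; the refinement model itself is omitted:

```python
import cv2
import numpy as np

def boundary_patches(mask: np.ndarray, image: np.ndarray,
                     patch: int = 64, stride: int = 32):
    """Yield (image, mask) patches centered on points sampled along the
    predicted mask contour. Illustrative sketch of the extraction step;
    patches at the image border may be smaller than patch x patch."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    half = patch // 2
    for contour in contours:
        # contour has shape (N, 1, 2) with (x, y) points; subsample by stride.
        for x, y in contour[::stride, 0, :]:
            top, left = max(y - half, 0), max(x - half, 0)
            yield (image[top:top + patch, left:left + patch],
                   mask[top:top + patch, left:left + patch])
```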
- Rethinking Localization Map: Towards Accurate Object Perception with Self-Enhancement Maps [78.2581910688094]
This work introduces a novel self-enhancement method to harvest accurate object localization maps and object boundaries with only category labels as supervision.
In particular, the proposed Self-Enhancement Maps achieve the state-of-the-art localization accuracy of 54.88% on ILSVRC.
arXiv Detail & Related papers (2020-06-09T12:35:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.