ITSELF: Iterative Saliency Estimation fLexible Framework
- URL: http://arxiv.org/abs/2006.16956v2
- Date: Fri, 24 Jul 2020 19:16:37 GMT
- Title: ITSELF: Iterative Saliency Estimation fLexible Framework
- Authors: Leonardo de Melo João, Felipe de Castro Belém, Alexandre Xavier Falcão
- Abstract summary: Salient object detection estimates the objects that most stand out in an image.
We propose a superpixel-based ITerative Saliency Estimation fLexible Framework (ITSELF) that allows any user-defined assumptions to be added to the model.
We compare ITSELF to two state-of-the-art saliency estimators on five metrics and six datasets.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Salient object detection estimates the objects that most stand out in an image. The available unsupervised saliency estimators rely on a pre-determined set of assumptions about how humans perceive saliency to create discriminating features. By fixing the pre-selected assumptions as an integral part of their models, these methods cannot be easily extended to specific settings and different image domains. We therefore propose a superpixel-based ITerative Saliency Estimation fLexible Framework (ITSELF) that allows any user-defined assumptions to be added to the model when required. Thanks to recent advances in superpixel segmentation algorithms, saliency maps can be used to improve superpixel delineation. By combining a saliency-based superpixel algorithm with a superpixel-based saliency estimator, we propose a novel saliency/superpixel self-improving loop that iteratively enhances saliency maps. We compare ITSELF to two state-of-the-art saliency estimators on five metrics and six datasets, four composed of natural images and two of biomedical images. Experiments show that our approach is more robust than the compared methods, presenting competitive results on the natural-image datasets and outperforming them on the biomedical-image datasets.
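The abstract's central mechanism is the saliency/superpixel self-improving loop: superpixels are delineated, a saliency map is estimated from them under user-defined assumptions, and that map in turn guides the next round of superpixel delineation. The Python sketch below only illustrates this loop structure under simplifying assumptions: `segment_superpixels` and `estimate_saliency` are hypothetical stand-ins (a regular grid and a mean-of-prior rule), not the authors' algorithms, and the color-contrast prior stands in for an arbitrary user-defined saliency assumption.

```python
import numpy as np

def segment_superpixels(image, saliency, n_segments=200):
    """Hypothetical stand-in for a saliency-aware superpixel algorithm.

    A real method would use `saliency` (e.g. to bias seed sampling towards
    salient regions); this stub just tiles the image into a regular grid so
    that the loop below is runnable.
    """
    h, w = image.shape[:2]
    side = max(1, int(np.sqrt(h * w / n_segments)))
    rows = np.arange(h) // side
    cols = np.arange(w) // side
    return rows[:, None] * (w // side + 1) + cols[None, :]

def estimate_saliency(image, labels, prior):
    """Hypothetical stand-in: average a user-defined prior inside each superpixel."""
    saliency = np.zeros(image.shape[:2], dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        saliency[mask] = prior[mask].mean()
    rng = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / rng if rng > 0 else saliency

def self_improving_loop(image, prior, n_iters=5):
    """Alternate superpixel delineation and superpixel-based saliency estimation."""
    saliency = prior.copy()
    for _ in range(n_iters):
        labels = segment_superpixels(image, saliency)   # saliency could guide delineation here
        saliency = estimate_saliency(image, labels, prior)
    return saliency

# Toy usage: a simple color-contrast prior on a random image.
img = np.random.rand(64, 64, 3)
contrast_prior = np.linalg.norm(img - img.mean(axis=(0, 1)), axis=2)
sal_map = self_improving_loop(img, contrast_prior)
```

In the framework described by the abstract, the superpixel step is itself saliency-aware, which is what allows each iteration to refine object delineation before saliency is re-estimated.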
Related papers
- Reducing Semantic Ambiguity In Domain Adaptive Semantic Segmentation Via Probabilistic Prototypical Pixel Contrast [7.092718945468069]
Domain adaptation aims to reduce the model degradation on the target domain caused by the domain shift between the source and target domains.
Probabilistic prototypical pixel contrast (PPPC) is a universal adaptation framework that models each pixel embedding as a probability distribution.
PPPC not only helps to address ambiguity at the pixel level, yielding discriminative representations, but also brings significant improvements in both synthetic-to-real and day-to-night adaptation tasks.
arXiv Detail & Related papers (2024-09-27T08:25:03Z)
- Learning Invariant Inter-pixel Correlations for Superpixel Generation [12.605604620139497]
Learnable features exhibit constrained discriminative capability, resulting in unsatisfactory pixel grouping performance.
We propose the Content Disentangle Superpixel algorithm to selectively separate the invariant inter-pixel correlations and statistical properties.
The experimental results on four benchmark datasets demonstrate the superiority of our approach to existing state-of-the-art methods.
arXiv Detail & Related papers (2024-02-28T09:46:56Z)
- Towards Generalizable Medical Image Segmentation with Pixel-wise Uncertainty Estimation [5.415458419290347]
Deep neural networks (DNNs) achieve promising performance in visual recognition under the independent and identically distributed (IID) hypothesis.
The IID hypothesis is not universally guaranteed in numerous real-world applications, especially in medical image analysis.
We propose a novel framework that utilizes uncertainty estimation to highlight hard-to-classify pixels for DNNs.
arXiv Detail & Related papers (2023-05-13T10:09:40Z)
- Probabilistic Deep Metric Learning for Hyperspectral Image Classification [91.5747859691553]
This paper proposes a probabilistic deep metric learning framework for hyperspectral image classification.
The task aims to predict the category of each pixel in an image captured by hyperspectral sensors.
Our framework can be readily applied to existing hyperspectral image classification methods.
arXiv Detail & Related papers (2022-11-15T17:57:12Z)
- Efficient Multiscale Object-based Superpixel Framework [62.48475585798724]
We propose a novel superpixel framework, named Superpixels through Iterative CLEarcutting (SICLE).
SICLE exploits object information and is able to generate a multiscale segmentation on the fly.
It generalizes recent superpixel methods, surpassing them and other state-of-the-art approaches in efficiency and effectiveness according to multiple delineation metrics.
arXiv Detail & Related papers (2022-04-07T15:59:38Z)
- Saliency Enhancement using Superpixel Similarity [77.34726150561087]
Salient Object Detection (SOD) has several applications in image analysis.
Deep-learning-based SOD methods are among the most effective, but they may miss foreground parts with similar colors.
We introduce a post-processing method named Saliency Enhancement over Superpixel Similarity (SESS); a rough sketch of the underlying idea appears after this list.
We demonstrate that SESS can consistently and considerably improve the results of three deep-learning-based SOD methods on five image datasets.
arXiv Detail & Related papers (2021-12-01T17:22:54Z)
- Multi-dataset Pretraining: A Unified Model for Semantic Segmentation [97.61605021985062]
We propose a unified framework, termed Multi-Dataset Pretraining, to take full advantage of the fragmented annotations of different datasets.
This is achieved by first pretraining the network via the proposed pixel-to-prototype contrastive loss over multiple datasets.
In order to better model the relationship among images and classes from different datasets, we extend the pixel level embeddings via cross dataset mixing.
arXiv Detail & Related papers (2021-06-08T06:13:11Z)
- Hybrid Generative Models for Two-Dimensional Datasets [5.206057210246861]
Two-dimensional array-based datasets are pervasive in a variety of domains.
Current approaches for generative modeling have typically been limited to conventional image datasets.
We propose a novel approach for generating two-dimensional datasets by moving the computations to the space of representation bases.
arXiv Detail & Related papers (2021-06-01T03:21:47Z)
- BoundarySqueeze: Image Segmentation as Boundary Squeezing [104.43159799559464]
We propose a novel method for fine-grained high-quality image segmentation of both objects and scenes.
Inspired by dilation and erosion from morphological image processing, we treat pixel-level segmentation as squeezing the object boundary (a rough sketch of this intuition appears after this list).
Our method yields large gains on COCO and Cityscapes for both instance and semantic segmentation, and it outperforms the previous state-of-the-art PointRend in both accuracy and speed under the same setting.
arXiv Detail & Related papers (2021-05-25T04:58:51Z)
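For the Saliency Enhancement using Superpixel Similarity entry above, the sketch below illustrates the general intuition of re-estimating saliency with color-similarity weights between superpixels. It is a loose, hypothetical illustration and not the published SESS algorithm; `enhance_saliency`, its Gaussian weighting, and the grid superpixels in the usage lines are all assumptions made for the sketch.

```python
import numpy as np

def enhance_saliency(image, labels, saliency, sigma=0.1):
    """Loose illustration of saliency enhancement via superpixel similarity.

    Each superpixel's saliency is re-estimated as a color-similarity-weighted
    average of all superpixels' initial saliency values. This is NOT the
    published SESS algorithm, only a sketch of the underlying intuition.
    """
    labs = np.unique(labels)
    mean_color = np.array([image[labels == l].mean(axis=0) for l in labs])
    mean_sal = np.array([saliency[labels == l].mean() for l in labs])

    # Gaussian color-similarity weights between every pair of superpixels.
    dist = np.linalg.norm(mean_color[:, None] - mean_color[None, :], axis=2)
    weights = np.exp(-(dist ** 2) / (2 * sigma ** 2))
    new_sal = (weights @ mean_sal) / weights.sum(axis=1)

    out = np.zeros_like(saliency, dtype=float)
    for l, s in zip(labs, new_sal):
        out[labels == l] = s
    rng = out.max() - out.min()
    return (out - out.min()) / rng if rng > 0 else out

# Toy usage with a regular grid of superpixels and a stand-in deep SOD output.
img = np.random.rand(32, 32, 3)
lab = (np.arange(32)[:, None] // 8) * 4 + (np.arange(32)[None, :] // 8)
coarse = np.random.rand(32, 32)
refined = enhance_saliency(img, lab, coarse)
```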
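For the BoundarySqueeze entry, the sketch below shows only the dilation/erosion intuition it cites: the band between a dilated and an eroded coarse mask marks the boundary region that a squeezing refinement would revisit. This is an illustrative sketch built on standard `scipy.ndimage` morphology, not the BoundarySqueeze method itself.

```python
import numpy as np
from scipy import ndimage

def boundary_band(mask, width=3):
    """Band between a dilated and an eroded mask: the region a boundary-
    squeezing refinement would revisit. Illustrative sketch only, not the
    BoundarySqueeze method.
    """
    structure = np.ones((3, 3), dtype=bool)
    dilated = ndimage.binary_dilation(mask, structure, iterations=width)
    eroded = ndimage.binary_erosion(mask, structure, iterations=width)
    return dilated & ~eroded

# Toy usage: a square "object" mask; `band` marks its uncertain boundary region.
coarse = np.zeros((64, 64), dtype=bool)
coarse[16:48, 16:48] = True
band = boundary_band(coarse)
```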