Evaluating Salient Object Detection in Natural Images with Multiple
Objects having Multi-level Saliency
- URL: http://arxiv.org/abs/2003.08514v1
- Date: Thu, 19 Mar 2020 00:06:40 GMT
- Title: Evaluating Salient Object Detection in Natural Images with Multiple
Objects having Multi-level Saliency
- Authors: Gökhan Yildirim, Debashis Sen, Mohan Kankanhalli, and Sabine Süsstrunk
- Abstract summary: Salient object detection is evaluated using binary ground truth with the labels being salient object class and background.
Our dataset, named SalMoN (saliency in multi-object natural images), has 588 images containing multiple objects.
- Score: 3.464871689508835
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Salient object detection is evaluated using binary ground truth with the
labels being salient object class and background. In this paper, we corroborate,
based on three subjective experiments on a novel image dataset, that objects in
natural images are inherently perceived to have varying levels of importance.
Our dataset, named SalMoN (saliency in multi-object natural images), has 588
images containing multiple objects. The subjective experiments performed record
spontaneous attention and perception through eye fixation duration, point
clicking and rectangle drawing. As object saliency in a multi-object image is
inherently multi-level, we propose that salient object detection must be
evaluated for the capability to detect all multi-level salient objects apart
from the salient object class detection capability. For this purpose, we
generate multi-level maps as ground truth corresponding to all the dataset
images using the results of the subjective experiments, with the labels being
multi-level salient objects and background. We then propose the use of mean
absolute error, Kendall's rank correlation and average area under
precision-recall curve to evaluate existing salient object detection methods on
our multi-level saliency ground truth dataset. Approaches that represent
saliency detection on images as local-global hierarchical processing of a graph
perform well in our dataset.
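The abstract proposes three measures for scoring predicted saliency maps against the multi-level ground truth: mean absolute error, Kendall's rank correlation, and the average area under the precision-recall curve. Below is a minimal sketch of how such an evaluation could be computed with NumPy, SciPy and scikit-learn; the function names, the per-level binarization used to average the PR-curve areas, and the toy maps are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch, not the authors' code: one assumed way to compute the three
# evaluation measures named in the abstract against a multi-level saliency
# ground truth. Function names, the per-level binarization, and the toy maps
# are illustrative assumptions.
import numpy as np
from scipy.stats import kendalltau
from sklearn.metrics import average_precision_score

def mean_absolute_error(pred, gt):
    # Pixel-wise MAE between the predicted saliency map and the multi-level
    # ground-truth map, both assumed to be scaled to [0, 1].
    return float(np.abs(pred.astype(float) - gt.astype(float)).mean())

def kendall_rank_correlation(pred, gt):
    # Kendall's tau between predicted and ground-truth saliency values:
    # rewards predictions that rank pixels in the same order as the
    # multi-level ground truth.
    tau, _ = kendalltau(pred.ravel(), gt.ravel())
    return tau

def average_auc_pr(pred, gt_levels):
    # Binarize the multi-level ground truth at every non-background level
    # and average the resulting precision-recall curve areas.
    levels = [lvl for lvl in np.unique(gt_levels) if lvl > 0]
    aps = [average_precision_score((gt_levels >= lvl).ravel().astype(int),
                                   pred.ravel())
           for lvl in levels]
    return float(np.mean(aps))

# Toy 4x4 example with three saliency levels (0 = background).
gt_levels = np.array([[0, 0, 1, 1],
                      [0, 1, 1, 2],
                      [1, 1, 2, 2],
                      [1, 2, 2, 3]])
pred = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # predicted saliency map
gt = gt_levels / 3.0                            # levels mapped to [0, 1]
print(mean_absolute_error(pred, gt))
print(kendall_rank_correlation(pred, gt))
print(average_auc_pr(pred, gt_levels))
```

Averaging average-precision over per-level binarizations is one plausible reading of "average area under precision-recall curve"; the paper may aggregate the levels differently.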
Related papers
- Retrieval Robust to Object Motion Blur [54.34823913494456]
We propose a method for object retrieval in images that are affected by motion blur.
We present the first large-scale datasets for blurred object retrieval.
Our method outperforms state-of-the-art retrieval methods on the new blur-retrieval datasets.
arXiv Detail & Related papers (2024-04-27T23:22:39Z) - Learning-based Relational Object Matching Across Views [63.63338392484501]
We propose a learning-based approach which combines local keypoints with novel object-level features for matching object detections between RGB images.
We train our object-level matching features based on appearance and inter-frame and cross-frame spatial relations between objects in an associative graph neural network.
arXiv Detail & Related papers (2023-05-03T19:36:51Z) - Salient Object Detection for Images Taken by People With Vision
Impairments [13.157939981657886]
We introduce a new salient object detection dataset using images taken by people who are visually impaired.
VizWiz-SalientObject is the largest such dataset (i.e., 32,000 human-annotated images) and has unique characteristics.
We benchmarked seven modern salient object detection methods on our dataset and found they struggle most with images featuring salient objects that are large, have less complex boundaries, and lack text.
arXiv Detail & Related papers (2023-01-12T22:33:01Z) - Automatic dataset generation for specific object detection [6.346581421948067]
We present a method to synthesize object-in-scene images, which preserves the objects' detailed features without introducing irrelevant information.
Our result shows that in the synthesized image, the boundaries of objects blend very well with the background.
arXiv Detail & Related papers (2022-07-16T07:44:33Z) - ObjectFormer for Image Manipulation Detection and Localization [118.89882740099137]
We propose ObjectFormer to detect and localize image manipulations.
We extract high-frequency features of the images and combine them with RGB features as multimodal patch embeddings.
We conduct extensive experiments on various datasets and the results verify the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-03-28T12:27:34Z) - Contrastive Object Detection Using Knowledge Graph Embeddings [72.17159795485915]
We compare the error statistics of the class embeddings learned from a one-hot approach with semantically structured embeddings from natural language processing or knowledge graphs.
We propose a knowledge-embedded design for keypoint-based and transformer-based object detection architectures.
arXiv Detail & Related papers (2021-12-21T17:10:21Z) - Salient Objects in Clutter [130.63976772770368]
This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets.
This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets.
We propose a new high-quality dataset and update the previous saliency benchmark.
arXiv Detail & Related papers (2021-05-07T03:49:26Z) - A Simple and Effective Use of Object-Centric Images for Long-Tailed
Object Detection [56.82077636126353]
We take advantage of object-centric images to improve object detection in scene-centric images.
We present a simple yet surprisingly effective framework to do so.
Our approach can improve the object detection (and instance segmentation) accuracy of rare objects by 50% (and 33%) in relative terms.
arXiv Detail & Related papers (2021-02-17T17:27:21Z) - Multiple instance learning on deep features for weakly supervised object
detection with extreme domain shifts [1.9336815376402716]
Weakly supervised object detection (WSOD) using only image-level annotations has attracted growing attention over the past few years.
We show that a simple multiple instance approach applied on pre-trained deep features yields excellent performance on non-photographic datasets.
arXiv Detail & Related papers (2020-08-03T20:36:01Z)