Salient Object Detection by LTP Texture Characterization on Opposing
Color Pairs under SLICO Superpixel Constraint
- URL: http://arxiv.org/abs/2201.00439v1
- Date: Mon, 3 Jan 2022 00:03:50 GMT
- Title: Salient Object Detection by LTP Texture Characterization on Opposing
Color Pairs under SLICO Superpixel Constraint
- Authors: Didier Ndayikengurukiye and Max Mignotte
- Abstract summary: We propose a novel strategy, through a simple model, which generates a robust saliency map for a natural image.
This strategy consists of integrating color information into local textural patterns to characterize a color micro-texture.
Our model is both simple and efficient, outperforming several state-of-the-art models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The effortless detection of salient objects by humans has been the subject of
research in several fields, including computer vision, as it has many
applications. However, salient object detection remains a challenge for many
computer models dealing with color and textured images. Herein, we propose a
novel and efficient strategy, through a simple model, almost without internal
parameters, which generates a robust saliency map for a natural image. This
strategy consists of integrating color information into local textural patterns
to characterize a color micro-texture. Most models in the literature that use
color and texture features treat them separately. In our case, it is the
simple, yet powerful LTP (Local Ternary Patterns) texture descriptor, applied to
opposing color pairs of a color space, that allows us to achieve this end. Each
color micro-texture is represented by a vector whose components come from a
superpixel obtained by the SLICO (Simple Linear Iterative Clustering with zero
parameter) algorithm, which is simple, fast, and exhibits state-of-the-art
boundary adherence. The degree of dissimilarity between each pair of color
micro-textures is computed by the FastMap method, a fast version of MDS
(Multi-dimensional Scaling), which accounts for the non-linearity of the color
micro-textures while preserving their distances. These degrees of dissimilarity
give us an intermediate saliency map for each of the RGB, HSL, LUV, and CMY
color spaces. The final saliency map combines them to take advantage of the
strengths of each. The MAE (Mean Absolute Error) and F$_{\beta}$
measures of our saliency maps on the complex ECSSD dataset show that our model
is both simple and efficient, outperforming several state-of-the-art models.
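The first stage of the pipeline above can be sketched in code. The snippet below is a minimal illustration of an LTP descriptor computed across an opposing color pair, assuming the opponent-channel variant in which the center pixel is taken from one channel and its eight neighbors from the paired channel; the threshold `t`, the specific channel pairing (R vs. G), and the histogram binning are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def ltp_opponent(center_ch, neigh_ch, t=5):
    """Local Ternary Patterns across an opposing color pair.

    The center value comes from `center_ch` and the 8 neighbors from
    `neigh_ch` (opponent-channel comparison). The ternary code
    (+1 / 0 / -1 per neighbor) is split, LTP-style, into an 'upper'
    and a 'lower' 8-bit binary pattern. Threshold `t` is illustrative.
    """
    H, W = center_ch.shape
    # Clockwise 8-neighborhood offsets (dy, dx)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros((H - 2, W - 2), dtype=np.int32)
    lower = np.zeros((H - 2, W - 2), dtype=np.int32)
    c = center_ch[1:-1, 1:-1].astype(np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        # Neighbor values taken from the *opposing* channel
        n = neigh_ch[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx].astype(np.int32)
        upper |= (n >= c + t).astype(np.int32) << bit   # ternary +1 bits
        lower |= (n <= c - t).astype(np.int32) << bit   # ternary -1 bits
    return upper, lower

# Tiny example on a synthetic opposing pair (here R vs. G of an RGB patch)
rng = np.random.default_rng(0)
r = rng.integers(0, 256, (16, 16))
g = rng.integers(0, 256, (16, 16))
up, lo = ltp_opponent(r, g, t=5)
# A per-region histogram of the codes is one plausible descriptor vector
hist = np.bincount(up.ravel(), minlength=256)
```

Per the abstract, such descriptors would then be pooled over SLICO superpixels and compared pairwise with FastMap to obtain the intermediate saliency maps; those stages are omitted from this sketch.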
Related papers
- Large-scale and Efficient Texture Mapping Algorithm via Loopy Belief Propagation [4.742825811314168]
A texture mapping algorithm must be able to efficiently select views, fuse and map textures from these views to mesh models.
Existing approaches achieve efficiency either by limiting the number of images to one view per face, or simplifying global inferences to only achieve local color consistency.
This paper proposes a novel and efficient texture mapping framework that allows the use of multiple views of texture per face.
arXiv Detail & Related papers (2023-05-08T15:11:28Z)
- Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering [84.37776381343662]
Mip-NeRF proposes a multiscale representation as a conical frustum to encode scale information.
We propose mip voxel grids (Mip-VoG), an explicit multiscale representation for real-time anti-aliasing rendering.
Our approach is the first to offer multiscale training and real-time anti-aliasing rendering simultaneously.
arXiv Detail & Related papers (2023-04-20T04:05:22Z)
- Spherical Space Feature Decomposition for Guided Depth Map Super-Resolution [123.04455334124188]
Guided depth map super-resolution (GDSR) aims to upsample low-resolution (LR) depth maps with additional information involved in high-resolution (HR) RGB images from the same scene.
In this paper, we propose the Spherical Space feature Decomposition Network (SSDNet) to solve the above issues.
Our method can achieve state-of-the-art results on four test datasets, as well as successfully generalize to real-world scenes.
arXiv Detail & Related papers (2023-03-15T21:22:21Z)
- SPSN: Superpixel Prototype Sampling Network for RGB-D Salient Object Detection [5.2134203335146925]
RGB-D salient object detection (SOD) has been in the spotlight recently because it is an important preprocessing operation for various vision tasks.
Despite advances in deep learning-based methods, RGB-D SOD is still challenging due to the large domain gap between RGB images and depth maps, and due to low-quality depth maps.
We propose a novel superpixel prototype sampling network architecture to solve this problem.
arXiv Detail & Related papers (2022-07-16T10:43:14Z) - Multiscale Analysis for Improving Texture Classification [62.226224120400026]
This paper employs the Gaussian-Laplacian pyramid to treat different spatial frequency bands of a texture separately.
We aggregate features extracted from gray and color texture images using bio-inspired texture descriptors, information-theoretic measures, gray-level co-occurrence matrix features, and Haralick statistical features into a single feature vector.
arXiv Detail & Related papers (2022-04-21T01:32:22Z) - Scale Invariant Semantic Segmentation with RGB-D Fusion [12.650574326251023]
We propose a neural network architecture for scale-invariant semantic segmentation using RGB-D images.
We incorporate depth information to the RGB data for pixel-wise semantic segmentation to address the different scale objects in an outdoor scene.
Our model is compact and can be easily applied to other RGB models.
arXiv Detail & Related papers (2022-04-10T12:54:27Z) - RGB-D Saliency Detection via Cascaded Mutual Information Minimization [122.8879596830581]
Existing RGB-D saliency detection models do not explicitly encourage RGB and depth to achieve effective multi-modal learning.
We introduce a novel multi-stage cascaded learning framework via mutual information minimization to "explicitly" model the multi-modal information between RGB image and depth data.
arXiv Detail & Related papers (2021-09-15T12:31:27Z) - Deep Texture-Aware Features for Camouflaged Object Detection [69.84122372541506]
This paper formulates texture-aware refinement modules to learn the texture-aware features in a deep convolutional neural network.
We evaluate our network on the benchmark dataset for camouflaged object detection both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-02-05T04:38:32Z) - Consistent Mesh Colors for Multi-View Reconstructed 3D Scenes [13.531166759820854]
We find that the method for aggregation of multiple views is crucial for creating consistent texture maps without color calibration.
We compute a color prior from the cross-correlation of the views per face and the faces per view to identify an optimal per-face color.
arXiv Detail & Related papers (2021-01-26T11:59:23Z) - Color-complexity enabled exhaustive color-dots identification and
spatial patterns testing in images [0.6299766708197881]
We develop a new color-identification algorithm based on highly associative relations among the three color-coordinates: RGB or HSV.
Our developments are illustrated in images obtained by mimicking chemical spraying via drone in Precision Agriculture.
arXiv Detail & Related papers (2020-07-28T21:06:12Z) - Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.