Full RGB Just Noticeable Difference (JND) Modelling
- URL: http://arxiv.org/abs/2203.00629v1
- Date: Tue, 1 Mar 2022 17:16:57 GMT
- Title: Full RGB Just Noticeable Difference (JND) Modelling
- Authors: Jian Jin, Dong Yu, Weisi Lin, Lili Meng, Hao Wang, Huaxiang Zhang
- Abstract summary: Just Noticeable Difference (JND) has many applications in multimedia signal processing.
We propose a JND model to generate the JND by taking the characteristics of full RGB channels into account.
An RGB-JND-NET is proposed, where the visual content in full RGB channels is used to extract features for JND generation.
- Score: 69.42889006770018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Just Noticeable Difference (JND) has many applications in multimedia signal
processing, especially visual data processing. It is generally defined as the
minimum visual content change that humans can perceive,
which has been studied for decades. However, most of the existing methods only
focus on the luminance component of JND modelling and simply regard chrominance
components as scaled versions of luminance. In this paper, we propose a JND
model, termed RGB-JND, that generates the JND by taking the characteristics of
the full RGB channels into account. To this end, an RGB-JND-NET is proposed,
where the visual content in full RGB channels is used to extract features for
JND generation. To supervise the JND generation, an adaptive image quality
assessment combination (AIC) is developed. Besides, the RGB-JND-NET also takes
visual attention into account by automatically mining the underlying
relationship between visual attention and the JND, which is further used to
constrain the JND spatial distribution. To the best of our knowledge, this is
the first work to carefully investigate JND modelling for the full color space.
Experimental results demonstrate that the RGB-JND-NET model outperforms the
relevant state-of-the-art JND models. Besides, the JNDs of the red and blue
channels are larger than that of the green one according to the experimental
results of the proposed model, which demonstrates that more changes can be
tolerated in the red and blue channels, in line with the well-known fact that
the human visual system is more sensitive to the green channel in comparison
with the red and blue ones.
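As a concrete illustration of how such a full-RGB JND map could be used, the sketch below injects JND-bounded noise into an image, the usual way JND models are validated: if the map is accurate, the perturbed image should look unchanged. This is a minimal sketch in Python assuming a hypothetical per-channel map (for instance, the output of a model such as RGB-JND-NET); the function name, the NumPy-based implementation, and the constant 5/2/5 thresholds are illustrative assumptions, not taken from the paper.

import numpy as np

def inject_jnd_noise(image, jnd, rng=None):
    """Perturb each pixel by +/- its per-channel JND and clip to [0, 255].

    image: H x W x 3 uint8 RGB image.
    jnd:   H x W x 3 float map of per-channel JND thresholds
           (hypothetically, the output of a model such as RGB-JND-NET).
    """
    rng = rng or np.random.default_rng(0)
    # Random sign per pixel and channel, scaled by the JND threshold.
    signs = rng.choice([-1.0, 1.0], size=image.shape)
    noisy = image.astype(np.float64) + signs * jnd
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Toy usage with constant thresholds echoing the finding above: the red and
# blue channels tolerate larger changes than the green one (values made up).
img = np.full((64, 64, 3), 128, dtype=np.uint8)
jnd_map = np.stack([np.full((64, 64), 5.0),   # R: more change tolerated
                    np.full((64, 64), 2.0),   # G: HVS most sensitive
                    np.full((64, 64), 5.0)],  # B: more change tolerated
                   axis=-1)
noisy = inject_jnd_noise(img, jnd_map)

In JND-guided applications such as perceptual compression or watermarking, the same map bounds how much each channel may be altered before artefacts become visible.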
Related papers
- Diffusion-based RGB-D Semantic Segmentation with Deformable Attention Transformer [10.982521876026281]
We introduce a diffusion-based framework to address the RGB-D semantic segmentation problem.
We demonstrate that utilizing a Deformable Attention Transformer as the encoder to extract features from depth images effectively captures the characteristics of invalid regions in depth measurements.
arXiv Detail & Related papers (2024-09-23T15:23:01Z)
- SG-JND: Semantic-Guided Just Noticeable Distortion Predictor For Image Compression [50.2496399381438]
Just noticeable distortion (JND) represents the threshold of distortion in an image that is minimally perceptible to the human visual system.
Traditional JND prediction methods rely only on pixel-level or sub-band-level features.
We propose a Semantic-Guided JND network to leverage semantic information for JND prediction.
arXiv Detail & Related papers (2024-08-08T07:14:57Z)
- The First Comprehensive Dataset with Multiple Distortion Types for Visual Just-Noticeable Differences [40.50003266570956]
This work establishes a generalized JND dataset with a coarse-to-fine JND selection, which contains 106 source images and 1,642 JND maps, covering 25 distortion types.
A fine JND selection is carried out on the JND candidates with a crowdsourced subjective assessment.
arXiv Detail & Related papers (2023-03-05T03:12:57Z)
- HVS-Inspired Signal Degradation Network for Just Noticeable Difference Estimation [69.49393407465456]
We propose an HVS-inspired signal degradation network for JND estimation.
We analyze the HVS perceptual process in JND subjective viewing to obtain relevant insights.
We show that the proposed method achieves SOTA performance in accurately estimating the redundancy of the HVS.
arXiv Detail & Related papers (2022-08-16T07:53:45Z)
- Modality-Guided Subnetwork for Salient Object Detection [5.491692465987937]
Most RGBD networks require multi-modalities from the input side and feed them separately through a two-stream design.
We present in this paper a novel fusion design named modality-guided subnetwork (MGSnet)
It has the following superior designs: 1) our model works for both RGB and RGBD data, dynamically estimating depth when it is not available.
arXiv Detail & Related papers (2021-10-10T20:59:11Z)
- Generation of the NIR spectral Band for Satellite Images with Convolutional Neural Networks [0.0]
Deep neural networks allow generating artificial spectral information, such as for the image colorization problem.
We study the generative adversarial network (GAN) approach in the task of the NIR band generation using just RGB channels of high-resolution satellite imagery.
arXiv Detail & Related papers (2021-06-13T15:14:57Z)
- Siamese Network for RGB-D Salient Object Detection and Beyond [113.30063105890041]
A novel framework is proposed to learn from both RGB and depth inputs through a shared network backbone.
Comprehensive experiments using five popular metrics show that the designed framework yields a robust RGB-D saliency detector.
We also link JL-DCF to the RGB-D semantic segmentation field, showing its capability of outperforming several semantic segmentation models.
arXiv Detail & Related papers (2020-08-26T06:01:05Z)
- Data-Level Recombination and Lightweight Fusion Scheme for RGB-D Salient Object Detection [73.31632581915201]
We propose a novel data-level recombination strategy to fuse RGB with D (depth) before deep feature extraction.
A newly designed lightweight triple-stream network is applied over the recombined data to achieve an optimal channel-wise complementary fusion between RGB and D.
arXiv Detail & Related papers (2020-08-07T10:13:05Z)
- Cross-Modal Weighting Network for RGB-D Salient Object Detection [76.0965123893641]
We propose a novel Cross-Modal Weighting (CMW) strategy to encourage comprehensive interactions between RGB and depth channels for RGB-D SOD.
Specifically, three RGB-depth interaction modules, named CMW-L, CMW-M and CMW-H, are developed to handle low-, middle- and high-level cross-modal information fusion, respectively.
CMWNet consistently outperforms 15 state-of-the-art RGB-D SOD methods on seven popular benchmarks.
arXiv Detail & Related papers (2020-07-09T16:01:44Z)