Towards Top-Down Just Noticeable Difference Estimation of Natural Images
- URL: http://arxiv.org/abs/2108.05058v1
- Date: Wed, 11 Aug 2021 06:51:50 GMT
- Title: Towards Top-Down Just Noticeable Difference Estimation of Natural Images
- Authors: Qiuping Jiang, Zhentao Liu, Shiqi Wang, Feng Shao, Weisi Lin
- Abstract summary: Just noticeable difference (JND) estimation has mainly been devoted to modeling the visibility masking effects of different factors in the spatial and frequency domains.
In this work, we address these problems in a dramatically different way, with a top-down design philosophy.
Our proposed JND model achieves better performance than several recent JND models.
- Score: 65.14746063298415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing efforts on just noticeable difference (JND) estimation have mainly been devoted to modeling the visibility masking effects of different factors in the spatial and frequency domains and then fusing them into an overall JND estimate. However, the overall visibility masking effect can depend on more contributing factors than those considered in the literature, and even the masking effect of an individual factor is difficult to formulate accurately. Moreover, the potential interactions among different masking effects are hard to characterize with a simple fusion model. In this work, we address these problems in a dramatically different way, with a top-down design philosophy. Instead of formulating and fusing multiple masking effects in a bottom-up manner, the proposed JND estimation model directly generates a critical perceptual lossless (CPL) image from a top-down perspective and computes the difference map between the original image and the CPL image as the final JND map. Given an input image, an adaptive critical point (the perceptual lossless threshold), defined as the minimum number of spectral components in the Karhunen-Loève Transform (KLT) required for perceptually lossless image reconstruction, is derived by exploiting the convergence characteristics of the KLT coefficient energy. The CPL image is then reconstructed via the inverse KLT according to the derived critical point. Finally, the difference map between the original image and the CPL image is taken as the JND map. The performance of the proposed JND model is evaluated in two applications: JND-guided noise injection and JND-guided image compression. Experimental results demonstrate that the proposed JND model outperforms several recent JND models.
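The pipeline above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the 8x8 block size, the fixed energy-ratio stopping rule (standing in for the paper's adaptive KLT coefficient-energy convergence criterion), and all function names are assumptions made for the sketch. A JND-guided noise-injection helper, mirroring the first evaluation application, is included as well.

```python
import numpy as np

def jnd_map_klt(img, block=8, energy_ratio=0.99):
    """Top-down JND sketch: build a critical perceptual lossless (CPL)
    image from a truncated block-wise KLT, then return the absolute
    difference map between the original and the CPL image."""
    h2 = img.shape[0] - img.shape[0] % block
    w2 = img.shape[1] - img.shape[1] % block
    # collect non-overlapping block x block patches as row vectors
    patches = (img[:h2, :w2]
               .reshape(h2 // block, block, w2 // block, block)
               .transpose(0, 2, 1, 3)
               .reshape(-1, block * block))
    mean = patches.mean(axis=0)
    centered = patches - mean
    # KLT basis = eigenvectors of the patch covariance matrix
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered / len(centered))
    order = np.argsort(eigvals)[::-1]          # sort by descending energy
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # critical point: fewest components reaching the target energy
    # (a stand-in for the paper's coefficient-energy convergence test)
    cum = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(cum, energy_ratio)) + 1
    # inverse KLT with only the first k components -> CPL image
    recon = (centered @ eigvecs[:, :k]) @ eigvecs[:, :k].T + mean
    cpl = (recon.reshape(h2 // block, w2 // block, block, block)
                .transpose(0, 2, 1, 3)
                .reshape(h2, w2))
    return np.abs(img[:h2, :w2] - cpl)

def jnd_noise_injection(img, jnd, seed=None):
    """JND-guided noise injection: perturb each pixel by at most its
    JND, so the result should be perceptually indistinguishable from
    the original even though its PSNR drops."""
    rng = np.random.default_rng(seed)
    return img + jnd * rng.uniform(-1.0, 1.0, size=img.shape)
```

Here the energy-ratio threshold plays the role of the critical point; the paper instead derives it adaptively per image from the convergence of the KLT coefficient energy.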
Related papers
- FCDM: Sparse-view Sinogram Inpainting with Frequency Domain Convolution Enhanced Diffusion Models [8.057037609493824]
Reducing the radiation dose in computed tomography (CT) is crucial, but it often results in sparse-view CT, where the number of available projections is significantly reduced.
Sinogram inpainting enables accurate image reconstruction with limited projections.
Existing models performing well on conventional RGB images for inpainting mostly fail in the case of sinograms.
We propose a novel model called the Frequency Convolution Diffusion Model (FCDM).
It employs frequency domain convolutions to extract frequency information from various angles and capture the intricate relationships between these angles.
arXiv Detail & Related papers (2024-08-26T12:31:38Z)
- SG-JND: Semantic-Guided Just Noticeable Distortion Predictor For Image Compression [50.2496399381438]
Just noticeable distortion (JND) represents the threshold of distortion in an image that is minimally perceptible to the human visual system.
Traditional JND prediction methods only rely on pixel-level or sub-band level features.
We propose a Semantic-Guided JND network to leverage semantic information for JND prediction.
arXiv Detail & Related papers (2024-08-08T07:14:57Z)
- IQNet: Image Quality Assessment Guided Just Noticeable Difference Prefiltering For Versatile Video Coding [0.9403328689534943]
Image prefiltering with just noticeable distortion (JND) improves perceptual coding efficiency by filtering out perceptually redundant information prior to compression.
This paper proposes a fine-grained JND prefiltering dataset guided by image quality assessment for accurate block-level JND modeling.
arXiv Detail & Related papers (2023-12-15T13:58:10Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework works with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z)
- Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It is shown to achieve a state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z)
- Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling [139.25215100378284]
We propose a hierarchical conditional flow (HCFlow) as a unified framework for image SR and image rescaling.
HCFlow learns a mapping between HR and LR image pairs by modelling the distribution of the LR image and the remaining high-frequency components simultaneously.
To further enhance the performance, other losses such as perceptual loss and GAN loss are combined with the commonly used negative log-likelihood loss in training.
arXiv Detail & Related papers (2021-08-11T16:11:01Z)
- Image Inpainting with Learnable Feature Imputation [8.293345261434943]
A regular convolution layer that applies its filter in the same way over known and unknown areas causes visual artifacts in the inpainted image.
We propose (layer-wise) feature imputation of the missing input values to a convolution.
We present comparisons on CelebA-HQ and Places2 to current state-of-the-art to validate our model.
arXiv Detail & Related papers (2020-11-02T16:05:32Z)
- Performance analysis of weighted low rank model with sparse image histograms for face recognition under low-level illumination and occlusion [0.0]
The purpose of low-rank matrix approximation (LRMA) models is to recover the underlying low-rank matrix from its degraded observation.
This paper presents a comparison of the low-rank approximations obtained by LRMARPC- and WSNM.
It also discusses trends observed in the experimental results produced by applying these algorithms.
arXiv Detail & Related papers (2020-07-24T05:59:28Z)