HVS-Inspired Signal Degradation Network for Just Noticeable Difference Estimation
- URL: http://arxiv.org/abs/2208.07583v1
- Date: Tue, 16 Aug 2022 07:53:45 GMT
- Title: HVS-Inspired Signal Degradation Network for Just Noticeable Difference Estimation
- Authors: Jian Jin, Yuan Xue, Xingxing Zhang, Lili Meng, Yao Zhao, Weisi Lin
- Abstract summary: We propose an HVS-inspired signal degradation network for JND estimation.
We analyze the HVS perceptual process in JND subjective viewing to obtain relevant insights.
We show that the proposed method achieves state-of-the-art (SOTA) performance in accurately estimating the redundancy of the HVS.
- Score: 69.49393407465456
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Significant improvement has been made in just noticeable difference (JND) modelling due to the development of deep neural networks, especially the recently developed unsupervised JND-generation models. However, they have a major drawback: the generated JND is assessed in the real-world signal domain rather than in the perceptual domain of the human brain. There is an obvious difference between JND assessed in these two domains, since the real-world visual signal is encoded by the human visual system (HVS) before it is delivered to the brain. Hence, we propose an HVS-inspired signal degradation network for JND estimation. To achieve this, we carefully analyze the HVS perceptual process in JND subjective viewing to obtain relevant insights, and then design an HVS-inspired signal degradation (HVS-SD) network to represent the signal degradation in the HVS. On the one hand, the well-learnt HVS-SD enables us to assess the JND in the perceptual domain. On the other hand, it provides more accurate prior information to better guide JND generation. Additionally, since a reasonable JND should not cause visual attention to shift, a visual attention loss is proposed to control JND generation. Experimental results demonstrate that the proposed method achieves SOTA performance in accurately estimating the redundancy of the HVS. Source code will be available at https://github.com/jianjin008/HVS-SD-JND.
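To make the two training objectives described in the abstract concrete, the following is a minimal PyTorch sketch of (i) assessing the JND-contaminated image against the original in the perceptual domain, i.e. after both pass through an HVS-SD-style degradation network, and (ii) a visual attention loss that penalises attention shift. The module names `hvs_sd` and `saliency`, the L1 distances, and the placeholder CNNs are assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch, assuming hypothetical `hvs_sd` and `saliency` modules;
# an illustration of the two ideas above, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PerceptualJNDLoss(nn.Module):
    """Compare original and JND-contaminated images in the perceptual
    domain, i.e. after both pass through an HVS-SD-style network."""

    def __init__(self, hvs_sd: nn.Module):
        super().__init__()
        self.hvs_sd = hvs_sd  # assumed: maps pixel-domain images to perceptual-domain signals

    def forward(self, img: torch.Tensor, jnd_map: torch.Tensor) -> torch.Tensor:
        contaminated = torch.clamp(img + jnd_map, 0.0, 1.0)
        return F.l1_loss(self.hvs_sd(contaminated), self.hvs_sd(img))


class VisualAttentionLoss(nn.Module):
    """Penalise attention shift: the saliency map of the JND-contaminated
    image should stay close to that of the original image."""

    def __init__(self, saliency: nn.Module):
        super().__init__()
        self.saliency = saliency  # assumed: a fixed saliency predictor

    def forward(self, img: torch.Tensor, jnd_map: torch.Tensor) -> torch.Tensor:
        contaminated = torch.clamp(img + jnd_map, 0.0, 1.0)
        return F.l1_loss(self.saliency(contaminated), self.saliency(img))


if __name__ == "__main__":
    # Tiny placeholder CNNs stand in for the real HVS-SD and saliency models.
    hvs_sd = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(16, 3, 3, padding=1))
    saliency = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
    img = torch.rand(1, 3, 64, 64)          # original image in [0, 1]
    jnd = 0.05 * torch.randn(1, 3, 64, 64)  # candidate JND map
    total = PerceptualJNDLoss(hvs_sd)(img, jnd) + VisualAttentionLoss(saliency)(img, jnd)
    print(float(total))
```

In a full training loop, both terms would be weighted and combined with the losses that drive the JND generator itself; the weights and generator are omitted here since they are not specified in the abstract.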
Related papers
- DT-JRD: Deep Transformer based Just Recognizable Difference Prediction Model for Video Coding for Machines [48.07705666485972]
Just Recognizable Difference (JRD) represents the minimum visual difference that is detectable by machine vision.
We propose a Deep Transformer based JRD (DT-JRD) prediction model for Video Coding for Machines (VCM).
The accurately predicted JRD can be used to reduce the coding bit rate while maintaining the accuracy of machine tasks.
arXiv Detail & Related papers (2024-11-14T09:34:36Z)
- SG-JND: Semantic-Guided Just Noticeable Distortion Predictor For Image Compression [50.2496399381438]
Just noticeable distortion (JND) represents the threshold of distortion in an image that is minimally perceptible to the human visual system.
Traditional JND prediction methods only rely on pixel-level or sub-band level features.
We propose a Semantic-Guided JND network to leverage semantic information for JND prediction.
arXiv Detail & Related papers (2024-08-08T07:14:57Z)
- The First Comprehensive Dataset with Multiple Distortion Types for Visual Just-Noticeable Differences [40.50003266570956]
This work establishes a generalized JND dataset with a coarse-to-fine JND selection, which contains 106 source images and 1,642 JND maps, covering 25 distortion types.
A fine JND selection is carried out on the JND candidates with a crowdsourced subjective assessment.
arXiv Detail & Related papers (2023-03-05T03:12:57Z)
- HDNet: High-resolution Dual-domain Learning for Spectral Compressive Imaging [138.04956118993934]
We propose a high-resolution dual-domain learning network (HDNet) for HSI reconstruction.
On the one hand, the proposed HR spatial-spectral attention module with its efficient feature fusion provides continuous and fine pixel-level features.
On the other hand, frequency domain learning (FDL) is introduced for HSI reconstruction to narrow the frequency domain discrepancy.
arXiv Detail & Related papers (2022-03-04T06:37:45Z)
- Full RGB Just Noticeable Difference (JND) Modelling [69.42889006770018]
Just Noticeable Difference (JND) has many applications in multimedia signal processing.
We propose a JND model to generate the JND by taking the characteristics of full RGB channels into account.
An RGB-JND-NET is proposed, where the visual content in full RGB channels is used to extract features for JND generation.
arXiv Detail & Related papers (2022-03-01T17:16:57Z)
- Does deep machine vision have just noticeable difference (JND)? [74.68805484753442]
There has been little exploration of whether Just Noticeable Difference (JND) exists for AI, such as Deep Machine Vision (DMV).
In this paper, we make an initial attempt and demonstrate that DMV does have a JND, termed DMVJND.
It is discovered that DMV can tolerate distorted images with an average PSNR of only 9.56 dB (lower PSNR means more distortion tolerated) when the JND is generated via unsupervised learning with our DMVJND-NET.
arXiv Detail & Related papers (2021-02-16T14:19:35Z)
- Ventral-Dorsal Neural Networks: Object Detection via Selective Attention [51.79577908317031]
We propose a new framework called Ventral-Dorsal Networks (VDNets).
Inspired by the structure of the human visual system, we propose the integration of a "Ventral Network" and a "Dorsal Network".
Our experimental results reveal that the proposed method outperforms state-of-the-art object detection approaches.
arXiv Detail & Related papers (2020-05-15T23:57:36Z)