Causal Perception Inspired Representation Learning for Trustworthy Image Quality Assessment
- URL: http://arxiv.org/abs/2404.19567v1
- Date: Tue, 30 Apr 2024 13:55:30 GMT
- Title: Causal Perception Inspired Representation Learning for Trustworthy Image Quality Assessment
- Authors: Lei Wang, Desen Yuan
- Abstract summary: We propose to build a trustworthy IQA model via Causal Perception inspired Representation Learning (CPRL)
CPR serves as the causation of the subjective quality label, which is invariant to imperceptible adversarial perturbations.
Experiments on four benchmark databases show that the proposed CPRL method outperforms many state-of-the-art adversarial defense methods.
- Score: 2.290956583394892
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite great success in modeling visual perception, deep neural network based image quality assessment (IQA) remains unreliable in real-world applications due to its vulnerability to adversarial perturbations and its inexplicit black-box structure. In this paper, we propose to build a trustworthy IQA model via Causal Perception inspired Representation Learning (CPRL), and introduce a score reflection attack method for IQA models. More specifically, we assume that each image is composed of a Causal Perception Representation (CPR) and a non-causal perception representation (N-CPR). CPR serves as the causation of the subjective quality label and is invariant to imperceptible adversarial perturbations. Conversely, N-CPR exhibits spurious associations with the subjective quality label and may change significantly under adversarial perturbations. To extract the CPR from each input image, we develop a soft ranking based channel-wise activation function to mediate the causally sufficient (beneficial for high prediction accuracy) and necessary (beneficial for high robustness) deep features, and optimize it via an intervention-based minimax game. Experiments on four benchmark databases show that the proposed CPRL method outperforms many state-of-the-art adversarial defense methods and provides explicit model interpretation.
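The abstract does not give the exact form of the soft ranking based channel-wise activation; the sketch below is only an illustrative guess at what a channel-wise soft-ranking gate could look like. The function name, the `temperature` parameter, and the softmax-style surrogate for a hard rank are all assumptions, not the paper's actual method:

```python
import numpy as np

def soft_rank_gate(features, temperature=1.0):
    """Hypothetical channel-wise soft-ranking gate.

    Channels whose mean activation ranks higher receive gate values
    closer to 1; low-ranking channels are softly suppressed.
    `features` has shape (channels, height, width).
    """
    channel_means = features.mean(axis=(1, 2))  # one score per channel
    # Differentiable surrogate for a hard rank: exponential scores,
    # rescaled so the strongest channel gets a gate of exactly 1.
    weights = np.exp(channel_means / temperature)
    gates = weights / weights.max()
    return features * gates[:, None, None]

# Toy feature map: 4 channels on a 2x2 spatial grid
feats = np.arange(16, dtype=float).reshape(4, 2, 2)
gated = soft_rank_gate(feats)
print(gated.shape)  # (4, 2, 2)
```

With this gating, the top-ranked channel passes through unchanged while weaker channels are attenuated, which loosely mirrors the idea of keeping causally sufficient features and down-weighting the rest.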
Related papers
- Beyond Score Changes: Adversarial Attack on No-Reference Image Quality Assessment from Two Perspectives [15.575900555433863]
We introduce a new framework of correlation-error-based attacks that perturb both the correlation within an image set and score changes on individual images.
Our research focuses on ranking-related correlation metrics such as Spearman's Rank-Order Correlation Coefficient (SROCC) and prediction-error metrics such as Mean Squared Error (MSE).
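As a concrete reference for the two metrics named above, here is a minimal NumPy sketch of SROCC (ignoring tied ranks for simplicity) and MSE; the score arrays are made-up illustrative values, not data from the paper:

```python
import numpy as np

def srocc(x, y):
    """Spearman's rank-order correlation: Pearson correlation of the
    ranks. This simple version does not average ranks over ties."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

def mse(x, y):
    """Mean squared error between predicted and subjective scores."""
    return float(np.mean((np.asarray(x) - np.asarray(y)) ** 2))

mos  = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical subjective scores
pred = [1.2, 1.9, 3.4, 3.8, 5.1]   # hypothetical model predictions
print(srocc(mos, pred))  # ranks agree perfectly, so SROCC is 1.0
print(mse(mos, pred))
```

A correlation-error-based attack in the framework described above would try to push SROCC down (scrambling the ranking across a set) while also inflating per-image error, rather than targeting a single score.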
arXiv Detail & Related papers (2024-04-20T05:24:06Z) - Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment [49.36799270585947]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference.
We propose a novel contrastive pre-training framework tailored for PCQA (CoPA)
Our method outperforms the state-of-the-art PCQA methods on popular benchmarks.
arXiv Detail & Related papers (2024-03-15T07:16:07Z) - Black-box Adversarial Attacks Against Image Quality Assessment Models [16.11900427447442]
The goal of No-Reference Image Quality Assessment (NR-IQA) is to predict the perceptual quality of an image in line with its subjective evaluation.
This paper makes the first attempt to explore the black-box adversarial attacks on NR-IQA models.
arXiv Detail & Related papers (2024-02-27T14:16:39Z) - Adaptive Feature Selection for No-Reference Image Quality Assessment by Mitigating Semantic Noise Sensitivity [55.399230250413986]
We propose a Quality-Aware Feature Matching IQA Metric (QFM-IQM) to remove harmful semantic noise features from the upstream task.
Our approach achieves superior performance to the state-of-the-art NR-IQA methods on eight standard IQA datasets.
arXiv Detail & Related papers (2023-12-11T06:50:27Z) - Textural-Structural Joint Learning for No-Reference Super-Resolution Image Quality Assessment [59.91741119995321]
We develop a dual stream network to jointly explore the textural and structural information for quality prediction, dubbed TSNet.
By mimicking the human vision system (HVS) that pays more attention to the significant areas of the image, we develop the spatial attention mechanism to make the visual-sensitive areas more distinguishable.
Experimental results show that the proposed TSNet predicts visual quality more accurately than state-of-the-art IQA methods and demonstrates better consistency with human perception.
arXiv Detail & Related papers (2022-05-27T09:20:06Z) - Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
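CONTRIQUE's exact objective is not reproduced here; the following is a generic NT-Xent-style pairwise contrastive loss, a common stand-in for "contrastive pairwise objective". All names, the batch shapes, and the temperature value are chosen purely for illustration:

```python
import numpy as np

def pairwise_contrastive_loss(z1, z2, temperature=0.1):
    """Generic NT-Xent-style loss for one batch of paired embeddings.

    z1[i] and z2[i] are embeddings of two views of the same image;
    every other pairing in the batch is treated as a negative.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature  # scaled cosine-similarity matrix
    # Cross-entropy with the matching pair (the diagonal) as the target
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Identical views give the lowest loss this batch can achieve
print(pairwise_contrastive_loss(z, z))
```

Minimizing such a loss pulls the two views of each image together in embedding space while pushing different images apart, which is how quality-relevant structure can emerge without subjective labels.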
arXiv Detail & Related papers (2021-10-25T21:01:00Z) - No-Reference Image Quality Assessment by Hallucinating Pristine Features [24.35220427707458]
We propose a no-reference (NR) image quality assessment (IQA) method via feature level pseudo-reference (PR) hallucination.
The effectiveness of our proposed method is demonstrated on four popular IQA databases.
arXiv Detail & Related papers (2021-08-09T16:48:34Z) - Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach for training it on both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z) - Towards Robust Classification with Image Quality Assessment [0.9213700601337386]
Deep convolutional neural networks (DCNN) are vulnerable to adversarial examples and sensitive to perceptual quality as well as the acquisition condition of images.
In this paper, we investigate the connection between adversarial manipulation and image quality, then propose a protective mechanism.
Our method combines image quality assessment with knowledge distillation to detect input images that would trigger a DCNN to produce egregiously wrong results.
arXiv Detail & Related papers (2020-04-14T03:27:35Z) - Adversarial Attack on Deep Product Quantization Network for Image Retrieval [74.85736968193879]
Deep product quantization network (DPQN) has recently received much attention in fast image retrieval tasks.
Recent studies show that deep neural networks (DNNs) are vulnerable to input with small and maliciously designed perturbations.
We propose product quantization adversarial generation (PQ-AG) to generate adversarial examples for product quantization based retrieval systems.
arXiv Detail & Related papers (2020-02-26T09:25:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.