(ASNA) An Attention-based Siamese-Difference Neural Network with
Surrogate Ranking Loss function for Perceptual Image Quality Assessment
- URL: http://arxiv.org/abs/2105.02531v1
- Date: Thu, 6 May 2021 09:04:21 GMT
- Title: (ASNA) An Attention-based Siamese-Difference Neural Network with
Surrogate Ranking Loss function for Perceptual Image Quality Assessment
- Authors: Seyed Mehdi Ayyoubzadeh, Ali Royat
- Abstract summary: Deep convolutional neural networks (DCNN) that leverage the adversarial training framework for image restoration and enhancement have significantly improved the sharpness of processed images.
It is therefore necessary to develop a quantitative metric that reflects their performance and is well aligned with the perceived quality of an image.
This paper proposes a convolutional neural network based on an extension of the traditional Siamese architecture.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep convolutional neural networks (DCNN) that leverage the
adversarial training framework for image restoration and enhancement have
significantly improved the sharpness of processed images. Surprisingly, although
these DCNNs produce visually crisper images than other methods, they may receive
lower quality scores when popular measures are used to evaluate them.
It is therefore necessary to develop a quantitative metric that reflects their
performance and is well aligned with the perceived quality of an image.
Popular quantitative metrics such as the peak signal-to-noise ratio (PSNR), the
structural similarity index measure (SSIM), and the Perceptual Index (PI) are not
well correlated with the mean opinion score (MOS) of an image, especially for
neural networks trained with adversarial loss functions.
This paper proposes a convolutional neural network based on an extension of the
traditional Siamese architecture, called the Siamese-Difference neural network.
We equip this architecture with spatial and channel-wise attention mechanisms to
improve its performance. Finally, we employ an auxiliary loss function to train
our model. This additional cost function acts as a surrogate for the ranking
loss: it increases Spearman's rank correlation coefficient while remaining
differentiable with respect to the neural network parameters. Our method achieved
superior performance in the \textbf{\textit{NTIRE 2021 Perceptual Image Quality
Assessment}} Challenge. The implementation of our proposed method is publicly
available.
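A minimal sketch of the Siamese-Difference idea described in the abstract is given below, written in PyTorch. The backbone depth, the channel counts, the CBAM-style attention module, and the regression head are illustrative assumptions, not the authors' implementation; only the overall structure (a shared encoder applied to the reference and distorted images, channel-wise and spatial attention on the features, and a score regressed from the feature difference) follows the abstract.

```python
# A minimal sketch (not the authors' exact implementation) of a
# Siamese-Difference network with channel-wise and spatial attention.
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    """CBAM-style attention: channel-wise gating followed by spatial gating
    (an assumed variant of the attention described in the abstract)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention from global average- and max-pooled descriptors.
        gate = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3)))
                             + self.channel_mlp(x.amax(dim=(2, 3))))
        x = x * gate.view(b, c, 1, 1)
        # Spatial attention from per-pixel channel statistics.
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(stats))


class SiameseDifferenceIQA(nn.Module):
    """Shared encoder for the reference and distorted images; the difference
    of their attended features is regressed to a single quality score."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.attention = ChannelSpatialAttention(64)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, 1))

    def forward(self, ref: torch.Tensor, dist: torch.Tensor) -> torch.Tensor:
        feat_ref = self.attention(self.encoder(ref))
        feat_dist = self.attention(self.encoder(dist))
        return self.head(feat_ref - feat_dist).squeeze(1)  # shape: (batch,)
```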
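The auxiliary ranking objective can be approximated as sketched below. This uses a pairwise-sigmoid soft-rank construction, one common way to make Spearman's rank correlation differentiable; the temperature, the soft-rank form, and the combination with an MSE term are assumptions rather than the paper's exact surrogate.

```python
# A minimal sketch of a differentiable surrogate for Spearman's rank
# correlation (SROCC) between predicted scores and MOS labels, built from a
# pairwise-sigmoid soft rank. Illustrative only; the paper's exact surrogate
# and its hyperparameters (e.g. the temperature) may differ.
import torch


def soft_rank(scores: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Smooth approximation of each score's rank within the batch:
    soft_rank(s)[j] ~= #(i : s_i < s_j) + 0.5, differentiable w.r.t. scores."""
    diff = scores.unsqueeze(0) - scores.unsqueeze(1)   # diff[i, j] = s_j - s_i
    return torch.sigmoid(diff / temperature).sum(dim=0)


def surrogate_spearman_loss(pred: torch.Tensor, mos: torch.Tensor) -> torch.Tensor:
    """1 - Pearson correlation between the soft ranks of the predictions and
    the (fixed) ranks of the MOS labels; minimizing it pushes SROCC toward 1."""
    pred_rank = soft_rank(pred)
    mos_rank = torch.argsort(torch.argsort(mos)).float()  # hard ranks, no gradient needed
    pr = pred_rank - pred_rank.mean()
    mr = mos_rank - mos_rank.mean()
    corr = (pr * mr).sum() / (pr.norm() * mr.norm() + 1e-8)
    return 1.0 - corr


# Assumed usage: add the surrogate as an auxiliary term next to a primary
# regression loss, e.g.
#   loss = torch.nn.functional.mse_loss(pred, mos) \
#          + lam * surrogate_spearman_loss(pred, mos)
```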
Related papers
- Defending Spiking Neural Networks against Adversarial Attacks through Image Purification [20.492531851480784]
Spiking Neural Networks (SNNs) aim to bridge the gap between neuroscience and machine learning.
Like convolutional neural networks, SNNs are vulnerable to adversarial attacks.
We propose a biologically inspired methodology to enhance the robustness of SNNs.
arXiv Detail & Related papers (2024-04-26T00:57:06Z) - Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z) - Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z) - Spherical CNN for Medical Imaging Applications: Importance of
Equivariance in image reconstruction and denoising [0.0]
Equivariant networks are efficient, high-performance approaches for tomography applications.
We evaluate the efficacy of equivariant spherical CNNs (SCNNs) for 2- and 3-dimensional medical imaging problems.
We propose a novel approach to employ SCNNs as a complement to conventional image reconstruction tools.
arXiv Detail & Related papers (2023-07-06T21:18:47Z) - Increasing the Accuracy of a Neural Network Using Frequency Selective
Mesh-to-Grid Resampling [4.211128681972148]
We propose the use of keypoint frequency selective mesh-to-grid resampling (FSMR) for the processing of input data for neural networks.
We show that, depending on the network architecture and classification task, applying FSMR during training aids the learning process.
The classification accuracy can be increased by up to 4.31 percentage points for ResNet50 and the Oxflower17 dataset.
arXiv Detail & Related papers (2022-09-28T21:34:47Z) - Image Superresolution using Scale-Recurrent Dense Network [30.75380029218373]
Recent advances in the design of convolutional neural networks (CNNs) have yielded significant improvements in the performance of image super-resolution (SR).
We propose a scale-recurrent SR architecture built upon units containing a series of dense connections within a residual block (Residual Dense Blocks, RDBs).
Our scale-recurrent design delivers competitive performance for higher scale factors while being parametrically more efficient than current state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-28T09:18:43Z) - Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z) - NeighCNN: A CNN based SAR Speckle Reduction using Feature preserving
Loss Function [1.7188280334580193]
NeighCNN is a deep learning-based speckle reduction algorithm that handles multiplicative noise.
Various synthetic as well as real SAR images are used to test the NeighCNN architecture.
arXiv Detail & Related papers (2021-08-26T04:20:07Z) - Asymmetric CNN for image super-resolution [102.96131810686231]
Deep convolutional neural networks (CNNs) have been widely applied for low-level vision over the past five years.
We propose an asymmetric CNN (ACNet) comprising an asymmetric block (AB), a memory enhancement block (MEB), and a high-frequency feature enhancement block (HFFEB) for image super-resolution.
Our ACNet can effectively address single image super-resolution (SISR), blind SISR and blind SISR of blind noise problems.
arXiv Detail & Related papers (2021-03-25T07:10:46Z) - Learning Deep Interleaved Networks with Asymmetric Co-Attention for
Image Restoration [65.11022516031463]
We present a deep interleaved network (DIN) that learns how information at different states should be combined for high-quality (HQ) image reconstruction.
We further propose asymmetric co-attention (AsyCA), attached at each interleaved node, to model feature dependencies.
Our presented DIN can be trained end-to-end and applied to various image restoration tasks.
arXiv Detail & Related papers (2020-10-29T15:32:00Z) - Perceptually Optimizing Deep Image Compression [53.705543593594285]
Mean squared error (MSE) and $\ell_p$ norms have largely dominated the measurement of loss in neural networks.
We propose a different proxy approach to optimize image analysis networks against quantitative perceptual models.
arXiv Detail & Related papers (2020-07-03T14:33:28Z)