Multi-Frame Quality Enhancement On Compressed Video Using Quantised Data
of Deep Belief Networks
- URL: http://arxiv.org/abs/2201.11389v1
- Date: Thu, 27 Jan 2022 09:14:57 GMT
- Title: Multi-Frame Quality Enhancement On Compressed Video Using Quantised Data
of Deep Belief Networks
- Authors: Dionne Takudzwa Chasi, Mkhuseli Ngxande
- Abstract summary: In the age of streaming and surveillance, compressed video enhancement has become a problem in need of constant improvement.
The approach uses the frames with peak quality in a region to improve the lower-quality frames in that region.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the age of streaming and surveillance, compressed video enhancement has
become a problem in need of constant improvement. Here, we investigate a way of
improving the Multi-Frame Quality Enhancement (MFQE) approach, which uses the
frames with peak quality in a region to improve those of lower quality in that
region. Our variant obtains quantized data from the videos using a deep belief
network and feeds this quantized data into the MF-CNN architecture to improve
the compressed video. We further investigate the impact of using a Bi-LSTM for
detecting the peak quality frames (PQFs). Our approach obtains better results
than the first version of MFQE, which uses an SVM for PQF detection. On the
other hand, it does not outperform the latest version of the MFQE approach,
which uses a Bi-LSTM for PQF detection.
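As a rough illustration of the pipeline described above, the sketch below pairs a Bi-LSTM peak-quality-frame detector with a toy multi-frame fusion network. This is a minimal PyTorch-style sketch under assumed feature dimensions and module names (PQFDetector, MFCNN); it does not reproduce the paper's deep belief network quantisation or the actual MF-CNN architecture.

```python
# Hypothetical sketch of an MFQE-style pipeline: a Bi-LSTM flags peak-quality
# frames (PQFs), and a small fusion CNN enhances a non-PQF from its neighbours.
import torch
import torch.nn as nn

class PQFDetector(nn.Module):
    """Bi-LSTM that predicts a PQF probability for each frame."""
    def __init__(self, feat_dim=36, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, frame_feats):                     # (batch, n_frames, feat_dim)
        h, _ = self.lstm(frame_feats)
        return torch.sigmoid(self.head(h)).squeeze(-1)  # (batch, n_frames)

class MFCNN(nn.Module):
    """Toy stand-in for the MF-CNN: fuses a non-PQF with its two nearest PQFs."""
    def __init__(self, channels=1):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, prev_pqf, non_pqf, next_pqf):
        x = torch.cat([prev_pqf, non_pqf, next_pqf], dim=1)
        return non_pqf + self.fuse(x)                   # residual enhancement

# Usage on dummy data: detect PQFs, then enhance a low-quality frame.
feats = torch.randn(1, 30, 36)        # per-frame features (in the paper, DBN-quantised data)
pqf_prob = PQFDetector()(feats)
frame = torch.rand(1, 1, 64, 64)
enhanced = MFCNN()(frame, frame, frame)
```

In the paper's setting, the per-frame features would come from the quantised DBN output rather than random tensors, and each non-PQF would be fused with its nearest preceding and following PQFs selected by the detector.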
Related papers
- RMT-BVQA: Recurrent Memory Transformer-based Blind Video Quality Assessment for Enhanced Video Content [7.283653823423298]
We propose a novel blind deep video quality assessment (VQA) method specifically for enhanced video content.
It employs a new Recurrent Memory Transformer (RMT) based network architecture to obtain video quality representations.
The extracted quality representations are then combined through linear regression to generate video-level quality indices.
arXiv Detail & Related papers (2024-05-14T14:01:15Z)
- Compression-Realized Deep Structural Network for Video Quality Enhancement [78.13020206633524]
This paper focuses on the task of quality enhancement for compressed videos.
Most of the existing methods lack a structured design to optimally leverage the priors within compression codecs.
A new paradigm is urgently needed for a more "conscious" process of quality enhancement.
arXiv Detail & Related papers (2024-05-10T09:18:17Z)
- Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment [49.36799270585947]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference.
We propose a novel contrastive pre-training framework tailored for PCQA (CoPA).
Our method outperforms the state-of-the-art PCQA methods on popular benchmarks.
arXiv Detail & Related papers (2024-03-15T07:16:07Z)
- FloLPIPS: A Bespoke Video Quality Metric for Frame Interpolation [4.151439675744056]
We present a bespoke full reference video quality metric for VFI, FloLPIPS, that builds on the popular perceptual image quality metric, LPIPS.
FloLPIPS shows superior correlation performance with subjective ground truth over 12 popular quality assessors.
arXiv Detail & Related papers (2022-07-17T09:07:33Z)
- PeQuENet: Perceptual Quality Enhancement of Compressed Video with Adaptation- and Attention-based Network [27.375830262287163]
We propose a generative adversarial network (GAN) framework to enhance the perceptual quality of compressed videos.
Our framework includes attention and adaptation to different quantization parameters (QPs) in a single model.
Experimental results demonstrate the superior performance of the proposed PeQuENet compared with the state-of-the-art compressed video quality enhancement algorithms.
arXiv Detail & Related papers (2022-06-16T02:49:28Z)
- Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z)
- FAVER: Blind Quality Prediction of Variable Frame Rate Videos [47.951054608064126]
Video quality assessment (VQA) remains an important and challenging problem that affects many applications at the widest scales.
We propose a first-of-a-kind blind VQA model for evaluating HFR videos, which we dub the Framerate-Aware Video Evaluator w/o Reference (FAVER).
Our experiments on several HFR video quality datasets show that FAVER outperforms other blind VQA algorithms at a reasonable computational cost.
arXiv Detail & Related papers (2022-01-05T07:54:12Z)
- Boosting the Performance of Video Compression Artifact Reduction with Reference Frame Proposals and Frequency Domain Information [31.053879834073502]
We propose an effective reference frame proposal strategy to boost the performance of the existing multi-frame approaches.
Experimental results show that our method achieves better fidelity and perceptual performance on the MFQE 2.0 dataset than the state-of-the-art methods.
arXiv Detail & Related papers (2021-05-31T13:46:11Z)
- Video Quality Enhancement Using Deep Learning-Based Prediction Models for Quantized DCT Coefficients in MPEG I-frames [0.0]
We propose an MPEG video decoder that operates in the frequency-to-frequency domain.
It reads the quantized DCT coefficients received from a low-quality I-frames bitstream and, using a deep learning-based model, predicts the missing coefficients in order to recompose the same frames with enhanced quality.
arXiv Detail & Related papers (2020-10-09T16:41:18Z)
- Multi-level Wavelet-based Generative Adversarial Network for Perceptual Quality Enhancement of Compressed Video [51.631731922593225]
Existing methods mainly focus on enhancing the objective quality of compressed video while ignoring its perceptual quality.
We propose a novel generative adversarial network (GAN) based on multi-level wavelet packet transform (WPT) to enhance the perceptual quality of compressed video.
arXiv Detail & Related papers (2020-08-02T15:01:38Z)
- Learning for Video Compression with Hierarchical Quality and Recurrent Enhancement [164.7489982837475]
We propose a Hierarchical Learned Video Compression (HLVC) method with three hierarchical quality layers and a recurrent enhancement network.
In our HLVC approach, the hierarchical quality benefits the coding efficiency, since high-quality information facilitates the compression and enhancement of low-quality frames at both the encoder and decoder sides.
arXiv Detail & Related papers (2020-03-04T09:31:37Z)