A QP-adaptive Mechanism for CNN-based Filter in Video Coding
- URL: http://arxiv.org/abs/2010.13059v1
- Date: Sun, 25 Oct 2020 08:02:38 GMT
- Title: A QP-adaptive Mechanism for CNN-based Filter in Video Coding
- Authors: Chao Liu and Heming Sun and Jiro Katto and Xiaoyang Zeng and Yibo Fan
- Abstract summary: This paper presents a generic method to help an arbitrary CNN-filter handle different quantization noise.
When the quantization noise increases, the ability of the CNN-filter to suppress noise improves accordingly.
An additional BD-rate reduction of 0.2% is achieved by our proposed method for chroma components.
- Score: 26.1307267761763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural network (CNN)-based filters have achieved great success
in video coding. However, in most previous works, individual models are needed
for each quantization parameter (QP) band. This paper presents a generic method
to help an arbitrary CNN-filter handle different quantization noise. We model
the quantization noise problem and implement a feasible solution on CNN, which
introduces the quantization step (Qstep) into the convolution. When the
quantization noise increases, the ability of the CNN-filter to suppress noise
improves accordingly. This method can be used directly to replace the (vanilla)
convolution layer in any existing CNN-filters. By using only 25% of the
parameters, the proposed method achieves better performance than using multiple
models with VTM-6.3 anchor. Besides, an additional BD-rate reduction of 0.2% is
achieved by our proposed method for chroma components.
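The standard QP-to-Qstep relation used in HEVC/VVC is Qstep = 2^((QP-4)/6), so Qstep doubles every 6 QP steps. A minimal sketch of that relation, plus one *plausible* way a convolution could be conditioned on Qstep, is below; the additive Qstep term is a hypothetical simplification for illustration, not the paper's actual layer design:

```python
import math

def qstep_from_qp(qp: int) -> float:
    """HEVC/VVC relation between QP and the quantization step size."""
    return 2.0 ** ((qp - 4) / 6.0)

def qstep_conv1d(samples, weights, qstep_weight, qstep, bias=0.0):
    """Illustrative 1D "Qstep-conditioned" convolution (valid padding).

    Besides the ordinary kernel, each output receives a contribution
    qstep_weight * qstep, so the learned filtering strength can grow
    with the quantization noise level. Hypothetical formulation: the
    paper's exact way of introducing Qstep is not reproduced here.
    """
    k = len(weights)
    out = []
    for i in range(len(samples) - k + 1):
        acc = sum(w * s for w, s in zip(weights, samples[i:i + k]))
        out.append(acc + qstep_weight * qstep + bias)
    return out
```

Because Qstep enters as a continuous input rather than a per-QP model choice, a single set of weights can in principle cover the whole QP range, which is what lets the paper replace multiple per-QP-band models.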
Related papers
- Compressing audio CNNs with graph centrality based filter pruning [20.028643659869573]
Convolutional neural networks (CNNs) are commonplace in high-performing solutions to many real-world problems.
CNNs have many parameters and filters, with some having a larger impact on the performance than others.
We propose a pruning framework that eliminates filters with the highest "commonality".

arXiv Detail & Related papers (2023-05-05T09:38:05Z)
- GHN-Q: Parameter Prediction for Unseen Quantized Convolutional Architectures via Graph Hypernetworks [80.29667394618625]
We conduct the first-ever study exploring the use of graph hypernetworks for predicting parameters of unseen quantized CNN architectures.
We focus on a reduced CNN search space and find that GHN-Q can in fact predict quantization-robust parameters for various 8-bit quantized CNNs.
arXiv Detail & Related papers (2022-08-26T08:00:02Z)
- A Passive Similarity based CNN Filter Pruning for Efficient Acoustic Scene Classification [23.661189257759535]
We present a method to develop low-complexity convolutional neural networks (CNNs) for acoustic scene classification (ASC).
We propose a passive filter pruning framework, where a few convolutional filters from the CNNs are eliminated to yield compressed CNNs.
The proposed method is simple, reduces computations per inference by 27% with 25% fewer parameters, and incurs less than a 1% drop in accuracy.
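A similarity-driven pruning pass can be sketched in a few lines: flatten each filter, find the most similar (highest cosine) pair, and mark one member of that pair as redundant. This is a generic illustration of similarity-based pruning, not the paper's specific scoring rule:

```python
import math

def cosine(a, b):
    """Cosine similarity between two flattened filters."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def most_redundant_filter(filters):
    """Return the index of a filter to prune: the second member of the
    most similar pair of flattened filters (illustrative criterion)."""
    best, victim = -1.0, None
    for i in range(len(filters)):
        for j in range(i + 1, len(filters)):
            s = cosine(filters[i], filters[j])
            if s > best:
                best, victim = s, j
    return victim
```

The pruning is "passive" in the sense that it needs only the trained weights, with no extra training data or gradient information.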
arXiv Detail & Related papers (2022-03-29T17:00:06Z)
- Filter-enhanced MLP is All You Need for Sequential Recommendation [89.0974365344997]
In online platforms, logged user behavior data inevitably contains noise.
We borrow the idea of filtering algorithms from signal processing that attenuates the noise in the frequency domain.
We propose FMLP-Rec, an all-MLP model with learnable filters for the sequential recommendation task.
arXiv Detail & Related papers (2022-02-28T05:49:35Z)
- Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
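The core of BN-based importance scoring is that each filter's output passes through a BatchNorm scale (gamma), so a small |gamma| suggests the filter contributes little. A minimal sketch under that assumption (the actual paper may combine gamma with other BN statistics):

```python
def filters_to_keep(bn_gammas, keep_ratio=0.5):
    """Rank filters by |gamma| of the following BatchNorm layer and
    keep the top keep_ratio fraction. Returns kept filter indices in
    their original order. Illustrative criterion only."""
    n_keep = max(1, int(len(bn_gammas) * keep_ratio))
    ranked = sorted(range(len(bn_gammas)),
                    key=lambda i: abs(bn_gammas[i]),
                    reverse=True)
    return sorted(ranked[:n_keep])
```

Because the scores come from an already-trained network's BN parameters, no extra sensitivity analysis or retraining pass is needed to produce the ranking.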
arXiv Detail & Related papers (2021-12-02T12:04:59Z)
- Fixed-point Quantization of Convolutional Neural Networks for Quantized Inference on Embedded Platforms [0.9954382983583577]
We propose a method to optimally quantize the weights, biases and activations of each layer of a pre-trained CNN.
We find that layer-wise quantization of parameters significantly helps in this process.
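Layer-wise fixed-point quantization typically means choosing, per layer, how many bits go to the fractional part so the layer's largest value still fits, then rounding and clamping. A hedged sketch of that idea (a generic signed Q-format scheme, not the paper's exact procedure):

```python
import math

def quantize_fixed_point(values, total_bits=8):
    """Signed fixed-point quantization sketch for one layer's values.

    Picks the fractional bit count so the integer part of the
    largest-magnitude value fits (one bit reserved for sign), then
    rounds, clamps to the representable range, and dequantizes.
    """
    max_abs = max(abs(v) for v in values)
    int_bits = max(0, math.ceil(math.log2(max_abs))) if max_abs >= 1 else 0
    frac_bits = total_bits - 1 - int_bits
    scale = 2 ** frac_bits
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    return [max(lo, min(hi, round(v * scale))) / scale for v in values]
```

Choosing the format per layer (rather than one format for the whole network) is what the summary means by "layer-wise quantization": weight ranges differ widely across layers, so a shared format wastes precision somewhere.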
arXiv Detail & Related papers (2021-02-03T17:05:55Z)
- Direct Quantization for Training Highly Accurate Low Bit-width Deep Neural Networks [73.29587731448345]
This paper proposes two novel techniques to train deep convolutional neural networks with low bit-width weights and activations.
First, to obtain low bit-width weights, most existing methods obtain the quantized weights by performing quantization on the full-precision network weights.
Second, to obtain low bit-width activations, existing works consider all channels equally.
arXiv Detail & Related papers (2020-12-26T15:21:18Z)
- Filter Pre-Pruning for Improved Fine-tuning of Quantized Deep Neural Networks [0.0]
We propose a new pruning method called Pruning for Quantization (PfQ) which removes the filters that disturb the fine-tuning of the DNN.
Experiments using well-known models and datasets confirmed that the proposed method achieves higher performance with a similar model size.
arXiv Detail & Related papers (2020-11-13T04:12:54Z)
- Unrolling of Deep Graph Total Variation for Image Denoising [106.93258903150702]
In this paper, we combine classical graph signal filtering with deep feature learning into a competitive hybrid design.
We employ interpretable analytical low-pass graph filters and use 80% fewer network parameters than the state-of-the-art DL denoising scheme DnCNN.
arXiv Detail & Related papers (2020-10-21T20:04:22Z)
- Exploring Deep Hybrid Tensor-to-Vector Network Architectures for Regression Based Speech Enhancement [53.47564132861866]
We find that a hybrid architecture, namely CNN-TT, is capable of maintaining a good quality performance with a reduced model parameter size.
CNN-TT is composed of several convolutional layers at the bottom for feature extraction to improve speech quality.
arXiv Detail & Related papers (2020-07-25T22:21:05Z)
- CNN-Based Real-Time Parameter Tuning for Optimizing Denoising Filter Performance [2.876893463410366]
We propose a novel direction to improve the denoising quality of filtering-based denoising algorithms in real time.
We take the use case of BM3D, the state-of-the-art filtering-based denoising algorithm, to demonstrate and validate our approach.
arXiv Detail & Related papers (2020-01-20T03:46:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.