DCT-Conv: Coding filters in convolutional networks with Discrete Cosine
Transform
- URL: http://arxiv.org/abs/2001.08517v4
- Date: Tue, 7 Apr 2020 10:59:07 GMT
- Title: DCT-Conv: Coding filters in convolutional networks with Discrete Cosine
Transform
- Authors: Karol Chęciński, Paweł Wawrzyński
- Abstract summary: We analyze how switching off selected components of the spectra, thereby reducing the number of trained weights of the network, affects its performance.
Experiments show that coding the filters with trained DCT parameters leads to improvement over traditional convolution.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks are based on a huge number of trained weights.
Consequently, they are often data-hungry, prone to overfitting, and slow to
learn. We follow the line of research in which filters of convolutional neural
layers are determined on the basis of a smaller number of trained parameters.
In this paper, the trained parameters define a frequency spectrum which is
transformed into convolutional filters with Inverse Discrete Cosine Transform
(IDCT, the same transform used in JPEG decompression). We analyze how
switching off selected components of the spectra, thereby reducing the number
of trained weights of the network, affects its performance. Our experiments
show that coding the filters with trained DCT parameters leads to improvement
over traditional convolution. Moreover, the performance of networks modified in
this way degrades very slowly as an increasing fraction of these parameters is
switched off. In some experiments, good performance is observed even when 99.9%
of these parameters are switched off.
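To make the coding scheme concrete, here is a minimal PyTorch sketch (not the authors' implementation): the trainable parameters are a DCT spectrum per filter, a fixed mask switches off part of that spectrum, and a 2-D IDCT reconstructs the spatial filters before an ordinary convolution. The class name `DCTConv2d`, the `keep_ratio` knob, and the keep-the-low-frequencies masking rule are illustrative assumptions.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


def dct_matrix(n: int) -> torch.Tensor:
    """Orthonormal DCT-II matrix C[f, t]; forward DCT is C @ x, IDCT is C.T @ X."""
    f = torch.arange(n, dtype=torch.float32).unsqueeze(1)  # frequency index
    t = torch.arange(n, dtype=torch.float32).unsqueeze(0)  # spatial index
    C = math.sqrt(2.0 / n) * torch.cos(math.pi * (2 * t + 1) * f / (2 * n))
    C[0] /= math.sqrt(2.0)
    return C


class DCTConv2d(nn.Module):
    """Convolution whose k x k filters are coded in the DCT domain (sketch)."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3, keep_ratio: float = 0.5):
        super().__init__()
        # Trainable frequency spectrum: one k x k spectrum per (out, in) filter pair.
        self.spectrum = nn.Parameter(0.1 * torch.randn(out_ch, in_ch, k, k))
        self.register_buffer("C", dct_matrix(k))
        # Hypothetical masking rule: keep only the low-frequency corner;
        # coefficients where the mask is 0 are "switched off".
        idx = torch.arange(k)
        mask = (idx[:, None] + idx[None, :]) <= keep_ratio * 2 * (k - 1)
        self.register_buffer("mask", mask.float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 2-D IDCT of the masked spectrum: W = C^T S C for each filter.
        w = torch.einsum("uk,oiuv,vl->oikl", self.C, self.spectrum * self.mask, self.C)
        return F.conv2d(x, w, padding=w.shape[-1] // 2)
```

With this parameterization the masked coefficients receive zero gradient, so lowering `keep_ratio` shrinks the number of trained weights while the layer still produces full k x k filters.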
Related papers
- Group Orthogonalization Regularization For Vision Models Adaptation and
Robustness [31.43307762723943]
We propose a computationally efficient regularization technique that encourages orthonormality between groups of filters within the same layer.
Our experiments show that when incorporated into recent adaptation methods for diffusion models and vision transformers (ViTs), this regularization improves performance on downstream tasks.
arXiv Detail & Related papers (2023-06-16T17:53:16Z)
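A rough sketch of such a penalty, assuming the general recipe but none of the paper's specifics: flatten the layer's filters, split them into groups, and penalize each group's Gram matrix for deviating from the identity. The helper name and the squared Frobenius form are assumptions.

```python
import torch


def group_orthogonality_penalty(weight: torch.Tensor, num_groups: int) -> torch.Tensor:
    """Penalty encouraging orthonormal filters within each group (illustrative).

    weight: conv weight of shape (out_ch, in_ch, kH, kW); num_groups must
    divide out_ch. Not the paper's reference implementation.
    """
    out_ch = weight.shape[0]
    flat = weight.reshape(out_ch, -1)                        # one row per filter
    groups = flat.reshape(num_groups, out_ch // num_groups, -1)
    gram = torch.einsum("gif,gjf->gij", groups, groups)      # per-group Gram matrix
    eye = torch.eye(gram.shape[-1], device=weight.device)
    return ((gram - eye) ** 2).sum()
```

In training this would be added to the task loss, e.g. `loss = task_loss + lam * group_orthogonality_penalty(conv.weight, num_groups=4)`.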
- Rotation-Scale Equivariant Steerable Filters [1.213915839836187]
Digital histology imaging of biopsy tissue can be captured at arbitrary orientation and magnification and stored at different resolutions.
We propose the Rotation-Scale Equivariant Steerable Filter (RSESF), which incorporates steerable filters and scale-space theory.
Our method outperforms other approaches, with much fewer trainable parameters and fewer GPU resources required.
arXiv Detail & Related papers (2023-04-10T14:13:56Z)
- Complexity Reduction of Learned In-Loop Filtering in Video Coding [12.06039429078762]
In video coding, in-loop filters are applied to reconstructed video frames to enhance their perceptual quality before the frames are stored for output.
The proposed method uses a novel combination of sparsity and structured pruning for complexity reduction of learned in-loop filters.
arXiv Detail & Related papers (2022-03-16T14:34:41Z)
- Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
arXiv Detail & Related papers (2021-12-02T12:04:59Z)
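The summary leaves the exact criterion open; one common BN-based proxy, sketched below under that assumption, ranks filters by the absolute value of the BN scale gamma of the layer following each convolution (the paper may use a different combination of the BN parameters).

```python
import torch
import torch.nn as nn


def bn_filter_importance(bn: nn.BatchNorm2d) -> torch.Tensor:
    # Assumption: |gamma| as the importance of the corresponding conv filter.
    return bn.weight.detach().abs()


def filters_to_prune(bn: nn.BatchNorm2d, prune_ratio: float) -> torch.Tensor:
    importance = bn_filter_importance(bn)
    k = int(prune_ratio * importance.numel())
    return torch.argsort(importance)[:k]  # indices of the least important filters
```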
- Variational Autoencoders: A Harmonic Perspective [79.49579654743341]
We study Variational Autoencoders (VAEs) from the perspective of harmonic analysis.
We show that the encoder variance of a VAE controls the frequency content of the functions parameterised by the VAE encoder and decoder neural networks.
arXiv Detail & Related papers (2021-05-31T10:39:25Z)
- FILTRA: Rethinking Steerable CNN by Filter Transform [59.412570807426135]
Steerable CNNs have been studied from the perspective of group representation theory.
We show that kernels constructed by filter transform can also be interpreted in group representation theory.
This interpretation helps complete the puzzle of steerable CNN theory and provides a novel, simple approach to implementing steerable convolution operators.
arXiv Detail & Related papers (2021-05-25T03:32:34Z)
- Decoupled Dynamic Filter Networks [85.38058820176047]
We propose the Decoupled Dynamic Filter (DDF), which can simultaneously tackle the content-agnostic nature of standard convolution and the heavy computation of dynamic filters.
Inspired by recent advances in attention, DDF decouples a depth-wise dynamic filter into spatial and channel dynamic filters.
We observe a significant boost in performance when replacing standard convolution with DDF in classification networks.
arXiv Detail & Related papers (2021-04-29T04:55:33Z)
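A compact sketch of that decoupling; the branch designs here (a 1x1 convolution predicting the per-pixel spatial filter, pooled features plus a linear map predicting the per-channel filter) are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DecoupledDynamicFilter(nn.Module):
    """DDF-style depth-wise dynamic filtering (illustrative sketch)."""

    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.k = k
        self.spatial = nn.Conv2d(channels, k * k, kernel_size=1)  # k*k filter per pixel
        self.channel = nn.Linear(channels, channels * k * k)      # k*k filter per channel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        k2 = self.k * self.k
        # Extract the k x k neighborhood of every pixel: (b, c, k2, h*w).
        patches = F.unfold(x, self.k, padding=self.k // 2).view(b, c, k2, h * w)
        sf = self.spatial(x).view(b, 1, k2, h * w)                # spatial branch
        cf = self.channel(x.mean(dim=(2, 3))).view(b, c, k2, 1)   # channel branch
        # The effective filter at each (pixel, channel) is the product sf * cf.
        return (patches * sf * cf).sum(dim=2).view(b, c, h, w)
```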
- Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of neural network compression in the classification setting, and of both compression and improved training stability in the generative adversarial setting.
arXiv Detail & Related papers (2021-03-07T00:15:44Z)
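A full tensor-train layer is too long to sketch here, but its core ingredient, replacing a dense weight matrix by a low-rank product, fits in a few lines. TT parameterizations chain such factors into small cores, and the paper further embeds spectral (singular-value) constraints; both are omitted from this simplification.

```python
import torch
import torch.nn as nn


class LowRankLinear(nn.Module):
    """Rank-r parameterization W = U @ V of an (out, in) weight (illustrative)."""

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        # rank * (in + out) parameters instead of in * out.
        self.U = nn.Parameter(torch.randn(out_features, rank) / rank ** 0.5)
        self.V = nn.Parameter(torch.randn(rank, in_features) / in_features ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ (self.U @ self.V).t()
```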
- The Neural Tangent Link Between CNN Denoisers and Non-Local Filters [4.254099382808598]
Convolutional Neural Networks (CNNs) are now a well-established tool for solving computational imaging problems.
We introduce a formal link between such networks, through their neural tangent kernel (NTK), and well-known non-local filtering techniques.
We evaluate our findings via extensive image denoising experiments.
arXiv Detail & Related papers (2020-06-03T16:50:54Z)
- Computational optimization of convolutional neural networks using separated filters architecture [69.73393478582027]
Convolutional neural networks (CNNs) are the standard approach to image recognition, despite the fact that they can be computationally demanding.
We consider a CNN transformation that reduces computational complexity and thus speeds up network processing.
arXiv Detail & Related papers (2020-02-18T17:42:13Z)
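"Separated filters" usually refers to approximating a k x k convolution with a k x 1 pass followed by a 1 x k pass; assuming that reading, a minimal sketch:

```python
import torch.nn as nn


def separated_conv(in_ch: int, out_ch: int, k: int = 3) -> nn.Sequential:
    """k x 1 followed by 1 x k: 2k weights per filter instead of k*k.

    Exact only for rank-1 kernels; this is the usual accuracy/speed
    trade-off of separated-filter architectures.
    """
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=(k, 1), padding=(k // 2, 0)),
        nn.Conv2d(out_ch, out_ch, kernel_size=(1, k), padding=(0, k // 2)),
    )
```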
This list is automatically generated from the titles and abstracts of the papers on this site.