CONet: Channel Optimization for Convolutional Neural Networks
- URL: http://arxiv.org/abs/2108.06822v1
- Date: Sun, 15 Aug 2021 21:48:25 GMT
- Title: CONet: Channel Optimization for Convolutional Neural Networks
- Authors: Mahdi S. Hosseini, Jia Shu Zhang, Zhe Liu, Andre Fu, Jingxuan Su,
Mathieu Tuli and Konstantinos N. Plataniotis
- Abstract summary: We study channel size optimization in convolutional neural networks (CNN)
We introduce an efficient dynamic scaling algorithm -- CONet -- that automatically optimizes channel sizes across network layers for a given CNN.
We conduct experiments on CIFAR10/100 and ImageNet datasets and show that CONet can find efficient and accurate architectures searched in ResNet, DARTS, and DARTS+ spaces.
- Score: 33.58529066005248
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Neural Architecture Search (NAS) has shifted network design from using human
intuition to leveraging search algorithms guided by evaluation metrics. We
study channel size optimization in convolutional neural networks (CNN) and
identify the role it plays in model accuracy and complexity. Current channel
size selection methods are generally limited by discrete sample spaces while
suffering from manual iteration and simple heuristics. To solve this, we
introduce an efficient dynamic scaling algorithm -- CONet -- that automatically
optimizes channel sizes across network layers for a given CNN. Two metrics --
"Rank" and "Rank Average Slope" -- are introduced to
identify the information accumulated in training. The algorithm dynamically
scales channel sizes up or down over a fixed searching phase. We conduct
experiments on CIFAR10/100 and ImageNet datasets and show that CONet can find
efficient and accurate architectures searched in ResNet, DARTS, and DARTS+
spaces that outperform their baseline models.
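Both metrics are computed from the evolving weight tensors during a short probe of training. Below is a minimal sketch of one searching phase; the effective-rank proxy, the slope threshold, and the fixed channel step are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def effective_rank(weight, tol=1e-3):
    """Fraction of normalized singular values above tol: a cheap proxy
    for the paper's 'Rank' metric on one layer's weights."""
    mat = weight.reshape(weight.shape[0], -1)   # unfold (out_ch, in_ch, k, k) -> 2-D
    s = np.linalg.svd(mat, compute_uv=False)
    return float((s / s.max() > tol).sum()) / len(s)

def conet_search_phase(weights_per_epoch, channels, slope_tol=0.01, step=16):
    """One fixed searching phase: widen layers whose rank keeps rising,
    narrow layers whose rank is flat or collapsing."""
    epochs = np.arange(len(weights_per_epoch))
    ranks = np.array([[effective_rank(w) for w in epoch_weights]
                      for epoch_weights in weights_per_epoch])
    for i in range(len(channels)):
        slope = np.polyfit(epochs, ranks[:, i], 1)[0]   # 'Rank Average Slope' analogue
        if slope > slope_tol:
            channels[i] += step                          # still accumulating information
        elif slope < -slope_tol:
            channels[i] = max(step, channels[i] - step)  # saturated: shrink
    return channels
```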
Related papers
- Testing the Channels of Convolutional Neural Networks [8.927538538637783]
We propose techniques for testing the channels of convolutional neural networks (CNNs).
We design FtGAN, an extension to GAN that can generate test data by varying the intensity of a channel of a target CNN.
We also propose a channel selection algorithm to find representative channels for testing.
arXiv Detail & Related papers (2023-03-06T09:58:39Z)
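For the FtGAN entry above, a toy probe of the same underlying idea: rescale one channel's activation and watch the prediction shift. FtGAN itself synthesizes such variations with a GAN; this sketch only uses a forward hook, and the model, layer, and channel index are arbitrary choices for the example.

```python
import torch
from torchvision import models

def channel_intensity_probe(model, layer, channel, x, scales=(0.5, 1.0, 2.0)):
    """Rescale one channel's activation and record the resulting prediction."""
    preds = {}
    for s in scales:
        def hook(module, inputs, output, s=s):
            output = output.clone()
            output[:, channel] *= s      # amplify/dampen the target channel only
            return output
        handle = layer.register_forward_hook(hook)
        with torch.no_grad():
            preds[s] = model(x).argmax(dim=1).item()
        handle.remove()
    return preds

# Hypothetical usage on an untrained ResNet-18, channel 42 of the last stage:
model = models.resnet18(weights=None).eval()
print(channel_intensity_probe(model, model.layer4, 42, torch.randn(1, 3, 224, 224)))
```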
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
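For the Variable Bitrate Neural Fields entry, a minimal sketch of learning discrete codebook indices end-to-end when only a downstream loss is available, via a softmax-relaxed lookup with a straight-through estimator. The class name and all sizes are hypothetical; the paper's actual formulation may differ.

```python
import torch
import torch.nn as nn

class VQFeatureGrid(nn.Module):
    """Sketch of a vector-quantized feature grid: every cell stores only an
    index into a small shared codebook, so memory ~ n_cells * log2(codebook)."""

    def __init__(self, n_cells=4096, codebook_size=64, feat_dim=16):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(codebook_size, feat_dim))
        self.logits = nn.Parameter(torch.zeros(n_cells, codebook_size))

    def forward(self, cell_ids):
        soft = self.logits[cell_ids].softmax(dim=-1)
        hard = torch.zeros_like(soft).scatter_(-1, soft.argmax(-1, keepdim=True), 1.0)
        onehot = hard + soft - soft.detach()   # straight-through estimator
        return onehot @ self.codebook          # decoded per-cell features

grid = VQFeatureGrid()
feats = grid(torch.tensor([0, 17, 4095]))      # (3, 16) features, trainable end-to-end
```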
- CONetV2: Efficient Auto-Channel Size Optimization for CNNs [35.951376988552695]
This work introduces a method that is efficient in computationally constrained environments by examining the micro-search space of channel size.
In tackling channel-size optimization, we design an automated algorithm to extract the dependencies between connected layers of the network.
We also introduce a novel metric that highly correlates with test accuracy and enables analysis of individual network layers.
arXiv Detail & Related papers (2021-10-13T16:17:19Z)
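For CONetV2, the dependency extraction can be pictured as grouping layers whose channel sizes are coupled (e.g., outputs summed by a residual connection must match). A union-find sketch of that bookkeeping, not the paper's actual algorithm:

```python
class ChannelGroups:
    """Union-find over layers: layers whose outputs are added together
    (e.g., by a residual connection) must share one channel size."""

    def __init__(self, n_layers):
        self.parent = list(range(n_layers))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]   # path compression
            i = self.parent[i]
        return i

    def tie(self, i, j):        # record that layers i and j are channel-coupled
        self.parent[self.find(i)] = self.find(j)

    def groups(self):
        out = {}
        for i in range(len(self.parent)):
            out.setdefault(self.find(i), []).append(i)
        return list(out.values())

# A residual block: conv2's output is added to the shortcut conv's output,
# so their channel counts are tied and must be scaled together.
g = ChannelGroups(3)            # layers: conv1, conv2, shortcut
g.tie(1, 2)
print(g.groups())               # [[0], [1, 2]]
```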
- Group Fisher Pruning for Practical Network Compression [58.25776612812883]
We present a general channel pruning approach that can be applied to various complicated structures.
We derive a unified metric based on Fisher information to evaluate the importance of a single channel and coupled channels.
Our method can be used to prune any structures including those with coupled channels.
arXiv Detail & Related papers (2021-08-02T08:21:44Z)
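For Group Fisher Pruning, a simplified single-channel version of a Fisher-style importance score: the squared inner product of a channel's activations with their gradients, accumulated over data. The paper's unified metric additionally handles coupled channels by aggregating scores across a group, which this sketch omits.

```python
import torch

def fisher_channel_importance(model, layer, loss_fn, loader):
    """Accumulate a Fisher-style score per output channel of `layer`."""
    cache = {}
    def keep(module, inputs, output):
        output.retain_grad()       # non-leaf tensor: ask autograd to keep its grad
        cache["act"] = output
    handle = layer.register_forward_hook(keep)
    score = None
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        a, g = cache["act"], cache["act"].grad          # both (B, C, H, W)
        s = ((a * g).sum(dim=(0, 2, 3)) ** 2).detach()  # one scalar per channel
        score = s if score is None else score + s
    handle.remove()
    return score                    # low score => channel is a pruning candidate
```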
- Genetic U-Net: Automatically Designed Deep Networks for Retinal Vessel Segmentation Using a Genetic Algorithm [2.6629444004809826]
Genetic U-Net is proposed to generate a U-shaped convolutional neural network (CNN) that can achieve better retinal vessel segmentation but with fewer architecture-based parameters.
The experimental results show that the architecture obtained with the proposed method achieves superior performance while using less than 1% of the original U-Net's parameters.
arXiv Detail & Related papers (2020-10-29T13:31:36Z)
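The search driver in Genetic U-Net is a genetic algorithm over encoded architectures. A generic sketch of such a loop (one-point crossover, bit-flip mutation, elitism), with a toy fitness standing in for "accuracy per parameter" -- not the paper's actual encoding:

```python
import random

def genetic_search(fitness, genome_len=32, pop_size=20, generations=30,
                   mutate_p=0.05, elite=2):
    """Evolve bitstring genomes, each decoding to one candidate architecture."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:elite]                                  # elitism: keep the best
        while len(nxt) < pop_size:
            p1, p2 = random.sample(scored[:pop_size // 2], 2) # select from top half
            cut = random.randrange(1, genome_len)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < mutate_p) for b in child]  # bit flips
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: prefer sparse genomes (stand-in for "accurate but few parameters").
best = genetic_search(lambda g: -sum(g))
```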
- AutoPruning for Deep Neural Network with Dynamic Channel Masking [28.018077874687343]
We propose a learning-based auto-pruning algorithm for deep neural networks.
A two-objective problem that seeks the best weights and the best channels for each layer is first formulated.
An alternating optimization approach is then proposed to derive the optimal channel numbers and weights simultaneously.
arXiv Detail & Related papers (2020-10-22T20:12:46Z)
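One common way to realize the dynamic channel masking described above is a learnable per-channel gate trained jointly with the weights under a sparsity penalty. A hedged sketch of that idea, not the paper's exact two-objective scheme:

```python
import torch
import torch.nn as nn

class MaskedConv(nn.Module):
    """Conv layer with a learnable per-channel gate: channels whose gates
    fall toward zero during training can be pruned afterwards."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.gate_logits = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        mask = torch.sigmoid(self.gate_logits).view(1, -1, 1, 1)
        return self.conv(x) * mask

    def surviving_channels(self, threshold=0.5):
        return (torch.sigmoid(self.gate_logits) > threshold).nonzero().flatten()

# A sparsity penalty on the gates pushes unneeded channels toward zero:
layer = MaskedConv(3, 64)
out = layer(torch.randn(2, 3, 32, 32))
l1_penalty = torch.sigmoid(layer.gate_logits).sum()   # add this to the task loss
```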
- DC-NAS: Divide-and-Conquer Neural Architecture Search [108.57785531758076]
We present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures.
We achieve a 75.1% top-1 accuracy on the ImageNet dataset, which is higher than that of state-of-the-art methods using the same search space.
arXiv Detail & Related papers (2020-05-29T09:02:16Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
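Treating accuracy as a function of FLOPs, as Network Adjustment does, means every channel change is checked against a FLOPs account. A toy sketch of that accounting; `utilization` here is a hypothetical per-layer score standing in for the paper's FLOPs utilization ratio:

```python
def conv_flops(c_in, c_out, k, h_out, w_out):
    """Multiply-accumulates of one convolution (bias and activation ignored)."""
    return c_in * c_out * k * k * h_out * w_out

def adjust_channels(channels, utilization, step=8):
    """Move `step` channels from the least- to the most-utilized layer, so width
    is reallocated rather than simply grown; a real implementation would
    re-check the FLOPs budget after each move."""
    lo = min(range(len(channels)), key=lambda i: utilization[i])
    hi = max(range(len(channels)), key=lambda i: utilization[i])
    channels[lo] = max(step, channels[lo] - step)
    channels[hi] += step
    return channels

# Example: three conv layers, 3x3 kernels, 32x32 output feature maps.
channels = adjust_channels([64, 128, 256], utilization=[0.3, 0.9, 0.6])
total = sum(conv_flops(c, c, 3, 32, 32) for c in channels)
print(channels, f"{total / 1e9:.2f} GMACs")
```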
- Channel Assignment in Uplink Wireless Communication using Machine Learning Approach [54.012791474906514]
This letter investigates a channel assignment problem in uplink wireless communication systems.
Our goal is to maximize the sum rate of all users subject to integer channel assignment constraints.
Due to the high computational complexity, machine learning approaches are employed to obtain computationally efficient solutions.
arXiv Detail & Related papers (2020-01-12T15:54:20Z)
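The objective in that letter is simple to state but combinatorial to optimize, which is what motivates a learned solver. A sketch of a sum-rate objective and the brute-force baseline it replaces; the single-power, unit-noise model is a simplifying assumption:

```python
import numpy as np
from itertools import permutations

def sum_rate(gains, assignment, power=1.0, noise=1.0):
    """Sum rate (bits/s/Hz) when user u transmits on channel assignment[u];
    gains[u, c] is the channel gain of user u on channel c."""
    snr = power * gains[np.arange(len(assignment)), assignment] / noise
    return np.log2(1.0 + snr).sum()

# Exhaustive search over one-to-one assignments is factorial in the user count,
# hence the appeal of a learned approximation for larger systems.
gains = np.random.rand(4, 4)
best = max(permutations(range(4)), key=lambda a: sum_rate(gains, np.array(a)))
```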
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
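For context on that last entry, "minimax optimal" for Hölder targets refers to the classical nonparametric benchmark below (up to constants and log factors); the precise assumptions are those of the paper, not this note.

```latex
% Classical minimax benchmark for estimating a \beta-H\"older function on [0,1]^d
% from n samples (squared L^2 risk), which the paper shows ResNet-type CNNs attain:
\[
  \inf_{\hat f}\;\sup_{f_0 \in \mathcal{H}^{\beta}([0,1]^d)}
  \mathbb{E}\bigl\|\hat f - f_0\bigr\|_{L^2}^2
  \;\asymp\; n^{-\frac{2\beta}{2\beta + d}} .
\]
```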