A Genetic Algorithm based Kernel-size Selection Approach for a
Multi-column Convolutional Neural Network
- URL: http://arxiv.org/abs/1912.12405v2
- Date: Mon, 16 Mar 2020 17:06:44 GMT
- Title: A Genetic Algorithm based Kernel-size Selection Approach for a
Multi-column Convolutional Neural Network
- Authors: Animesh Singh, Sandip Saha, Ritesh Sarkhel, Mahantapas Kundu, Mita
Nasipuri, Nibaran Das
- Abstract summary: We introduce a genetic algorithm-based technique to reduce the effort of finding the optimal combination of a hyper-parameter (kernel size) of a convolutional neural network-based architecture.
The method is evaluated on three popular datasets of different handwritten Bangla characters and digits.
- Score: 11.040847116812046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural network-based architectures give promising results in various
domains, including pattern recognition. However, identifying the optimal
combination of hyper-parameters, such as the kernel size, for such a
large architecture is a challenging and tedious task that requires a large
number of laboratory experiments. Here, we introduce a genetic algorithm-based
technique to reduce the effort of finding the optimal combination of a
hyper-parameter (kernel size) of a convolutional neural network-based
architecture. The method is evaluated on three popular datasets of handwritten
Bangla characters and digits.
The implementation of the proposed methodology can be found in the following
link: https://github.com/DeepQn/GA-Based-Kernel-Size.
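Based only on the abstract (the linked repository holds the actual implementation), a minimal sketch of the idea might look like the following. The kernel-size choices, the number of columns, and the placeholder fitness function are all assumptions for illustration; in the real method, fitness would be the validation accuracy of a CNN trained with the candidate kernel sizes.

```python
import random

# Hypothetical sketch: a genetic algorithm searching for a good combination of
# kernel sizes, one per column of a multi-column CNN. The fitness function is a
# stand-in; the paper's method would train a CNN with the candidate kernel
# sizes and use its validation accuracy instead.

KERNEL_CHOICES = [3, 5, 7, 9]   # candidate kernel sizes (assumed)
NUM_COLUMNS = 3                 # columns in the multi-column CNN (assumed)

def fitness(genome):
    # Placeholder objective: pretend kernels of size 5 work best on this task.
    return -sum((k - 5) ** 2 for k in genome)

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randint(1, NUM_COLUMNS - 1)
    return a[:point] + b[point:]

def mutate(genome, rate=0.2):
    # Randomly resample each gene (kernel size) with probability `rate`.
    return [random.choice(KERNEL_CHOICES) if random.random() < rate else k
            for k in genome]

def evolve(pop_size=20, generations=30, seed=0):
    random.seed(seed)
    population = [[random.choice(KERNEL_CHOICES) for _ in range(NUM_COLUMNS)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best)  # with this toy fitness, the search tends toward [5, 5, 5]
```

The payoff of this approach is that each generation evaluates only a small population of kernel-size combinations instead of exhaustively training one network per combination.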
Related papers
- Deep learning for the design of non-Hermitian topolectrical circuits [8.960003862907877]
We introduce several deep learning algorithms, based on the multi-layer perceptron (MLP) and the convolutional neural network (CNN), to predict the winding of eigenvalues of non-Hermitian Hamiltonians.
Our results demonstrate the effectiveness of the deep learning network in capturing the global topological characteristics of a non-Hermitian system based on training data.
arXiv Detail & Related papers (2024-02-15T14:41:55Z)
- HKNAS: Classification of Hyperspectral Imagery Based on Hyper Kernel Neural Architecture Search [104.45426861115972]
We propose to directly generate structural parameters by utilizing the specifically designed hyper kernels.
We obtain three kinds of networks to separately conduct pixel-level or image-level classifications with 1-D or 3-D convolutions.
A series of experiments on six public datasets demonstrate that the proposed methods achieve state-of-the-art results.
arXiv Detail & Related papers (2023-04-23T17:27:40Z)
- Cell nuclei classification in histopathological images using hybrid OLConvNet [13.858624044986815]
We have proposed a hybrid and flexible deep learning architecture, OLConvNet.
$CNN_3L$ reduces the training time by training fewer parameters.
We observed that our proposed model works well and performs better than contemporary complex algorithms.
arXiv Detail & Related papers (2022-02-21T12:39:37Z)
- Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z)
- Universality and Optimality of Structured Deep Kernel Networks [0.0]
Kernel based methods yield approximation models that are flexible, efficient and powerful.
The recent success of machine learning methods has been driven by deep neural networks (NNs).
In this paper, we show that the use of special types of kernels yields models reminiscent of neural networks.
arXiv Detail & Related papers (2021-05-15T14:10:35Z)
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction of the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions achieving comparable error bounds, both in theory and in practice.
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
- Differentiable Neural Architecture Learning for Efficient Neural Network Design [31.23038136038325]
We introduce a novel architecture parameterisation based on a scaled sigmoid function.
We then propose a general Differentiable Neural Architecture Learning (DNAL) method to optimize the neural architecture without the need to evaluate candidate neural networks.
arXiv Detail & Related papers (2021-03-03T02:03:08Z)
- Deep Representational Similarity Learning for analyzing neural signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA).
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
arXiv Detail & Related papers (2020-09-28T18:30:14Z)
- NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis [53.106414896248246]
We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge.
Applying this technique in an iterative manner allows analysts to converge to the best performing neural network architecture for a given application.
arXiv Detail & Related papers (2020-09-28T01:48:45Z)
- DC-NAS: Divide-and-Conquer Neural Architecture Search [108.57785531758076]
We present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures.
We achieve a 75.1% top-1 accuracy on the ImageNet dataset, which is higher than that of state-of-the-art methods using the same search space.
arXiv Detail & Related papers (2020-05-29T09:02:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.