Deep Neural-Kernel Machines
- URL: http://arxiv.org/abs/2007.06655v2
- Date: Sun, 19 Jul 2020 11:03:51 GMT
- Title: Deep Neural-Kernel Machines
- Authors: Siamak Mehrkanoon
- Abstract summary: In this chapter we review the main literature related to the recent advancement of deep neural-kernel architecture.
We introduce a neural-kernel architecture that serves as the core module for deeper models equipped with different pooling layers.
In particular, we review three neural-kernel machines with average, maxout and convolutional pooling layers.
- Score: 4.213427823201119
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this chapter we review the main literature related to the recent
advancement of the deep neural-kernel architecture, an approach that seeks the
synergy between two powerful classes of models, i.e. kernel-based models and
artificial neural networks. The introduced deep neural-kernel framework is
a hybridization of the neural network architecture and a kernel machine. More
precisely, the kernel counterpart of the model is based on Least Squares
Support Vector Machines with an explicit feature mapping. Here we discuss
the use of one form of explicit feature map obtained by random Fourier
features. Thanks to this explicit feature map, on the one hand bridging the two
architectures becomes more straightforward, and on the other hand the solution
of the associated optimization problem can be found in the primal, making the
model scalable to large-scale datasets. We begin by introducing a neural-kernel
architecture that serves as the core module for deeper models equipped with
different pooling layers. In particular, we review three neural-kernel machines
with average, maxout and convolutional pooling layers. In the average pooling
layer, the outputs of the previous representation layers are averaged. The
maxout layer triggers competition among different input representations and
allows the formation of multiple sub-networks within the same model. The
convolutional pooling layer reduces the dimensionality of the multi-scale
output representations. Comparisons with the neural-kernel model, kernel-based
models and classical neural network architectures are made, and the numerical
experiments illustrate the effectiveness of the introduced models on several
benchmark datasets.
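The explicit random Fourier feature map and the primal least-squares solve mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: names such as `rff_map`, `sample_rff_params`, `n_features`, `gamma` and the regularization `lam` are assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rff_params(d, n_features, gamma, rng):
    # Frequencies drawn from the RBF kernel's spectral density, phases uniform.
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return W, b

def rff_map(X, W, b):
    """Explicit random Fourier feature map approximating the RBF kernel:
    z(x) = sqrt(2/D) * cos(x @ W + b), so that z(x) . z(x') is approximately
    exp(-gamma * ||x - x'||^2)."""
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Toy 1-D regression problem solved in the primal (ridge / LS-SVM style).
X = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
y = np.sin(2.0 * X[:, 0])

W, b = sample_rff_params(d=1, n_features=500, gamma=1.0, rng=rng)
Z = rff_map(X, W, b)

lam = 1e-6  # regularization parameter (assumed value for this sketch)
# Normal equations in the primal: (Z^T Z + lam * I) w = Z^T y
w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
train_mse = np.mean((Z @ w - y) ** 2)
```

Because the feature map is explicit, the linear system above is D x D (here 500 x 500) regardless of the number of training points, which is what makes the primal formulation scale to large datasets.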
Related papers
- Discovering Physics-Informed Neural Networks Model for Solving Partial Differential Equations through Evolutionary Computation [5.8407437499182935]
This article proposes an evolutionary computation method aimed at discovering the PINNs model with higher approximation accuracy and faster convergence rate.
In experiments, the performance of different models that are searched through Bayesian optimization, random search and evolution is compared in solving Klein-Gordon, Burgers, and Lamé equations.
arXiv Detail & Related papers (2024-05-18T07:32:02Z) - Unveiling the Unseen: Identifiable Clusters in Trained Depthwise
Convolutional Kernels [56.69755544814834]
Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures.
This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers.
arXiv Detail & Related papers (2024-01-25T19:05:53Z) - Layer-wise Linear Mode Connectivity [52.6945036534469]
Averaging neural network parameters is an intuitive method for combining the knowledge of two independent models.
It is most prominently used in federated learning.
We analyse the performance of the models that result from averaging single layers, or groups of layers.
arXiv Detail & Related papers (2023-07-13T09:39:10Z) - Multigrid-Augmented Deep Learning Preconditioners for the Helmholtz
Equation using Compact Implicit Layers [7.56372030029358]
We present a deep learning-based iterative approach to solve the discrete heterogeneous Helmholtz equation for high wavenumbers.
We construct a multilevel U-Net-like encoder-solver CNN with an implicit layer on the coarsest grid of the U-Net, where convolution kernels are inverted.
Our architecture can be used to generalize over different slowness models of various difficulties and is efficient at solving for many right-hand sides per slowness model.
arXiv Detail & Related papers (2023-06-30T08:56:51Z) - Deep Dependency Networks for Multi-Label Classification [24.24496964886951]
We show that the performance of previous approaches that combine Markov Random Fields with neural networks can be modestly improved.
We propose a new modeling framework called deep dependency networks, which augments a dependency network.
Despite its simplicity, jointly learning this new architecture yields significant improvements in performance.
arXiv Detail & Related papers (2023-02-01T17:52:40Z) - NAR-Former: Neural Architecture Representation Learning towards Holistic
Attributes Prediction [37.357949900603295]
We propose a neural architecture representation model that can be used to estimate attributes holistically.
Experiment results show that our proposed framework can be used to predict the latency and accuracy attributes of both cell architectures and whole deep neural networks.
arXiv Detail & Related papers (2022-11-15T10:15:21Z) - Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z) - Differentiable Neural Architecture Learning for Efficient Neural Network
Design [31.23038136038325]
We introduce a novel architecture parameterisation based on the scaled sigmoid function.
We then propose a general Differentiable Neural Architecture Learning (DNAL) method to optimize the neural architecture without the need to evaluate candidate neural networks.
arXiv Detail & Related papers (2021-03-03T02:03:08Z) - Generalized Leverage Score Sampling for Neural Networks [82.95180314408205]
Leverage score sampling is a powerful technique that originates from theoretical computer science.
In this work, we generalize the results in [Avron, Kapralov, Musco, Musco, Velingker and Zandieh 17] to a broader class of kernels.
arXiv Detail & Related papers (2020-09-21T14:46:01Z) - A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z) - Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
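As a minimal illustration of the parameter-averaging idea discussed in the Layer-wise Linear Mode Connectivity entry above, the sketch below averages models stored as dicts of arrays, either all layers or only a selected group. The function name `average_params` and the dict-of-arrays model format are assumptions for this sketch, not that paper's API.

```python
import numpy as np

def average_params(models, layers=None):
    """Average a list of models given as {layer_name: ndarray} dicts.
    If `layers` is given, only those layers are averaged (layer-wise
    averaging); the remaining layers are copied from the first model."""
    out = {k: v.copy() for k, v in models[0].items()}
    for k in (layers if layers is not None else list(out)):
        out[k] = np.mean([m[k] for m in models], axis=0)
    return out

m1 = {"fc1": np.array([0.0, 2.0]), "fc2": np.array([1.0])}
m2 = {"fc1": np.array([2.0, 4.0]), "fc2": np.array([3.0])}

full_avg = average_params([m1, m2])                 # average every layer
partial = average_params([m1, m2], layers=["fc1"])  # average only fc1
```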
This list is automatically generated from the titles and abstracts of the papers in this site.