Multiple Kernel Representation Learning on Networks
- URL: http://arxiv.org/abs/2106.05057v1
- Date: Wed, 9 Jun 2021 13:22:26 GMT
- Title: Multiple Kernel Representation Learning on Networks
- Authors: Abdulkadir Celikkanat and Yanning Shen and Fragkiskos D. Malliaros
- Abstract summary: We propose a weighted matrix factorization model that encodes random walk-based information about nodes of the network.
We extend the approach with a multiple kernel learning formulation that provides the flexibility of learning the kernel as a linear combination of a dictionary of kernels.
- Score: 12.106994960669924
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning representations of nodes in a low dimensional space is a crucial
task with numerous interesting applications in network analysis, including link
prediction, node classification, and visualization. Two popular approaches for
this problem are matrix factorization and random walk-based models. In this
paper, we aim to bring together the best of both worlds, towards learning node
representations. In particular, we propose a weighted matrix factorization
model that encodes random walk-based information about nodes of the network.
The benefit of this novel formulation is that it enables us to utilize kernel
functions without explicitly forming the exact proximity matrix, which both
enhances the expressiveness of existing matrix decomposition methods with
kernels and alleviates their computational complexity. We extend the approach
with a multiple kernel learning formulation that provides the flexibility of
learning the kernel as a linear combination of a dictionary of kernels in a
data-driven fashion. We perform an empirical evaluation on real-world
networks, showing
that the proposed model outperforms baseline node embedding algorithms in
downstream machine learning tasks.
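As a concrete illustration, the following is a minimal numpy sketch of the core idea: node and context embeddings are fitted so that a learned convex combination of Gaussian kernels over embedding distances matches a random-walk proximity matrix. The toy graph, the kernel dictionary, and the unweighted mean squared loss (the paper's model is a weighted factorization) are all illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": random symmetric adjacency; M holds averaged 1- and 2-step
# random-walk transition probabilities (a stand-in for the paper's
# random-walk statistics).
n, d = 30, 8
Adj = (rng.random((n, n)) < 0.15).astype(float)
Adj = np.maximum(Adj, Adj.T)
np.fill_diagonal(Adj, 0)
P = Adj / np.maximum(Adj.sum(1, keepdims=True), 1)
M = (P + P @ P) / 2.0

# Node/context embeddings and a small dictionary of Gaussian kernels;
# the model fits M_ij ~ sum_c w_c * exp(-gamma_c * ||e_i - c_j||^2).
E = rng.normal(scale=0.1, size=(n, d))
C = rng.normal(scale=0.1, size=(n, d))
gammas = np.array([0.5, 1.0, 2.0])      # kernel dictionary (assumed)
w = np.ones(len(gammas)) / len(gammas)  # mixture weights, learned

lr_emb, lr_w = 1.0, 0.02
for step in range(1001):
    D = ((E[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # squared distances
    Ks = np.exp(-gammas[:, None, None] * D)             # (c, n, n) stack
    R = M - np.tensordot(w, Ks, axes=1)                 # residual
    # Gradients of the mean squared loss (derived for Gaussian kernels).
    gw = -2 * (R[None] * Ks).mean(axis=(1, 2))
    T = R * np.tensordot(w * gammas, Ks, axes=1)
    gE = 4 * (T.sum(1, keepdims=True) * E - T @ C) / n**2
    gC = -4 * (T.T @ E - T.sum(0)[:, None] * C) / n**2
    E -= lr_emb * gE
    C -= lr_emb * gC
    w = np.clip(w - lr_w * gw, 1e-6, None)
    w /= w.sum()                                        # keep a convex mix
    if step % 250 == 0:
        print(f"step {step:4d}  loss {(R ** 2).mean():.5f}  w {np.round(w, 3)}")
```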
Related papers
- Fast and Scalable Multi-Kernel Encoder Classifier [4.178980693837599]
The proposed method facilitates fast and scalable kernel matrix embedding, and seamlessly integrates multiple kernels to enhance the learning process.
Our theoretical analysis offers a population-level characterization of this approach using random variables.
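As a rough sketch of the general recipe (not the paper's actual estimator), one can average a small dictionary of RBF kernel matrices, embed via the top eigenvectors of the combined matrix, and fit a linear classifier on the embedding; the bandwidths and the uniform combination below are assumed choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class data; bandwidths and the uniform kernel average are
# illustrative choices, not the paper's construction.
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)

def rbf(X, gamma):
    D = ((X[:, None] - X[None]) ** 2).sum(-1)
    return np.exp(-gamma * D)

# Combine a small kernel dictionary, then embed with the top eigenvectors of
# the combined kernel matrix (a kernel-PCA-style spectral embedding).
K = sum(rbf(X, g) for g in (0.1, 0.5, 1.0)) / 3.0
vals, vecs = np.linalg.eigh(K)
Z = vecs[:, -3:] * np.sqrt(np.maximum(vals[-3:], 0.0))   # 3-dim embedding

# Least-squares linear classifier on the embedding.
Zb = np.hstack([Z, np.ones((len(Z), 1))])
wts, *_ = np.linalg.lstsq(Zb, 2.0 * y - 1.0, rcond=None)
print("train accuracy:", ((Zb @ wts > 0).astype(int) == y).mean())
```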
arXiv Detail & Related papers (2024-06-04T10:34:40Z)
- The Decimation Scheme for Symmetric Matrix Factorization [0.0]
Matrix factorization is an inference problem that has acquired importance due to its vast range of applications.
We study this extensive-rank problem, extending the alternative 'decimation' procedure that we recently introduced.
We introduce a simple algorithm based on a ground state search that implements decimation and performs matrix factorization.
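The decimation procedure itself is specific to the paper; for orientation only, here is the plain gradient-descent baseline for symmetric matrix factorization that such schemes are measured against. Sizes, noise level, and step size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy rank-3 symmetric target Y = F_true F_true^T + noise.
n, r = 40, 3
F_true = rng.normal(size=(n, r))
noise = 0.1 * rng.normal(size=(n, n))
Y = F_true @ F_true.T + (noise + noise.T) / 2.0

# Gradient descent on ||Y - F F^T||_F^2; for symmetric residual R the
# gradient with respect to F is -4 R F.
F = rng.normal(scale=0.1, size=(n, r))
lr = 1e-3
for _ in range(2000):
    R = Y - F @ F.T
    F += 4.0 * lr * R @ F
print("relative error:", np.linalg.norm(Y - F @ F.T) / np.linalg.norm(Y))
```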
arXiv Detail & Related papers (2023-07-31T10:53:45Z)
- Operator theory, kernels, and Feedforward Neural Networks [0.0]
We show how specific families of positive definite kernels serve as powerful tools in analyses of algorithms for multiple layer feedforward Neural Network models.
Our focus is on particular kernels that adapt well to learning algorithms for datasets/features that display intrinsic self-similarities at feedforward iterations of scaling.
arXiv Detail & Related papers (2023-01-03T19:30:31Z)
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
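The bandwidth claim is easy to probe numerically. The sketch below is a loose illustration rather than the paper's construction: a random sinusoidal first layer with frequency scale omega0 (an assumed stand-in for the adjustable bandwidth) is fit by least squares to a two-frequency signal, and a small omega0 recovers only the low-frequency part.

```python
import numpy as np

# Target signal: a low- plus a high-frequency component.
x = np.linspace(-1, 1, 200)[:, None]
y = np.sin(2 * np.pi * x[:, 0]) + 0.5 * np.sin(12 * np.pi * x[:, 0])

def sin_features(x, omega0, m=50):
    """Random sinusoidal first layer sin(omega0 * (x w + b)); omega0 sets the
    frequency scale, playing the role of the adjustable kernel bandwidth."""
    rng = np.random.default_rng(0)
    W = rng.normal(size=(1, m))
    b = rng.uniform(-np.pi, np.pi, size=m)
    return np.sin(omega0 * (x @ W + b))

# A small omega0 behaves like a narrow low-pass filter: the sin(2*pi*x)
# component is fit, the sin(12*pi*x) one is not. A large omega0 fits both.
for omega0 in (3.0, 30.0):
    Phi = sin_features(x, omega0)
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    print(f"omega0={omega0:5.1f}  mean abs fit error:"
          f" {np.abs(Phi @ coef - y).mean():.4f}")
```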
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
- Joint Embedding Self-Supervised Learning in the Kernel Regime [21.80241600638596]
Self-supervised learning (SSL) produces useful representations of data without access to any labels for classifying the data.
We extend this framework to incorporate algorithms based on kernel methods where embeddings are constructed by linear maps acting on the feature space of a kernel.
We analyze our kernel model on small datasets to identify common features of self-supervised learning algorithms and gain theoretical insights into their performance on downstream tasks.
arXiv Detail & Related papers (2022-09-29T15:53:19Z)
- NeuralEF: Deconstructing Kernels by Deep Neural Networks [47.54733625351363]
Traditional nonparametric solutions based on the Nyström formula suffer from scalability issues.
Recent work has resorted to a parametric approach, i.e., training neural networks to approximate the eigenfunctions.
We show that these problems can be fixed by a new series of objective functions that generalizes across supervised and unsupervised learning problems.
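For reference, the classical Nyström extension that the parametric approach replaces takes only a few lines; the RBF kernel and the landmark count here are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(A, B, gamma=1.0):
    D = ((A[:, None] - B[None]) ** 2).sum(-1)
    return np.exp(-gamma * D)

# Eigendecompose the kernel matrix on m landmark points; the Nystrom formula
# then extends the eigenvectors to eigenfunction values at arbitrary inputs.
Z = rng.normal(size=(100, 2))                 # m = 100 landmarks (arbitrary)
lam, U = np.linalg.eigh(rbf(Z, Z))
lam, U = lam[::-1], U[:, ::-1]                # sort eigenpairs descending

def nystrom_eigfun(X, i):
    """Approximate the i-th kernel eigenfunction at new points X."""
    return np.sqrt(len(Z)) / lam[i] * rbf(X, Z) @ U[:, i]

X_new = rng.normal(size=(5, 2))
print("top eigenfunction at 5 new points:", nystrom_eigfun(X_new, 0))
```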
arXiv Detail & Related papers (2022-04-30T05:31:07Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction for the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions while achieving comparable error bounds, both in theory and in practice.
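The paper's construction for the full NTK is more involved, but the underlying random-feature idea can be shown with plain random ReLU features, which approximate the order-1 arc-cosine kernel of Cho and Saul, a standard building block of the ReLU NTK.

```python
import numpy as np

rng = np.random.default_rng(5)

# Random ReLU features phi(v) = sqrt(2/m) * relu(W v), with W_ij ~ N(0, 1).
d, m = 4, 20000
x, yv = rng.normal(size=d), rng.normal(size=d)
W = rng.normal(size=(m, d))
phi = lambda v: np.sqrt(2.0 / m) * np.maximum(0.0, W @ v)

approx = phi(x) @ phi(yv)                 # Monte Carlo kernel estimate

# Closed form (Cho & Saul): k1(x, y) = ||x|| ||y|| (sin t + (pi - t) cos t) / pi
nx, ny = np.linalg.norm(x), np.linalg.norm(yv)
t = np.arccos(np.clip(x @ yv / (nx * ny), -1.0, 1.0))
exact = nx * ny * (np.sin(t) + (np.pi - t) * np.cos(t)) / np.pi

print(f"random-feature estimate: {approx:.4f}   closed form: {exact:.4f}")
```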
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
- Deep Archimedean Copulas [98.96141706464425]
ACNet is a novel differentiable neural network architecture that enforces the structural properties of Archimedean copula generators.
We show that ACNet is able to both approximate common Archimedean Copulas and generate new copulas which may provide better fits to data.
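An Archimedean copula is determined by a generator psi through C(u, v) = psi(psi^{-1}(u) + psi^{-1}(v)); ACNet learns psi with a network, whereas the sketch below simply evaluates the classical Clayton generator as a point of comparison.

```python
# Clayton generator psi(t) = (1 + t)^(-1/theta) and its inverse; theta > 0
# controls the dependence strength (theta = 2.0 is an arbitrary choice).
theta = 2.0
psi = lambda t: (1.0 + t) ** (-1.0 / theta)
psi_inv = lambda u: u ** (-theta) - 1.0

def clayton_copula(u, v):
    # Archimedean construction: C(u, v) = psi(psi_inv(u) + psi_inv(v)).
    return psi(psi_inv(u) + psi_inv(v))

u, v = 0.3, 0.7
print("C(0.3, 0.7) =", clayton_copula(u, v))
# Agrees with the closed form (u^-theta + v^-theta - 1)^(-1/theta).
print("closed form =", (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta))
```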
arXiv Detail & Related papers (2020-12-05T22:58:37Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm that our multi-level graph network learns discretization-invariant solution operators for PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)