Semi-supervised Learning for Aggregated Multilayer Graphs Using Diffuse
Interface Methods and Fast Matrix Vector Products
- URL: http://arxiv.org/abs/2007.05239v3
- Date: Fri, 12 Mar 2021 13:15:50 GMT
- Title: Semi-supervised Learning for Aggregated Multilayer Graphs Using Diffuse
Interface Methods and Fast Matrix Vector Products
- Authors: Kai Bergermann, Martin Stoll, Toni Volkmer
- Abstract summary: We generalize a graph-based semi-supervised classification technique based on diffuse interface methods to multilayer graphs.
We present a very flexible approach that interprets high-dimensional data in a low-dimensional multilayer graph representation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We generalize a graph-based multiclass semi-supervised classification
technique based on diffuse interface methods to multilayer graphs. Besides the
treatment of various applications with an inherent multilayer structure, we
present a very flexible approach that interprets high-dimensional data in a
low-dimensional multilayer graph representation. Highly efficient numerical
methods involving the spectral decomposition of the corresponding differential
graph operators as well as fast matrix-vector products based on the
nonequispaced fast Fourier transform (NFFT) enable the rapid treatment of large
and high-dimensional data sets. We perform various numerical tests putting a
special focus on image segmentation. In particular, we test the performance of
our method on data sets with up to 10 million nodes per layer as well as up to
10^4 dimensions, resulting in graphs with up to 52 layers. While all presented
numerical experiments can be run on an average laptop computer, the linear
dependence per iteration step of the runtime on the network size in all stages
of our algorithm makes it scalable to even larger and higher-dimensional
problems.
Related papers
- Fast and Scalable Semi-Supervised Learning for Multi-View Subspace Clustering [13.638434337947302]
FSSMSC addresses the high computational complexity common to existing approaches.
The method generates a consensus anchor graph across all views, representing each data point as a sparse linear combination of chosen landmarks.
The effectiveness and efficiency of FSSMSC are validated through extensive experiments on multiple benchmark datasets of varying scales.
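The anchor-graph idea in this summary — representing each data point as a sparse convex combination of a small set of landmarks — can be sketched as follows. This is a generic anchor-graph construction under assumed choices (random landmarks, 3 nearest anchors, Gaussian weights), not FSSMSC's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))                 # toy data, one "view"
m, s = 32, 3                                   # landmarks, nonzeros per row

anchors = X[rng.choice(len(X), m, replace=False)]   # random landmark choice
d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)

# Each row of Z: a sparse convex combination over the s nearest landmarks.
Z = np.zeros((len(X), m))
for i, row in enumerate(d2):
    nn = np.argpartition(row, s)[:s]           # s nearest landmarks
    w = np.exp(-row[nn] / row[nn].mean())      # local Gaussian weights
    Z[i, nn] = w / w.sum()                     # weights sum to one

# Low-rank surrogate adjacency W ~ Z diag(Z^T 1)^{-1} Z^T, applied lazily so a
# matrix-vector product costs O(n m) rather than O(n^2).
Lam_inv = 1.0 / Z.sum(0)
v = rng.normal(size=len(X))
Wv = Z @ (Lam_inv * (Z.T @ v))
print(Z.shape, Wv.shape)
```

The n-by-m matrix Z never materializes the full n-by-n graph, which is how anchor-based methods sidestep the quadratic cost this summary refers to.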
arXiv Detail & Related papers (2024-08-11T06:54:00Z)
- Sparse Training of Discrete Diffusion Models for Graph Generation [45.103518022696996]
We introduce SparseDiff, a novel diffusion model based on the observation that almost all large graphs are sparse.
By selecting a subset of edges, SparseDiff effectively leverages sparse graph representations both during the noising process and within the denoising network.
Our model demonstrates state-of-the-art performance across multiple metrics on both small and large datasets.
arXiv Detail & Related papers (2023-11-03T16:50:26Z)
- Convolutional Learning on Multigraphs [153.20329791008095]
We develop convolutional information processing on multigraphs and introduce convolutional multigraph neural networks (MGNNs).
To capture the complex dynamics of information diffusion within and across each of the multigraph's classes of edges, we formalize a convolutional signal processing model.
We develop a multigraph learning architecture, including a sampling procedure to reduce computational complexity.
The introduced architecture is applied towards optimal wireless resource allocation and a hate speech localization task, offering improved performance over traditional graph neural networks.
arXiv Detail & Related papers (2022-09-23T00:33:04Z)
- Condensing Graphs via One-Step Gradient Matching [50.07587238142548]
We propose a one-step gradient matching scheme, which performs gradient matching for only one single step without training the network weights.
Our theoretical analysis shows this strategy can generate synthetic graphs that lead to lower classification loss on real graphs.
In particular, we are able to reduce the dataset size by 90% while approximating up to 98% of the original performance.
arXiv Detail & Related papers (2022-06-15T18:20:01Z)
- Multiple Graph Learning for Scalable Multi-view Clustering [26.846642220480863]
We propose an efficient multiple graph learning model via a small number of anchor points and tensor Schatten p-norm minimization.
Specifically, we construct a hidden and tractable large graph by anchor graph for each view.
We develop an efficient algorithm, which scales linearly with the data size, to solve our proposed model.
arXiv Detail & Related papers (2021-06-29T13:10:56Z)
- Multilayer Graph Clustering with Optimized Node Embedding [70.1053472751897]
Multilayer graph clustering aims at dividing the graph nodes into categories or communities.
We propose a clustering-friendly embedding of the layers of a given multilayer graph.
Experiments show that our method leads to a significant improvement.
arXiv Detail & Related papers (2021-03-30T17:36:40Z)
- Massively Parallel Graph Drawing and Representation Learning [13.736789987448466]
Graph embedding, i.e. converting the vertices of a graph into numerical vectors, is a data mining task of high importance.
We propose MulticoreGEMPE, an information-theoretic method which can generate low and high-dimensional vectors.
arXiv Detail & Related papers (2020-11-06T17:18:14Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to improved clustering results.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework to such non-trivial ERGs that result in dyadic independence (i.e., edge independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.