Learning the Right Layers: a Data-Driven Layer-Aggregation Strategy for
Semi-Supervised Learning on Multilayer Graphs
- URL: http://arxiv.org/abs/2306.00152v1
- Date: Wed, 31 May 2023 19:50:11 GMT
- Title: Learning the Right Layers: a Data-Driven Layer-Aggregation Strategy for
Semi-Supervised Learning on Multilayer Graphs
- Authors: Sara Venturini, Andrea Cristofari, Francesco Rinaldi, Francesco
Tudisco
- Abstract summary: Clustering (or community detection) on multilayer graphs poses several additional complications.
One of the major challenges is to establish the extent to which each layer contributes to the cluster assignment.
We propose a parameter-free Laplacian-regularized model that learns an optimal nonlinear combination of the different layers from the available input labels.
- Score: 2.752817022620644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Clustering (or community detection) on multilayer graphs poses several
additional complications with respect to standard graphs as different layers
may be characterized by different structures and types of information. One of
the major challenges is to establish the extent to which each layer contributes
to the cluster assignment in order to effectively take advantage of the
multilayer structure and improve upon the classification obtained using the
individual layers or their union. However, making an informed a-priori
assessment about the clustering information content of the layers can be very
complicated. In this work, we assume a semi-supervised learning setting, where
the class of a small percentage of nodes is initially provided, and we propose
a parameter-free Laplacian-regularized model that learns an optimal nonlinear
combination of the different layers from the available input labels. The
learning algorithm is based on a Frank-Wolfe optimization scheme with inexact
gradient, combined with a modified Label Propagation iteration. We provide a
detailed convergence analysis of the algorithm and extensive experiments on
synthetic and real-world datasets, showing that the proposed method compares
favourably with a variety of baselines and outperforms each individual layer
when used in isolation.
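The abstract outlines the algorithmic core: learn simplex-constrained layer weights via Frank-Wolfe with an inexact gradient, while labels are diffused by a Label Propagation iteration. The snippet below is a minimal, illustrative sketch of that general recipe, not the authors' implementation: it aggregates per-layer normalized adjacencies with learnable simplex weights, runs plain linear label spreading, and approximates the gradient by finite differences (one simple way to realize an "inexact gradient"). The function names, the held-out validation loss, and the toy data are all assumptions.

```python
import numpy as np

def normalized_adjacency(A):
    # Symmetric normalization D^{-1/2} A D^{-1/2}.
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def label_propagation(S, Y0, alpha=0.9, iters=50):
    # Linear label spreading: F <- (1 - alpha) Y0 + alpha S F.
    F = Y0.copy()
    for _ in range(iters):
        F = (1 - alpha) * Y0 + alpha * (S @ F)
    return F

def fw_loss(w, layers, Y, train_mask, val_mask):
    # Propagate the training labels through the weighted layer
    # aggregate and score the result on held-out labelled nodes.
    S = sum(wk * normalized_adjacency(A) for wk, A in zip(w, layers))
    F = label_propagation(S, np.where(train_mask[:, None], Y, 0.0))
    return float(np.sum((F[val_mask] - Y[val_mask]) ** 2))

def frank_wolfe_weights(layers, Y, train_mask, val_mask, steps=20, eps=1e-4):
    K = len(layers)
    w = np.full(K, 1.0 / K)                    # start at the simplex centre
    for t in range(steps):
        # Inexact gradient via forward finite differences
        # (the perturbation briefly leaves the simplex; fine for a sketch).
        f0 = fw_loss(w, layers, Y, train_mask, val_mask)
        g = np.array([(fw_loss(w + eps * np.eye(K)[k], layers, Y,
                               train_mask, val_mask) - f0) / eps
                      for k in range(K)])
        s = np.zeros(K)
        s[np.argmin(g)] = 1.0                  # linear minimizer over the simplex is a vertex
        w = w + (2.0 / (t + 2.0)) * (s - w)    # standard Frank-Wolfe step size
    return w

# Toy multilayer graph: layer 0 matches the two classes, layer 1 is pure noise.
n = 20
Y = np.zeros((n, 2)); Y[:10, 0] = 1.0; Y[10:, 1] = 1.0
A_good = np.zeros((n, n)); A_good[:10, :10] = 1.0; A_good[10:, 10:] = 1.0
np.fill_diagonal(A_good, 0.0)
A_noise = np.ones((n, n)); np.fill_diagonal(A_noise, 0.0)
train_mask = np.zeros(n, dtype=bool); train_mask[[0, 1, 10, 11]] = True
val_mask = ~train_mask
w = frank_wolfe_weights([A_good, A_noise], Y, train_mask, val_mask)
print("learned layer weights:", w)
```

As a sanity check, the weight of the informative layer should dominate on this toy example; the paper's actual objective, gradient estimate, and modified Label Propagation iteration differ in their details.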
Related papers
- Exploring Selective Layer Fine-Tuning in Federated Learning [48.470385357429215]
Federated learning (FL) has emerged as a promising paradigm for fine-tuning foundation models using distributed data.
We study selective layer fine-tuning in FL, emphasizing a flexible approach that allows the clients to adjust their selected layers according to their local data and resources.
arXiv Detail & Related papers (2024-08-28T07:48:39Z)
- LayerMatch: Do Pseudo-labels Benefit All Layers? [77.59625180366115]
Semi-supervised learning offers a promising solution to mitigate the dependency on labeled data.
We develop two layer-specific pseudo-label strategies, termed Grad-ReLU and Avg-Clustering.
Our approach consistently demonstrates exceptional performance on standard semi-supervised learning benchmarks.
arXiv Detail & Related papers (2024-06-20T11:25:50Z)
- Towards Optimal Customized Architecture for Heterogeneous Federated Learning with Contrastive Cloud-Edge Model Decoupling [20.593232086762665]
Federated learning, as a promising distributed learning paradigm, enables collaborative training of a global model across multiple network edge clients without the need for central data collection.
We propose FedCMD, a novel federated learning framework with model decoupling tailored to cloud-edge-supported federated learning.
Our motivation is that, through a deep investigation of the performance of selecting different neural network layers as the personalized head, we found that rigidly assigning the last layer as the personalized head, as current studies do, is not always optimal.
arXiv Detail & Related papers (2024-03-04T05:10:28Z)
- WLD-Reg: A Data-dependent Within-layer Diversity Regularizer [98.78384185493624]
Neural networks are composed of multiple layers arranged in a hierarchical structure and jointly trained with gradient-based optimization.
We propose to complement this traditional 'between-layer' feedback with additional 'within-layer' feedback to encourage the diversity of the activations within the same layer.
We present an extensive empirical study confirming that the proposed approach enhances the performance of several state-of-the-art neural network models in multiple tasks.
arXiv Detail & Related papers (2023-01-03T20:57:22Z)
- Learn Layer-wise Connections in Graph Neural Networks [12.363386808994079]
We propose a framework LLC (Learn Layer-wise Connections) based on neural architecture search (NAS) to learn adaptive connections among intermediate layers in GNNs.
LLC contains one novel search space which consists of 3 types of blocks and learnable connections, and one differentiable search algorithm to enable the efficient search process.
Extensive experiments on five real-world datasets are conducted, and the results show that the searched layer-wise connections can not only improve the performance but also alleviate the over-smoothing problem.
arXiv Detail & Related papers (2021-12-27T09:33:22Z)
- Gated recurrent units and temporal convolutional network for multilabel classification [122.84638446560663]
This work proposes a new ensemble method for managing multilabel classification.
The core of the proposed approach combines a set of gated recurrent units and temporal convolutional neural networks trained with variants of the Adam gradients optimization approach.
arXiv Detail & Related papers (2021-10-09T00:00:16Z)
- Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between the graphs of different views by minimizing the tensor Schatten p-norm.
Our proposed algorithm is time-economical, obtains stable results, and scales well with the data size.
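For a single matrix, the tensor Schatten p-norm mentioned above reduces to the l_p norm of the singular values: p = 1 gives the nuclear norm and p = 2 the Frobenius norm. The following is a generic illustration of that definition, not the paper's tensor construction; the function name is ours:

```python
import numpy as np

def schatten_p_norm(M, p):
    # Schatten p-norm: the l_p norm of the singular values of M.
    # p = 1 is the nuclear norm, p = 2 the Frobenius norm.
    s = np.linalg.svd(M, compute_uv=False)
    return float(np.sum(s ** p) ** (1.0 / p))

M = np.diag([3.0, 4.0])           # singular values are 4 and 3
nuclear = schatten_p_norm(M, 1)   # 3 + 4 = 7
frob = schatten_p_norm(M, 2)      # sqrt(9 + 16) = 5
```

Smaller p (toward 0) pushes the singular-value spectrum toward sparsity, which is why Schatten p-norm minimization is used as a low-rank-promoting surrogate in multi-view clustering.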
arXiv Detail & Related papers (2021-08-15T13:14:28Z)
- Clustering multilayer graphs with missing nodes [4.007017852999008]
Clustering is a fundamental problem in network analysis where the goal is to regroup nodes with similar connectivity profiles.
We propose a new framework that allows for layers to be defined on different sets of nodes.
arXiv Detail & Related papers (2021-03-04T18:56:59Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method produces well-separated clusters with respect to the learned representative graph.
We also derive a clustering algorithm for solving the resulting clustering problem.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- Learning Multi-layer Graphs and a Common Representation for Clustering [13.90938823562779]
We focus on graph learning from multi-view data of shared entities for spectral clustering.
We propose an efficient solver based on alternating minimization to solve the problem.
Numerical experiments on synthetic and real datasets demonstrate that the proposed algorithm outperforms state-of-the-art multi-view clustering techniques.
arXiv Detail & Related papers (2020-10-23T11:12:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.