Ising on the Graph: Task-specific Graph Subsampling via the Ising Model
- URL: http://arxiv.org/abs/2402.10206v1
- Date: Thu, 15 Feb 2024 18:58:18 GMT
- Title: Ising on the Graph: Task-specific Graph Subsampling via the Ising Model
- Authors: Maria Bånkestad, Jennifer Andersson, Sebastian Mair, Jens Sjölund
- Abstract summary: We present an approach for subsampling graph structures using an Ising model defined on either the nodes or edges.
Our approach is task-specific as it can learn how to reduce a graph for a specific downstream task in an end-to-end fashion.
- Score: 0.8192907805418581
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reducing a graph while preserving its overall structure is an important
problem with many applications. Typically, the reduction approaches either
remove edges (sparsification) or merge nodes (coarsening) in an unsupervised
way with no specific downstream task in mind. In this paper, we present an
approach for subsampling graph structures using an Ising model defined on
either the nodes or edges and learning the external magnetic field of the Ising
model using a graph neural network. Our approach is task-specific as it can
learn how to reduce a graph for a specific downstream task in an end-to-end
fashion. The loss function of the downstream task does not even have to be
differentiable. We showcase the versatility of our approach on three distinct
applications: image segmentation, 3D shape sparsification, and sparse
approximate matrix inverse determination.
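The core mechanism can be sketched as follows. This is a minimal, illustrative version only, not the paper's implementation: it Gibbs-samples node spins under an Ising energy with pairwise coupling J and an external field h, and keeps the nodes whose spin is +1. In the paper, h would be predicted per node by a graph neural network; here it is fixed by hand, and the toy graph and field values are hypothetical.

```python
import numpy as np

def ising_energy(spins, adj, field, coupling=1.0):
    """E(s) = -J * sum_{(i,j) in E} s_i s_j - sum_i h_i s_i, spins in {-1, +1}."""
    # adj is a symmetric 0/1 matrix, so the quadratic form counts each edge twice
    interaction = -coupling * (spins @ adj @ spins) / 2.0
    external = -field @ spins
    return interaction + external

def gibbs_subsample(adj, field, coupling=1.0, n_sweeps=50, seed=0):
    """Gibbs-sample a spin configuration; nodes with spin +1 form the subsample."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    spins = rng.choice(np.array([-1.0, 1.0]), size=n)
    for _ in range(n_sweeps):
        for i in range(n):
            # effective local field on node i: coupled neighbors plus external field
            local = coupling * adj[i] @ spins + field[i]
            p_up = 1.0 / (1.0 + np.exp(-2.0 * local))  # P(s_i = +1 | rest)
            spins[i] = 1.0 if rng.random() < p_up else -1.0
    return spins

# toy 4-cycle; the (hand-set) external field favors keeping nodes 0 and 2
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
field = np.array([3.0, -3.0, 3.0, -3.0])
spins = gibbs_subsample(adj, field, coupling=0.1)
kept = np.flatnonzero(spins > 0)  # subsampled node set
```

Because the mask is produced by sampling rather than by a differentiable operation, training the field-predicting network can use score-based or other gradient-free estimators, which is why the task loss need not be differentiable.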
Related papers
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z) - Gradient scarcity with Bilevel Optimization for Graph Learning [0.0]
Gradient scarcity occurs when learning a graph by minimizing a loss on a subset of nodes: edges between unlabelled nodes that are far from the labelled ones receive zero gradients.
We give a precise mathematical characterization of this phenomenon, and prove it also emerges in bilevel optimization.
To alleviate this issue, we study several solutions: we propose to resort to latent graph learning using a Graph-to-Graph model (G2G), graph regularization to impose a prior structure on the graph, or optimizing on a larger graph than the original one with a reduced diameter.
arXiv Detail & Related papers (2023-03-24T12:37:43Z) - Node Copying: A Random Graph Model for Effective Graph Sampling [35.957719744856696]
We introduce the node copying model for constructing a distribution over graphs.
We show the usefulness of the copying model in three tasks.
We employ our proposed model to mitigate the effect of adversarial attacks on the graph topology.
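A deliberately simplified sketch of the copying idea, under the assumption that "node copying" means replacing a node's connectivity with that of another node; the paper's actual model differs (for instance, in how the copied node is chosen), so treat the uniform choice and `copy_prob` parameter here as hypothetical.

```python
import numpy as np

def node_copy_sample(adj, copy_prob=0.5, seed=0):
    """Sample one perturbed graph: with probability copy_prob, node i's
    connectivity is replaced by that of a uniformly chosen node j."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    new_adj = adj.copy()
    for i in range(n):
        if rng.random() < copy_prob:
            j = int(rng.integers(0, n))
            new_adj[i] = adj[j]  # node i "copies" node j's edges
    new_adj = np.maximum(new_adj, new_adj.T)  # re-symmetrize
    np.fill_diagonal(new_adj, 0)              # drop self-loops
    return new_adj
```

Repeated draws from such a model yield a distribution over graphs centered on the observed topology, which is what makes it useful for sampling and for smoothing out adversarial edge perturbations.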
arXiv Detail & Related papers (2022-08-04T04:04:49Z) - GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
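The projected-gradient idea for handling edge discreteness can be sketched as below: relax the adjacency matrix to continuous values, take gradient steps on it, and project each iterate back onto the feasible set (symmetric, entries in [0, 1], zero diagonal). The loss whose gradient is used, and the sparsity and smoothness terms, belong to the paper; this sketch shows only the projection step, and the toy gradient values are made up for illustration.

```python
import numpy as np

def project_adjacency(a):
    """Project a relaxed adjacency onto symmetric matrices with entries
    in [0, 1] and zero diagonal (feasible set for an undirected simple graph)."""
    a = (a + a.T) / 2.0          # symmetrize
    a = np.clip(a, 0.0, 1.0)     # box constraint
    np.fill_diagonal(a, 0.0)     # no self-loops
    return a

def pgd_step(a, grad, lr=0.1):
    """One projected-gradient step on the relaxed adjacency."""
    return project_adjacency(a - lr * grad)

# toy example: the (made-up) gradient pushes edge (0,1) toward 1, (0,2) toward 0
a = np.full((3, 3), 0.5)
np.fill_diagonal(a, 0.0)
grad = np.array([[0.0, -8.0,  8.0],
                 [-8.0, 0.0,  0.0],
                 [8.0,  0.0,  0.0]])
a = pgd_step(a, grad, lr=0.1)
```

After the final iteration, the continuous entries can be thresholded or sampled to recover a discrete edge set.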
arXiv Detail & Related papers (2021-06-05T07:07:52Z) - A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates the fake neighbor nodes as the enhanced negative samples from the implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z) - Unsupervised Deep Manifold Attributed Graph Embedding [33.1202078188891]
We propose a novel graph embedding framework named Deep Manifold Attributed Graph Embedding (DMAGE)
A node-to-node geodesic similarity is proposed to compute the inter-node similarity between the data space and the latent space.
We then design a new network structure with fewer aggregation layers to alleviate the oversmoothing problem.
arXiv Detail & Related papers (2021-04-27T08:47:39Z) - Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z) - Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In this formalism, a link prediction problem is converted to a graph classification task.
We propose to seek a radically different and novel path by making use of the line graphs in graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Therefore, link prediction problems in the original graph can be equivalently solved as a node classification problem in its corresponding line graph, instead of a graph classification task.
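The line-graph construction described above can be sketched in a few lines of pure Python (a minimal version; the paper's method then runs a graph neural network over this structure to classify the edge-nodes):

```python
from itertools import combinations

def line_graph(edges):
    """Each edge of the original graph becomes a node of the line graph;
    two line-graph nodes are adjacent iff their edges share an endpoint."""
    nodes = [tuple(sorted(e)) for e in edges]   # line-graph nodes = original edges
    lg_edges = [(e1, e2) for e1, e2 in combinations(nodes, 2)
                if set(e1) & set(e2)]           # shared endpoint => adjacency
    return nodes, lg_edges

# triangle: its line graph is again a triangle
nodes, lg_edges = line_graph([(0, 1), (1, 2), (0, 2)])
# scoring a candidate link (u, v) now amounts to classifying line-graph node (u, v)
```

This is what makes the reformulation work: a label on an original edge becomes a label on a line-graph node, so standard node-classification machinery applies directly.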
arXiv Detail & Related papers (2020-10-20T05:54:31Z) - Wasserstein-based Graph Alignment [56.84964475441094]
We propose a new formulation of the one-to-many graph alignment problem, which aims at matching a node in the smaller graph with one or more nodes in the larger graph.
We show that our method leads to significant improvements with respect to the state-of-the-art algorithms for each of these tasks.
arXiv Detail & Related papers (2020-03-12T22:31:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.