GraphOpt: Learning Optimization Models of Graph Formation
- URL: http://arxiv.org/abs/2007.03619v1
- Date: Tue, 7 Jul 2020 16:51:39 GMT
- Title: GraphOpt: Learning Optimization Models of Graph Formation
- Authors: Rakshit Trivedi, Jiachen Yang, Hongyuan Zha
- Abstract summary: We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it with a maximum entropy inverse reinforcement learning algorithm.
- Score: 72.75384705298303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Formation mechanisms are fundamental to the study of complex networks, but
learning them from observations is challenging. In real-world domains, one
often has access only to the final constructed graph, instead of the full
construction process, and observed graphs exhibit complex structural
properties. In this work, we propose GraphOpt, an end-to-end framework that
jointly learns an implicit model of graph structure formation and discovers an
underlying optimization mechanism in the form of a latent objective function.
The learned objective can serve as an explanation for the observed graph
properties, thereby lending itself to transfer across different graphs within a
domain. GraphOpt poses link formation in graphs as a sequential decision-making
process and solves it with a maximum entropy inverse reinforcement learning
algorithm. Further, it employs a novel continuous latent action space that aids
scalability. Empirically, we demonstrate that GraphOpt discovers a latent
objective transferable across graphs with different characteristics. GraphOpt
also learns a robust stochastic policy that achieves competitive link
prediction performance without being explicitly trained on this task and
further enables construction of graphs with properties similar to those of the
observed graph.
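The mechanism described in the abstract, link formation as a sequential decision process driven by a latent objective, can be illustrated with a toy sketch. Everything below (the environment class, the action-decoding rule, the fixed reward weights) is a hypothetical stand-in rather than the authors' implementation; GraphOpt learns both the objective and the policy with maximum entropy IRL.

```python
import numpy as np

class GraphFormationEnv:
    """State: the adjacency matrix built so far.
    Action: a continuous latent vector, decoded to the candidate edge it
    scores highest (mirroring the continuous latent action space)."""

    def __init__(self, node_feats):
        self.x = node_feats                       # (n, d) node features
        self.n = node_feats.shape[0]
        self.adj = np.zeros((self.n, self.n))

    def step(self, z):
        scores = self.x @ z                       # per-node affinity to z
        best, pair = -np.inf, None
        for i in range(self.n):
            for j in range(i + 1, self.n):
                if self.adj[i, j] == 0 and scores[i] + scores[j] > best:
                    best, pair = scores[i] + scores[j], (i, j)
        if pair is not None:                      # add the decoded edge
            i, j = pair
            self.adj[i, j] = self.adj[j, i] = 1
        return self.adj.copy()

def latent_reward(adj, w):
    # Stand-in for the learned objective: a linear function of two
    # simple graph statistics (edge count, degree variance).
    deg = adj.sum(axis=1)
    return float(np.array([adj.sum() / 2, deg.var()]) @ w)

rng = np.random.default_rng(0)
env = GraphFormationEnv(rng.normal(size=(5, 3)))
w = np.array([1.0, -0.5])                         # placeholder weights
for _ in range(4):                                # four link-formation steps
    adj = env.step(rng.normal(size=3))
print("edges:", int(adj.sum() / 2), "reward:", latent_reward(adj, w))
```

In the paper the reward weights would be recovered from observed graphs by inverse RL; here they are fixed only so the rollout runs end to end.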
Related papers
- Motif-Consistent Counterfactuals with Adversarial Refinement for Graph-Level Anomaly Detection [30.618065157205507]
We propose a novel approach, Motif-consistent Counterfactuals with Adversarial Refinement (MotifCAR) for graph-level anomaly detection.
The model combines the motif of one graph, the core subgraph containing the identification (category) information, and the contextual subgraph of another graph to produce a raw counterfactual graph.
MotifCAR can generate high-quality counterfactual graphs.
arXiv Detail & Related papers (2024-07-18T08:04:57Z) - GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks [72.01829954658889]
This paper introduces a mathematical definition of this problem setting: structure learning that generalizes across graphs.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z) - SynGraphy: Succinct Summarisation of Large Networks via Small Synthetic
Representative Graphs [4.550112751061436]
We describe SynGraphy, a method for visually summarising the structure of large network datasets.
It works by drawing smaller graphs generated to have similar structural properties to the input graphs.
arXiv Detail & Related papers (2023-02-15T16:00:15Z) - Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z) - State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
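The unrolled proximal-gradient idea behind GDNs can be sketched in a few lines. This is a minimal illustration under an assumed quadratic forward model T = A + 0.5*A@A with fixed step sizes and thresholds; an actual GDN would learn these parameters per unrolled layer.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the l1 penalty: promotes sparse edge weights.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def gdn_unrolled(T_obs, K=30, step=0.1, lam=0.02):
    """Toy unrolled deconvolution: estimate a sparse latent adjacency A
    from an observed graph T = A + 0.5 * A @ A (assumed forward model)."""
    A = np.zeros_like(T_obs)
    for _ in range(K):                              # K truncated iterations
        residual = A + 0.5 * (A @ A) - T_obs
        # Gradient of 0.5 * ||A + 0.5*A@A - T||_F^2 for symmetric A:
        grad = residual + 0.5 * (A @ residual + residual @ A)
        A = soft_threshold(A - step * grad, lam)
        A = np.clip((A + A.T) / 2, 0.0, None)       # symmetric, nonnegative
    return A

# Usage: a 4-node ring as the latent graph.
A_true = np.array([[0, 1, 0, 1],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0]], dtype=float)
T = A_true + 0.5 * A_true @ A_true
A_hat = gdn_unrolled(T)
```

Replacing the fixed `step` and `lam` with learnable per-layer parameters and training on (T, A) pairs is what turns this iteration into a supervised network.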
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined as Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z) - Kernel-based Graph Learning from Smooth Signals: A Functional Viewpoint [15.577175610442351]
We propose a novel graph learning framework that incorporates the node-side and observation-side information.
We use graph signals as functions in the reproducing kernel Hilbert space associated with a Kronecker product kernel.
We develop a novel graph-based regularisation method which, when combined with the Kronecker product kernel, enables our model to capture both the dependency explained by the graph and the dependency due to graph signals.
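The Kronecker product kernel construction can be sketched as follows. The RBF kernels and the random feature matrices are hypothetical placeholders; the paper's actual kernels and data are not specified in this summary.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Gaussian (RBF) kernel matrix between the rows of X.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
node_feats = rng.normal(size=(4, 3))   # hypothetical node-side features
obs_feats = rng.normal(size=(5, 2))    # hypothetical observation-side features

# The Kronecker product couples the two dependency types:
# entry ((i, s), (j, t)) = K_node[i, j] * K_obs[s, t].
K_node = rbf_kernel(node_feats)
K_obs = rbf_kernel(obs_feats)
K = np.kron(K_node, K_obs)             # (4*5) x (4*5) kernel matrix
```

Since the Kronecker product of two positive semidefinite matrices is positive semidefinite, K is itself a valid kernel over (node, observation) pairs.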
arXiv Detail & Related papers (2020-08-23T16:04:23Z) - Goal-directed graph construction using reinforcement learning [3.291429094499946]
We formulate the construction of a graph as a decision-making process in which a central agent creates topologies by trial and error.
We propose an algorithm based on reinforcement learning and graph neural networks to learn graph construction and improvement strategies.
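The trial-and-error construction loop can be sketched as below. A greedy sampling rule and a reachability objective stand in for the trained RL policy and GNN value estimates; all names and the objective are illustrative, not the paper's.

```python
import numpy as np
from itertools import combinations

def reachability(adj):
    # Transitive closure: reach[i, j] is True if j is reachable from i.
    n = len(adj)
    reach = (adj + np.eye(n, dtype=int)) > 0
    for _ in range(n):
        reach = (reach.astype(int) @ reach.astype(int)) > 0
    return reach

def build_graph(n, budget, n_trials=5, seed=1):
    """Greedy trial-and-error construction: at each step, sample a few
    candidate edges, keep the one that most improves the objective
    (number of reachable node pairs)."""
    rng = np.random.default_rng(seed)
    adj = np.zeros((n, n), dtype=int)
    for _ in range(budget):
        cands = [(i, j) for i, j in combinations(range(n), 2) if adj[i, j] == 0]
        idx = rng.choice(len(cands), size=min(n_trials, len(cands)), replace=False)
        best_gain, best_pair = -1, None
        base = reachability(adj).sum()
        for k in idx:
            i, j = cands[k]
            adj[i, j] = adj[j, i] = 1            # try the edge...
            gain = reachability(adj).sum() - base
            adj[i, j] = adj[j, i] = 0            # ...then undo it
            if gain > best_gain:
                best_gain, best_pair = gain, (i, j)
        i, j = best_pair
        adj[i, j] = adj[j, i] = 1                # commit the best trial
    return adj

adj = build_graph(n=6, budget=5)
```

In the paper the per-step edge choice is made by a learned policy over GNN embeddings rather than by exhaustive trial evaluation.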
arXiv Detail & Related papers (2020-01-30T12:11:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.