Introducing Graph Learning over Polytopic Uncertain Graph
- URL: http://arxiv.org/abs/2404.08176v1
- Date: Fri, 12 Apr 2024 00:55:07 GMT
- Title: Introducing Graph Learning over Polytopic Uncertain Graph
- Authors: Masako Kishida, Shunsuke Ono
- Abstract summary: This abstract introduces a class of graph learning applicable to cases where the underlying graph has polytopic uncertainty.
By incorporating this assumption that the graph lies in a polytopic set into two established graph learning frameworks, we find that our approach yields better results with less computation.
- Score: 7.165583602747691
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This extended abstract introduces a class of graph learning applicable to cases where the underlying graph has polytopic uncertainty, i.e., the graph is not exactly known, but its parameters or properties vary within a known range. By incorporating this assumption that the graph lies in a polytopic set into two established graph learning frameworks, we find that our approach yields better results with less computation.
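As a rough illustration (not taken from the paper), a polytopic uncertainty set over graphs can be sketched as the set of convex combinations of a few known "vertex" graphs; the true graph Laplacian is assumed to lie somewhere inside this polytope. The vertex matrices and the mixing weight below are made up for demonstration.

```python
import numpy as np

def laplacian(W):
    """Graph Laplacian L = D - W from a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

# Two known "vertex" graphs of the polytope (3 nodes each).
W1 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # path graph
W2 = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])  # triangle
L1, L2 = laplacian(W1), laplacian(W2)

# Any convex combination of the vertex Laplacians lies in the
# polytopic uncertainty set; theta is unknown in practice.
theta = 0.3
L = theta * L1 + (1 - theta) * L2

# The combination is itself a valid Laplacian: symmetric with zero row sums.
assert np.allclose(L, L.T)
assert np.allclose(L.sum(axis=1), 0.0)
```

Because the set is convex and finitely generated, a learning problem posed over it can often be reduced to conditions on the vertex graphs alone, which is one plausible source of the reduced computation the abstract mentions.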
Related papers
- Multi-View Graph Learning with Graph-Tuple [13.991938982402807]
We introduce a multi-view graph-tuple framework for efficient Graph Neural Networks (GNNs). Instead of a single graph, our graph-tuple framework partitions the graph into disjoint subgraphs, capturing primary local interactions and weaker, long-range connections. We instantiate our framework on two scientific domains: molecular property prediction from feature-scarce Coulomb matrices and cosmological parameter inference from geometric point clouds.
arXiv Detail & Related papers (2025-10-11T20:57:03Z) - Learning Kronecker-Structured Graphs from Smooth Signals [8.594140167290098]
Graph learning, or network inference, is a prominent problem in graph signal processing (GSP). We propose an alternating scheme that exploits the Cartesian product structure to tackle this graph learning problem. We conduct experiments and demonstrate our approach's efficacy and superior performance compared to existing methods.
arXiv Detail & Related papers (2025-05-14T21:53:37Z) - Graph data augmentation with Gromov-Wasserstein Barycenters [0.0]
A novel augmentation strategy for graphs that operates in a non-Euclidean space has been proposed.
Using a non-Euclidean distance, specifically the Gromov-Wasserstein distance, results in better approximations of the graphon.
This framework also provides a means to validate different graphon estimation approaches.
arXiv Detail & Related papers (2024-04-12T10:22:55Z) - PlanE: Representation Learning over Planar Graphs [9.697671872347131]
This work is inspired by the classical planar graph isomorphism algorithm of Hopcroft and Tarjan.
PlanE includes architectures which can learn complete invariants over planar graphs while remaining practically scalable.
arXiv Detail & Related papers (2023-07-03T17:45:01Z) - Expectation-Complete Graph Representations with Homomorphisms [5.939858158928473]
We are interested in efficient alternatives that become arbitrarily expressive with increasing resources.
Our approach is based on Lovász's characterisation of graph isomorphism through an infinite-dimensional vector of homomorphism counts.
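Lovász's characterisation says two graphs are isomorphic iff they admit the same number of homomorphisms from every pattern graph F. A minimal brute-force sketch of homomorphism counting (illustrative only, not the paper's method) is:

```python
from itertools import product

def count_homomorphisms(F_edges, F_nodes, G_edges, G_nodes):
    """Brute-force count of graph homomorphisms F -> G: maps phi such
    that every edge (u, v) of F is sent to an edge (phi(u), phi(v)) of G."""
    G_adj = {(u, v) for u, v in G_edges} | {(v, u) for u, v in G_edges}
    count = 0
    for phi in product(G_nodes, repeat=len(F_nodes)):
        m = dict(zip(F_nodes, phi))
        if all((m[u], m[v]) in G_adj for u, v in F_edges):
            count += 1
    return count

triangle = [(0, 1), (1, 2), (0, 2)]
# hom(K2, G) = 2 * |E(G)|: each edge yields two ordered adjacent pairs.
print(count_homomorphisms([(0, 1)], [0, 1], triangle, [0, 1, 2]))  # 6
```

Counting against a finite, growing family of patterns gives the "arbitrarily expressive with increasing resources" representations the summary describes.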
arXiv Detail & Related papers (2023-06-09T12:12:07Z) - Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z) - State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z) - A Topological characterisation of Weisfeiler-Leman equivalence classes [0.0]
Graph Neural Networks (GNNs) are learning models aimed at processing graphs and signals on graphs.
In this article, we rely on the theory of covering spaces to fully characterize the classes of graphs that GNNs cannot distinguish.
We show that the number of indistinguishable graphs in our dataset grows super-exponentially with the number of nodes.
arXiv Detail & Related papers (2022-06-23T17:28:55Z) - Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning [84.35102534158621]
We study pre-trained language models that generate explanation graphs in an end-to-end manner.
We propose simple yet effective ways of graph perturbations via node and edge edit operations.
Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs.
arXiv Detail & Related papers (2022-04-11T00:58:27Z) - Generating a Doppelganger Graph: Resembling but Distinct [5.618335078130568]
We propose an approach to generating a doppelganger graph that resembles a given one in many graph properties.
The approach is an orchestration of graph representation learning, generative adversarial networks, and graph realization algorithms.
arXiv Detail & Related papers (2021-01-23T22:08:27Z) - Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In this formalism, a link prediction problem is converted to a graph classification task.
We propose to seek a radically different and novel path by making use of the line graphs in graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Therefore, link prediction problems in the original graph can be equivalently solved as a node classification problem in its corresponding line graph, instead of a graph classification task.
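A minimal sketch of this reformulation (illustrative, not the paper's exact pipeline): add the candidate link to the graph, build the line graph, and the candidate becomes a single line-graph node to classify. Two line-graph nodes are adjacent iff their corresponding edges share an endpoint.

```python
from itertools import combinations

def line_graph(edges):
    """Adjacency of the line graph: its nodes are the given edges."""
    nodes = [tuple(sorted(e)) for e in edges]
    adj = {e: set() for e in nodes}
    for e, f in combinations(nodes, 2):
        if set(e) & set(f):  # edges sharing an endpoint become neighbours
            adj[e].add(f)
            adj[f].add(e)
    return adj

# Path graph 0-1-2-3; we want to predict whether link (0, 3) exists.
G_edges = [(0, 1), (1, 2), (2, 3)]
candidate = (0, 3)
LG = line_graph(G_edges + [candidate])

# The candidate link is now one node of the line graph, and its
# line-graph neighbours are the original edges touching nodes 0 or 3.
assert candidate in LG
assert LG[candidate] == {(0, 1), (2, 3)}
```

A node classifier on `LG` (e.g. a GNN) can then score the candidate node directly, instead of classifying an extracted subgraph.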
arXiv Detail & Related papers (2020-10-20T05:54:31Z) - Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z) - Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods, unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE) are proposed.
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.