Knowledge Propagation over Conditional Independence Graphs
- URL: http://arxiv.org/abs/2308.05857v1
- Date: Thu, 10 Aug 2023 21:06:18 GMT
- Title: Knowledge Propagation over Conditional Independence Graphs
- Authors: Urszula Chajewska, Harsh Shrivastava
- Abstract summary: Conditional Independence (CI) graphs capture dependence between features.
We propose algorithms for performing knowledge propagation over the CI graphs.
Our experiments demonstrate that our techniques improve upon the state-of-the-art on the publicly available Cora and PubMed datasets.
- Score: 2.692919446383274
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A Conditional Independence (CI) graph is a special type of
Probabilistic Graphical Model (PGM) in which feature connections are modeled
as an undirected graph and the edge weights indicate the strength of the
partial correlation between features. Because CI graphs capture direct
dependence between features, they have been attracting growing interest
within the research community as a way of gaining insight into systems from
various domains, in particular for discovering the domain topology. In this
work, we propose algorithms for performing knowledge propagation over CI
graphs. Our experiments demonstrate that our techniques improve upon the
state-of-the-art on the publicly available Cora and PubMed datasets.
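The abstract describes CI graphs whose edge weights are partial correlations, which can be read off the inverse covariance (precision) matrix, and then propagating knowledge over the resulting graph. A minimal sketch of both ideas, assuming a simple Gaussian estimate and an illustrative neighbor-averaging propagation rule (the `propagate` scheme is hypothetical, not the paper's exact algorithm):

```python
import numpy as np

def ci_graph(X):
    """Estimate a CI graph from data X (samples x features).
    Edge weights are partial correlations derived from the precision
    matrix Theta: rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj).
    Assumes the sample covariance is invertible (n > p)."""
    cov = np.cov(X, rowvar=False)
    theta = np.linalg.inv(cov)              # precision matrix
    d = np.sqrt(np.diag(theta))
    rho = -theta / np.outer(d, d)           # partial correlations
    np.fill_diagonal(rho, 0.0)              # no self-edges
    return rho

def propagate(rho, labels, steps=10):
    """Illustrative knowledge propagation: repeatedly average each node's
    label over its neighbors, weighted by |partial correlation|."""
    W = np.abs(rho)
    W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # row-normalize
    y = np.asarray(labels, dtype=float).copy()
    for _ in range(steps):
        y = W @ y
    return y
```

This only sketches the structure: the paper's own propagation algorithms, and more robust sparse precision estimators (e.g., graphical lasso), would replace the naive inverse-covariance and averaging steps here.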
Related papers
- Rank and Align: Towards Effective Source-free Graph Domain Adaptation [16.941755478093153]
Graph neural networks (GNNs) have achieved impressive performance in graph domain adaptation.
However, extensive source graphs could be unavailable in real-world scenarios due to privacy and storage concerns.
We introduce a novel GNN-based approach called Rank and Align (RNA), which ranks graph similarities with spectral seriation for robust semantics learning.
arXiv Detail & Related papers (2024-08-22T08:00:50Z) - Graph External Attention Enhanced Transformer [20.44782028691701]
We propose Graph External Attention (GEA) -- a novel attention mechanism that leverages multiple external node/edge key-value units to capture inter-graph correlations implicitly.
On this basis, we design an effective architecture called the Graph External Attention Enhanced Transformer (GEAET).
Experiments on benchmark datasets demonstrate that GEAET achieves state-of-the-art empirical performance.
arXiv Detail & Related papers (2024-05-31T17:50:27Z) - Continuous Product Graph Neural Networks [5.703629317205571]
Multidomain data defined on multiple graphs holds significant potential in practical applications in computer science.
We introduce Continuous Product Graph Neural Networks (CITRUS) that emerge as a natural solution to the TPDEG.
We evaluate CITRUS on well-known traffic and weather forecasting datasets, demonstrating superior performance over existing approaches.
arXiv Detail & Related papers (2024-05-29T08:36:09Z) - Message Intercommunication for Inductive Relation Reasoning [49.731293143079455]
We develop a novel inductive relation reasoning model called MINES.
We introduce a Message Intercommunication mechanism on the Neighbor-Enhanced Subgraph.
Our experiments show that MINES outperforms existing state-of-the-art models.
arXiv Detail & Related papers (2023-05-23T13:51:46Z) - Methods for Recovering Conditional Independence Graphs: A Survey [2.2721854258621064]
Conditional Independence (CI) graphs are used to gain insights about feature relationships.
We list out different methods and study the advances in techniques developed to recover CI graphs.
arXiv Detail & Related papers (2022-11-13T06:11:38Z) - Causally-guided Regularization of Graph Attention Improves Generalizability [69.09877209676266]
We introduce CAR, a general-purpose regularization framework for graph attention networks.
CAR aligns the attention mechanism with the causal effects of active interventions on graph connectivity.
For social media network-sized graphs, a CAR-guided graph rewiring approach could allow us to combine the scalability of graph convolutional methods with the higher performance of graph attention.
arXiv Detail & Related papers (2022-10-20T01:29:10Z) - Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z) - Spectral-Spatial Global Graph Reasoning for Hyperspectral Image Classification [50.899576891296235]
Convolutional neural networks have been widely applied to hyperspectral image classification.
Recent methods attempt to address this issue by performing graph convolutions on spatial topologies.
arXiv Detail & Related papers (2021-06-26T06:24:51Z) - Multi Scale Temporal Graph Networks For Skeleton-based Action Recognition [5.970574258839858]
Graph convolutional networks (GCNs) can effectively capture the features of related nodes and improve the performance of the model.
Existing methods based on GCNs have two problems. First, the consistency of temporal and spatial features is ignored for extracting features node by node and frame by frame.
We propose a novel model called Temporal Graph Networks (TGN) for action recognition.
arXiv Detail & Related papers (2020-12-05T08:08:25Z) - Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs, that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z) - Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.