Heterophily-informed Message Passing
- URL: http://arxiv.org/abs/2504.19785v1
- Date: Mon, 28 Apr 2025 13:28:23 GMT
- Title: Heterophily-informed Message Passing
- Authors: Haishan Wang, Arno Solin, Vikas Garg
- Abstract summary: Graph neural networks (GNNs) are known to be vulnerable to oversmoothing due to their implicit homophily assumption. We mitigate this problem with a novel scheme that regulates the aggregation of messages. Our approach relies solely on learnt embeddings, obviating the need for auxiliary labels.
- Score: 16.73251866177758
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) are known to be vulnerable to oversmoothing due to their implicit homophily assumption. We mitigate this problem with a novel scheme that regulates the aggregation of messages, modulating the type and extent of message passing locally, thereby preserving both the low- and high-frequency components of information. Our approach relies solely on learnt embeddings, obviating the need for auxiliary labels, thus extending the benefits of heterophily-aware embeddings to broader applications, e.g., generative modelling. Our experiments, conducted across various data sets and GNN architectures, demonstrate performance enhancements and reveal heterophily patterns across standard classification benchmarks. Furthermore, application to molecular generation showcases notable performance improvements on chemoinformatics benchmarks.
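The core idea of the abstract, regulating aggregation locally so that both low-frequency (similar-neighbour) and high-frequency (dissimilar-neighbour) signals survive, can be sketched as follows. This is a minimal illustration, not the paper's actual method: the cosine-similarity routing, the threshold `tau`, and the signed two-channel aggregation are all assumptions chosen for clarity.

```python
import numpy as np

def heterophily_aware_aggregate(H, edges, tau=0.5):
    """Sketch: route each message through a low-pass (+) or high-pass (-)
    channel depending on the similarity of the learnt embeddings across the
    edge, so neither the averaged nor the differential signal is smoothed away.
    `tau` is a hypothetical similarity threshold, not from the paper."""
    n, d = H.shape
    out = np.zeros_like(H)
    deg = np.zeros(n)
    # Normalise embeddings once for cosine similarity.
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-8)
    for u, v in edges:
        sim = float(Hn[u] @ Hn[v])          # learnt-embedding similarity
        sign = 1.0 if sim >= tau else -1.0  # low-pass vs. high-pass channel
        out[u] += sign * H[v]
        deg[u] += 1
    return out / np.maximum(deg, 1)[:, None]
```

Note that no auxiliary heterophily labels are needed: the routing decision is derived entirely from the embeddings themselves, which is the property the abstract highlights.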
Related papers
- FedTAD: Topology-aware Data-free Knowledge Distillation for Subgraph Federated Learning [12.834423184614849]
Subgraph federated learning (subgraph-FL) facilitates the collaborative training of graph neural networks (GNNs) across multi-client subgraphs.
Node and topology variation leads to significant differences in the class-wise knowledge reliability of multiple local GNNs.
We propose topology-aware data-free knowledge distillation technology (FedTAD) to enhance reliable knowledge transfer from the local model to the global model.
arXiv Detail & Related papers (2024-04-22T10:19:02Z) - HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z) - Generative adversarial networks for data-scarce spectral applications [0.0]
We report on an application of GANs in the domain of synthetic spectral data generation.
We show that CWGANs can act as a surrogate model with improved performance in the low-data regime.
arXiv Detail & Related papers (2023-07-14T16:27:24Z) - HINormer: Representation Learning On Heterogeneous Information Networks with Graph Transformer [29.217820912610602]
Graph Transformers (GTs) have been proposed, operating in a paradigm that extends message passing to larger coverage, even across the whole graph.
The investigation of GTs on heterogeneous information networks (HINs) is still under-explored.
We propose a novel model named HINormer, which capitalizes on a larger-range aggregation mechanism for node representation learning.
arXiv Detail & Related papers (2023-02-22T12:25:07Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - GSMFlow: Generation Shifts Mitigating Flow for Generalized Zero-Shot Learning [55.79997930181418]
Generalized Zero-Shot Learning aims to recognize images from both the seen and unseen classes by transferring semantic knowledge from seen to unseen classes.
Taking advantage of generative models to hallucinate realistic unseen samples, based on the knowledge learned from the seen classes, is a promising solution.
We propose a novel flow-based generative framework that consists of multiple conditional affine coupling layers for learning unseen data generation.
arXiv Detail & Related papers (2022-07-05T04:04:37Z) - Entity-Conditioned Question Generation for Robust Attention Distribution in Neural Information Retrieval [51.53892300802014]
We show that supervised neural information retrieval models are prone to learning sparse attention patterns over passage tokens.
Using a novel targeted synthetic data generation method, we teach neural IR to attend more uniformly and robustly to all entities in a given passage.
arXiv Detail & Related papers (2022-04-24T22:36:48Z) - Stochastic Aggregation in Graph Neural Networks [9.551282469099887]
Graph neural networks (GNNs) manifest pathologies including over-smoothing and limited discriminating power.
We present a unifying framework for stochastic aggregation (STAG) in GNNs, where noise is (adaptively) injected into the aggregation process from the neighborhood to form node embeddings.
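The noise-injected aggregation described above can be sketched in a few lines. This is an illustrative sketch in the spirit of STAG, not the paper's implementation: the Gaussian noise model, the scale `sigma`, and plain mean-pooling are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_aggregate(H, neighbors, sigma=0.1):
    """Sketch: add Gaussian noise (scale `sigma`, an assumed hyperparameter)
    to each incoming neighbour message before mean-pooling, so the
    aggregation is stochastic and acts as an implicit regularizer."""
    out = np.zeros_like(H)
    for v, nbrs in neighbors.items():
        # Perturb each neighbour's embedding independently, then pool.
        msgs = H[nbrs] + rng.normal(0.0, sigma, size=(len(nbrs), H.shape[1]))
        out[v] = msgs.mean(axis=0)
    return out
```

With `sigma=0` this reduces to ordinary mean aggregation, which makes the stochastic variant easy to A/B against a deterministic baseline.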
arXiv Detail & Related papers (2021-02-25T02:52:03Z) - Understanding and Diagnosing Vulnerability under Adversarial Attacks [62.661498155101654]
Deep Neural Networks (DNNs) are known to be vulnerable to adversarial attacks.
We propose a novel interpretability method, InterpretGAN, to generate explanations for features used for classification in latent variables.
We also design the first diagnostic method to quantify the vulnerability contributed by each layer.
arXiv Detail & Related papers (2020-07-17T01:56:28Z) - Adaptive Universal Generalized PageRank Graph Neural Network [36.850433364139924]
Graph neural networks (GNNs) are designed to exploit both node features and graph topology as sources of evidence, but they do not optimally trade off their utility.
We introduce a new Generalized PageRank (GPR) GNN architecture that adaptively learns the GPR weights.
GPR-GNN offers significant performance improvement compared to existing techniques on both synthetic and benchmark data.
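The Generalized PageRank propagation underlying GPR-GNN can be sketched as a weighted sum of successive propagation steps, Z = Σ_k γ_k Â^k H. This is a minimal illustration: in GPR-GNN the weights `gammas` are learned jointly with the feature extractor, whereas here they are fixed for demonstration.

```python
import numpy as np

def gpr_propagate(H, A_hat, gammas):
    """Sketch of Generalized PageRank propagation: accumulate
    Z = sum_k gammas[k] * A_hat^k @ H.  Negative learned weights let the
    model act as a high-pass filter, which helps on heterophilic graphs."""
    Z = gammas[0] * H          # k = 0 term: the raw hidden features
    Hk = H
    for g in gammas[1:]:
        Hk = A_hat @ Hk        # one more propagation step: A_hat^k @ H
        Z = Z + g * Hk
    return Z
```

Because the weights can become negative during training, the filter shape adapts to the graph's homophily level, which is what "adaptively learns the GPR weights" refers to.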
arXiv Detail & Related papers (2020-06-14T19:27:39Z) - A Systematic Approach to Featurization for Cancer Drug Sensitivity Predictions with Deep Learning [49.86828302591469]
We train >35,000 neural network models, sweeping over common featurization techniques.
We found RNA-seq data to be highly redundant and informative, even with subsets larger than 128 features.
arXiv Detail & Related papers (2020-04-30T20:42:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.