Hypergraph Attacks via Injecting Homogeneous Nodes into Elite Hyperedges
- URL: http://arxiv.org/abs/2412.18365v1
- Date: Tue, 24 Dec 2024 11:48:41 GMT
- Title: Hypergraph Attacks via Injecting Homogeneous Nodes into Elite Hyperedges
- Authors: Meixia He, Peican Zhu, Keke Tang, Yangming Guo,
- Abstract summary: Hypergraph Neural Networks (HGNNs) are vulnerable to adversarial attacks.
We present a novel framework, i.e., Hypergraph Attacks via Injecting Homogeneous Nodes into Elite Hyperedges (IE-Attack).
- Score: 1.089691789591201
- Abstract: Recent studies have shown that Hypergraph Neural Networks (HGNNs) are vulnerable to adversarial attacks. Existing approaches focus on hypergraph modification attacks guided by gradients, overlooking node spanning in the hypergraph and the group identity of hyperedges, thereby resulting in limited attack performance and detectable attacks. In this manuscript, we present a novel framework, i.e., Hypergraph Attacks via Injecting Homogeneous Nodes into Elite Hyperedges (IE-Attack), to tackle these challenges. Initially, utilizing the node spanning in the hypergraph, we propose the elite hyperedges sampler to identify hyperedges to be injected. Subsequently, a node generator utilizing Kernel Density Estimation (KDE) is proposed to generate the homogeneous node with the group identity of hyperedges. Finally, by injecting the homogeneous node into elite hyperedges, IE-Attack improves the attack performance and enhances the imperceptibility of attacks. Extensive experiments are conducted on five authentic datasets to validate the effectiveness of IE-Attack and its superiority over state-of-the-art methods.
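The abstract outlines a three-step pipeline: sample elite hyperedges, generate a homogeneous node via KDE over the member nodes' features, and inject that node. The following is a minimal sketch of this flow on a toy incidence matrix, not the authors' implementation; the span-based elite score, the single-hyperedge injection, and all variable names are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Toy hypergraph: H[i, e] = 1 if node i belongs to hyperedge e.
rng = np.random.default_rng(0)
num_nodes, num_edges, dim = 30, 8, 4
H = (rng.random((num_nodes, num_edges)) < 0.2).astype(int)
X = rng.normal(size=(num_nodes, dim))            # node features

# Step 1 (assumed scoring): rank hyperedges by how widely their member
# nodes span the rest of the hypergraph, i.e. how many hyperedges the
# members participate in overall.
node_degree = H.sum(axis=1)
span_score = np.array([node_degree[H[:, e] == 1].sum() for e in range(num_edges)])
elite = np.argsort(-span_score)[:2]              # top-2 "elite" hyperedges

# Steps 2-3: fit a KDE on the member nodes' features, sample one
# "homogeneous" node per elite hyperedge, and inject it.
for e in elite:
    members = np.flatnonzero(H[:, e])
    if len(members) <= dim:                      # need enough samples for a stable KDE
        continue
    kde = gaussian_kde(X[members].T)             # gaussian_kde expects (dim, n)
    new_feat = kde.resample(1).T                 # (1, dim) feature vector
    X = np.vstack([X, new_feat])                 # add the injected node's features
    new_row = np.zeros((1, num_edges), dtype=int)
    new_row[0, e] = 1                            # member of this hyperedge only
    H = np.vstack([H, new_row])

print(H.shape, X.shape)                          # hypergraph after injection
```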
Related papers
- HyperSMOTE: A Hypergraph-based Oversampling Approach for Imbalanced Node Classifications [2.172034690069702]
We propose HyperSMOTE as a solution to alleviate the class imbalance issue in hypergraph learning.
We synthesize new nodes based on samples from minority classes and their neighbors.
To address the problem of integrating the new node into the hypergraph, we train a decoder based on the original hypergraph incidence matrix.
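As a rough illustration of the HyperSMOTE idea, the sketch below interpolates between a minority-class node and a minority neighbor that shares a hyperedge with it, SMOTE-style; the decoder trained on the incidence matrix is replaced here by a simple stand-in that copies the seed node's hyperedge memberships, and all names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: node features X, labels y, incidence matrix H[i, e].
X = rng.normal(size=(20, 4))
y = np.array([0] * 16 + [1] * 4)                 # class 1 is the minority
H = (rng.random((20, 6)) < 0.3).astype(int)
minority = np.flatnonzero(y == 1)

def synthesize(seed: int) -> np.ndarray:
    """SMOTE-style interpolation toward a minority neighbor sharing a hyperedge."""
    shares_edge = (H @ H[seed]) > 0              # nodes co-occurring with the seed
    candidates = [j for j in minority if j != seed and shares_edge[j]]
    other = rng.choice(candidates) if candidates else seed
    lam = rng.random()
    return X[seed] + lam * (X[other] - X[seed])

seed = rng.choice(minority)
x_new = synthesize(seed)

# HyperSMOTE places the new node with a decoder trained on the incidence
# matrix; as a stand-in, the synthetic node inherits its seed's memberships.
X = np.vstack([X, x_new[None, :]])
y = np.append(y, 1)
H = np.vstack([H, H[seed][None, :]])
print(X.shape, y.shape, H.shape)
```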
arXiv Detail & Related papers (2024-09-09T08:01:28Z) - HYGENE: A Diffusion-based Hypergraph Generation Method [6.997955138726617]
We introduce a diffusion-based Hypergraph Generation (HYGENE) method that addresses the challenges of hypergraph generation through a progressive local expansion approach.
Experiments demonstrate the effectiveness of HYGENE, showing its ability to closely mimic a variety of properties of hypergraphs.
arXiv Detail & Related papers (2024-08-29T11:45:01Z) - HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray-box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z) - Hypergraph Transformer for Semi-Supervised Classification [50.92027313775934]
We propose a novel hypergraph learning framework, HyperGraph Transformer (HyperGT).
HyperGT uses a Transformer-based neural network architecture to effectively consider global correlations among all nodes and hyperedges.
It achieves comprehensive hypergraph representation learning by effectively incorporating global interactions while preserving local connectivity patterns.
arXiv Detail & Related papers (2023-12-18T17:50:52Z) - Enhancing Hyperedge Prediction with Context-Aware Self-Supervised Learning [57.35554450622037]
We propose a novel hyperedge prediction framework (CASH).
CASH employs (1) context-aware node aggregation to capture complex relations among the nodes in each hyperedge, addressing (C1), and (2) self-supervised contrastive learning in the context of hyperedge prediction to enhance hypergraph representations, addressing (C2).
Experiments on six real-world hypergraphs reveal that CASH consistently outperforms all competing methods in terms of the accuracy in hyperedge prediction.
arXiv Detail & Related papers (2023-09-11T20:06:00Z) - HyperAttack: Multi-Gradient-Guided White-box Adversarial Structure Attack of Hypergraph Neural Networks [10.937499142803512]
Hypergraph Neural Networks (HGNNs) have shown superior performance in various deep learning tasks.
While adversarial attacks on Graph Neural Networks (GNNs) have been well studied, there are few studies on adversarial attacks against HGNNs.
We introduce HyperAttack, the first white-box adversarial attack framework against hypergraph neural networks.
arXiv Detail & Related papers (2023-02-24T02:15:42Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of the graph data and the model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - HyperEF: Spectral Hypergraph Coarsening by Effective-Resistance Clustering [7.6146285961466]
This paper introduces a scalable algorithmic framework (HyperEF) for spectral coarsening (decomposition) of large-scale hypergraphs.
Motivated by the latest theoretical framework for low-resistance-diameter decomposition of simple graphs, HyperEF aims at decomposing large hypergraphs into multiple node clusters.
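To make the effective-resistance intuition concrete, the sketch below builds a clique expansion of a toy hypergraph, computes pairwise effective resistances from the Laplacian pseudoinverse, and greedily merges low-resistance neighboring nodes into clusters; the clique expansion, the fixed threshold, and the union-find merging are illustrative stand-ins, not HyperEF's scalable algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy hypergraph -> clique expansion -> effective resistances -> clusters.
num_nodes, num_edges = 12, 5
H = (rng.random((num_nodes, num_edges)) < 0.35).astype(float)

A = H @ H.T                                      # clique expansion (weighted)
np.fill_diagonal(A, 0.0)
L = np.diag(A.sum(axis=1)) - A                   # graph Laplacian
L_pinv = np.linalg.pinv(L)                       # Moore-Penrose pseudoinverse

def effective_resistance(u: int, v: int) -> float:
    e = np.zeros(num_nodes)
    e[u], e[v] = 1.0, -1.0
    return float(e @ L_pinv @ e)

# Greedy low-resistance clustering via union-find: merge adjacent nodes
# whose effective resistance falls below a (hand-picked) threshold.
parent = list(range(num_nodes))
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

threshold = 0.3
for u in range(num_nodes):
    for v in range(u + 1, num_nodes):
        if A[u, v] > 0 and effective_resistance(u, v) < threshold:
            parent[find(u)] = find(v)

clusters = {}
for i in range(num_nodes):
    clusters.setdefault(find(i), []).append(i)
print(list(clusters.values()))                   # node clusters (coarsened groups)
```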
arXiv Detail & Related papers (2022-10-26T16:01:24Z) - Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we focus on camouflaging node injection attacks, making injected nodes appear normal and imperceptible to defense/detection methods.
arXiv Detail & Related papers (2022-08-03T02:48:23Z) - AN-GCN: An Anonymous Graph Convolutional Network Defense Against Edge-Perturbing Attack [53.06334363586119]
Recent studies have revealed the vulnerability of graph convolutional networks (GCNs) to edge-perturbing attacks.
We first generalize the formulation of edge-perturbing attacks and strictly prove the vulnerability of GCNs to such attacks in node classification tasks.
Following this, an anonymous graph convolutional network, named AN-GCN, is proposed to counter edge-perturbing attacks.
arXiv Detail & Related papers (2020-05-06T08:15:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.