Hierarchical and Incremental Structural Entropy Minimization for
Unsupervised Social Event Detection
- URL: http://arxiv.org/abs/2312.11891v1
- Date: Tue, 19 Dec 2023 06:28:32 GMT
- Title: Hierarchical and Incremental Structural Entropy Minimization for
Unsupervised Social Event Detection
- Authors: Yuwei Cao, Hao Peng, Zhengtao Yu, Philip S. Yu
- Abstract summary: Graph neural network (GNN)-based methods enable a fusion of natural language semantics and the complex social network structural information.
In this work, we address social event detection via graph structural entropy (SE) minimization.
While keeping the merits of the GNN-based methods, the proposed framework, HISEvent, constructs more informative message graphs.
- Score: 61.87480191351659
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a trending approach for social event detection, graph neural network
(GNN)-based methods enable a fusion of natural language semantics and the
complex social network structural information, thus showing SOTA performance.
However, GNN-based methods can miss useful message correlations. Moreover, they
require manual labeling for training and predetermining the number of events
for prediction. In this work, we address social event detection via graph
structural entropy (SE) minimization. While keeping the merits of the GNN-based
methods, the proposed framework, HISEvent, constructs more informative message
graphs, is unsupervised, and does not require the number of events given a
priori. Specifically, we incrementally explore the graph neighborhoods using
1-dimensional (1D) SE minimization to supplement the existing message graph
with edges between semantically related messages. We then detect events from
the message graph by hierarchically minimizing 2-dimensional (2D) SE. Our
proposed 1D and 2D SE minimization algorithms are customized for social event
detection and effectively tackle the efficiency problem of the existing SE
minimization algorithms. Extensive experiments show that HISEvent consistently
outperforms GNN-based methods and achieves the new SOTA for social event
detection under both closed- and open-set settings while being efficient and
robust.
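The 1D SE idea above can be sketched in a few lines: the 1D structural entropy of a graph depends only on its degree distribution, H¹(G) = −Σᵢ (dᵢ/vol) log₂(dᵢ/vol) with vol the sum of degrees, and neighborhood size can be chosen by minimizing it over candidate k-NN graphs. The sketch below is a minimal illustration under assumed helper names (`one_dim_se`, `knn_degrees`, `best_k`), not the authors' implementation; in particular, HISEvent explores neighborhoods incrementally, whereas this sketch simply takes the argmin over a fixed range of k.

```python
import math

def one_dim_se(degrees):
    # 1D structural entropy: H1(G) = -sum_i (d_i / vol) * log2(d_i / vol),
    # where vol is the sum of all degrees (2 * number of edges).
    vol = sum(degrees)
    return -sum(d / vol * math.log2(d / vol) for d in degrees if d > 0)

def knn_degrees(sim, k):
    # Degrees of the undirected k-NN graph built from a similarity matrix:
    # each node links to its k most similar other nodes; edges are symmetrized.
    n = len(sim)
    edges = set()
    for i in range(n):
        nbrs = sorted((j for j in range(n) if j != i),
                      key=lambda j: sim[i][j], reverse=True)[:k]
        for j in nbrs:
            edges.add((min(i, j), max(i, j)))
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return deg

def best_k(sim, k_max):
    # Pick the neighborhood size whose k-NN graph has minimal 1D SE.
    return min(range(1, k_max + 1),
               key=lambda k: one_dim_se(knn_degrees(sim, k)))
```

As a sanity check, any graph whose nodes all share the same degree (e.g. a 4-node cycle, degrees [2, 2, 2, 2]) has H¹ = log₂(n), here 2.0; lower 1D SE indicates a less "random", more structured edge set, which is why minimizing it selects informative neighborhoods.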
Related papers
- Multitask Active Learning for Graph Anomaly Detection [48.690169078479116]
We propose a novel MultItask acTIve Graph Anomaly deTEction framework, namely MITIGATE.
By coupling node classification tasks, MITIGATE obtains the capability to detect out-of-distribution nodes without known anomalies.
Empirical studies on four datasets demonstrate that MITIGATE significantly outperforms the state-of-the-art methods for anomaly detection.
arXiv Detail & Related papers (2024-01-24T03:43:45Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- ABC: Aggregation before Communication, a Communication Reduction Framework for Distributed Graph Neural Network Training and Effective Partition [0.0]
Graph Neural Networks (GNNs) are neural models tailored for graph-structured data and have shown superior performance in learning representations for such data.
In this paper, we study the communication complexity during distributed GNNs training.
We show that the new partition paradigm is particularly well-suited to dynamic graphs, where controlling edge placement is infeasible because the graph-changing process is unknown.
arXiv Detail & Related papers (2022-12-11T04:54:01Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Distributed Graph Neural Network Training with Periodic Historical Embedding Synchronization [9.503080586294406]
Graph Neural Networks (GNNs) are prevalent in various applications such as social networks, recommender systems, and knowledge graphs.
Traditional sampling-based methods accelerate GNN training by dropping edges and nodes, which impairs graph integrity and model performance.
This paper proposes DIstributed Graph Embedding SynchronizaTion (DIGEST), a novel distributed GNN training framework.
arXiv Detail & Related papers (2022-05-31T18:44:53Z)
- Evidential Temporal-aware Graph-based Social Event Detection via Dempster-Shafer Theory [76.4580340399321]
We propose ETGNN, a novel Evidential Temporal-aware Graph Neural Network.
We construct view-specific graphs whose nodes are the texts and edges are determined by several types of shared elements respectively.
Considering the view-specific uncertainty, the representations of all views are converted into mass functions through evidential deep learning (EDL) neural networks.
arXiv Detail & Related papers (2022-05-24T16:22:40Z)
- AEGNN: Asynchronous Event-based Graph Neural Networks [54.528926463775946]
Event-based Graph Neural Networks generalize standard GNNs to process events as "evolving" temporal graphs.
AEGNNs are easily trained on synchronous inputs and can be converted to efficient, "asynchronous" networks at test time.
arXiv Detail & Related papers (2022-03-31T16:21:12Z)
- Very Deep Graph Neural Networks Via Noise Regularisation [57.450532911995516]
Graph Neural Networks (GNNs) perform learned message passing over an input graph.
We train a deep GNN with up to 100 message passing steps and achieve several state-of-the-art results.
arXiv Detail & Related papers (2021-06-15T08:50:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.