GNN-Geo: A Graph Neural Network-based Fine-grained IP geolocation
Framework
- URL: http://arxiv.org/abs/2112.10767v7
- Date: Sat, 15 Apr 2023 01:07:22 GMT
- Authors: Shichang Ding, Xiangyang Luo, Jinwei Wang, Xiaoming Fu
- Abstract summary: Rule-based fine-grained IP geolocation methods are hard to generalize in computer networks.
We propose a Graph Neural Network (GNN)-based IP geolocation framework named GNN-Geo.
The proposed GNN-Geo clearly outperforms the state-of-the-art rule-based and learning-based baselines.
- Score: 26.918369615549803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rule-based fine-grained IP geolocation methods are hard to generalize in
computer networks that do not follow their hypothetical rules. Recently, deep
learning methods, such as the multi-layer perceptron (MLP), have been tried to
increase generalization capability. However, MLP is not well suited to
graph-structured data like networks: it treats IP addresses as isolated
instances and ignores the connection information, which limits geolocation
accuracy. In this work, we investigate how to increase generalization
capability with an emerging graph deep learning method -- the Graph Neural
Network (GNN). First, IP geolocation is re-formulated as an attributed graph
node regression problem. Then, we propose a GNN-based IP geolocation framework
named GNN-Geo. GNN-Geo consists of a preprocessor, an encoder, message passing
(MP) layers and a decoder. The preprocessor and encoder transform measurement
data into the initial node embeddings. The MP layers refine the initial node
embeddings by modeling the connection information. The decoder maps the refined
embeddings to nodes' locations and relieves the convergence problem by
incorporating prior knowledge. Experiments in 8 real-world IPv4/IPv6 networks
in North America, Europe and Asia show that the proposed GNN-Geo clearly
outperforms the state-of-the-art rule-based and learning-based baselines. This
work verifies the great potential of GNN for fine-grained IP geolocation.
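The pipeline sketched in the abstract (initial node embeddings refined by message passing, then decoded to locations using prior knowledge) can be illustrated with a minimal plain-Python sketch. The toy graph, embedding values, mean-neighbor aggregation rule, and anchor coordinates below are illustrative assumptions, not the paper's actual architecture or data.

```python
# Minimal sketch of GNN-style message passing for node regression.
# Graph, embeddings, and aggregation rule are illustrative assumptions.

def message_pass(embeddings, adjacency, rounds=2, alpha=0.5):
    """Refine node embeddings by mixing each node with its neighbors' mean."""
    emb = {n: list(v) for n, v in embeddings.items()}
    for _ in range(rounds):
        new_emb = {}
        for node, vec in emb.items():
            nbrs = adjacency.get(node, [])
            if not nbrs:
                new_emb[node] = vec[:]
                continue
            mean = [sum(emb[m][i] for m in nbrs) / len(nbrs)
                    for i in range(len(vec))]
            # Keep a share of the node's own embedding (a residual-style mix).
            new_emb[node] = [(1 - alpha) * vec[i] + alpha * mean[i]
                             for i in range(len(vec))]
        emb = new_emb
    return emb

def decode(vec, anchor=(40.0, -74.0)):
    """Map a refined embedding to (lat, lon) as an offset from a prior anchor,
    mirroring the idea of easing convergence with prior knowledge."""
    return (anchor[0] + vec[0], anchor[1] + vec[1])

# Toy network: three IP nodes with assumed 2-d measurement-derived embeddings.
emb = {"ip_a": [0.1, 0.2], "ip_b": [0.3, 0.0], "ip_c": [0.2, 0.4]}
adj = {"ip_a": ["ip_b"], "ip_b": ["ip_a", "ip_c"], "ip_c": ["ip_b"]}

refined = message_pass(emb, adj)
print(decode(refined["ip_a"]))  # approximately (40.2125, -73.875)
```

Message passing pulls each node's embedding toward those of its neighbors, which is how connection information (ignored by an MLP) enters the prediction.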
Related papers
- Cybercrime Prediction via Geographically Weighted Learning [0.24578723416255752]
We propose a graph neural network model that accounts for geographical latitude and longitude.
Using a synthetically generated dataset, we apply the algorithm for a 4-class classification problem in cybersecurity.
We demonstrate that it has higher accuracy than standard neural networks and convolutional neural networks.
arXiv Detail & Related papers (2024-11-07T11:46:48Z) - Learning State-Augmented Policies for Information Routing in
Communication Networks [92.59624401684083]
We develop a novel State Augmentation (SA) strategy to maximize the aggregate information at source nodes using graph neural network (GNN) architectures.
We leverage an unsupervised learning procedure to convert the output of the GNN architecture to optimal information routing strategies.
In the experiments, we perform the evaluation on real-time network topologies to validate our algorithms.
arXiv Detail & Related papers (2023-09-30T04:34:25Z) - Geodesic Graph Neural Network for Efficient Graph Representation
Learning [34.047527874184134]
We propose an efficient GNN framework called Geodesic GNN (GDGNN).
It injects conditional relationships between nodes into the model without labeling.
Conditioned on the geodesic representations, GDGNN is able to generate node, link, and graph representations that carry much richer structural information than plain GNNs.
arXiv Detail & Related papers (2022-10-06T02:02:35Z) - Deep Graph-level Anomaly Detection by Glocal Knowledge Distillation [61.39364567221311]
Graph-level anomaly detection (GAD) describes the problem of detecting graphs that are abnormal in their structure and/or the features of their nodes.
One of the challenges in GAD is to devise graph representations that enable the detection of both locally- and globally-anomalous graphs.
We introduce a novel deep anomaly detection approach for GAD that learns rich global and local normal pattern information by joint random distillation of graph and node representations.
arXiv Detail & Related papers (2021-12-19T05:04:53Z) - Graph Neural Networks with Learnable Structural and Positional
Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
arXiv Detail & Related papers (2021-10-15T05:59:15Z) - Rethinking Graph Neural Network Search from Message-passing [120.62373472087651]
This paper proposes Graph Neural Architecture Search (GNAS) with a novel search space.
We design Graph Neural Architecture Paradigm (GAP) with tree-topology computation procedure and two types of fine-grained atomic operations.
Experiments show that our GNAS can search for better GNNs with multiple message-passing mechanisms and optimal message-passing depth.
arXiv Detail & Related papers (2021-03-26T06:10:41Z) - Enhance Information Propagation for Graph Neural Network by
Heterogeneous Aggregations [7.3136594018091134]
Graph neural networks extend the success of deep learning to graph data.
We propose to enhance information propagation among GNN layers by combining heterogeneous aggregations.
We empirically validate the effectiveness of HAG-Net on a number of graph classification benchmarks.
arXiv Detail & Related papers (2021-02-08T08:57:56Z) - Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph
Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
arXiv Detail & Related papers (2020-03-08T13:02:15Z) - Generalization and Representational Limits of Graph Neural Networks [46.20253808402385]
We prove that several important graph properties cannot be computed by graph neural networks (GNNs) that rely entirely on local information.
We provide the first data dependent generalization bounds for message passing GNNs.
Our bounds are much tighter than existing VC-dimension based guarantees for GNNs, and are comparable to Rademacher bounds for recurrent neural networks.
arXiv Detail & Related papers (2020-02-14T18:10:14Z) - Geom-GCN: Geometric Graph Convolutional Networks [15.783571061254847]
We propose a novel geometric aggregation scheme for graph neural networks to overcome two weaknesses of message-passing aggregation.
The proposed aggregation scheme is permutation-invariant and consists of three modules, node embedding, structural neighborhood, and bi-level aggregation.
We also present an implementation of the scheme in graph convolutional networks, termed Geom-GCN, to perform transductive learning on graphs.
arXiv Detail & Related papers (2020-02-13T00:03:09Z) - EdgeNets:Edge Varying Graph Neural Networks [179.99395949679547]
This paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet.
An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors.
This is a general linear and local operation that a node can perform and encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs).
arXiv Detail & Related papers (2020-01-21T15:51:17Z)
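The EdgeNet idea in the entry above, where each node weighs the information of each neighbor with its own parameter, can be sketched as a single edge-varying aggregation step. The tiny graph and per-edge weights below are illustrative assumptions.

```python
# Sketch of an edge-varying aggregation step (EdgeNet-style): every directed
# edge carries its own parameter, so each node weighs each neighbor
# differently. Graph and weights are illustrative assumptions.

def edge_varying_step(features, edge_weights):
    """One aggregation: out[dst] = sum over edges (dst, src) of w * x[src]."""
    out = {n: 0.0 for n in features}
    for (dst, src), w in edge_weights.items():
        out[dst] += w * features[src]
    return out

feats = {"a": 1.0, "b": 2.0, "c": 3.0}
# Per-edge parameters, including self-loops; keyed as (dst, src): weight.
weights = {("a", "a"): 0.5, ("a", "b"): 0.25,
           ("b", "b"): 0.5, ("b", "a"): 0.1, ("b", "c"): 0.2,
           ("c", "c"): 0.5, ("c", "b"): 0.3}

print(edge_varying_step(feats, weights))
```

A standard graph convolution is recovered when every edge shares one scalar weight per layer, which matches the summary's claim that GCNNs (and attention-weighted GATs) fall under the same formulation.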
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.