Prefetching Cache Optimization Using Graph Neural Networks: A Modular Framework and Conceptual Analysis
- URL: http://arxiv.org/abs/2510.21865v1
- Date: Thu, 23 Oct 2025 10:35:35 GMT
- Title: Prefetching Cache Optimization Using Graph Neural Networks: A Modular Framework and Conceptual Analysis
- Authors: F. I. Qowy
- Abstract summary: This paper introduces a modular framework leveraging Graph Neural Networks (GNNs) to model and predict access patterns within graph-structured data. We provide a detailed conceptual analysis showing how GNN-based approaches can outperform conventional methods by learning intricate dependencies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Caching and prefetching techniques are fundamental to modern computing, serving to bridge the growing performance gap between processors and memory. Traditional prefetching strategies are often limited by their reliance on predefined heuristics or simplified statistical models, which fail to capture the complex, non-linear dependencies in modern data access patterns. This paper introduces a modular framework leveraging Graph Neural Networks (GNNs) to model and predict access patterns within graph-structured data, focusing on web navigation and hierarchical file systems. The toolchain consists of: a route mapper for extracting structural information, a graph constructor for creating graph representations, a walk session generator for simulating user behaviors, and a gnn prefetch module for training and inference. We provide a detailed conceptual analysis showing how GNN-based approaches can outperform conventional methods by learning intricate dependencies. This work offers both theoretical foundations and a practical, replicable pipeline for future research in graph-driven systems optimization.
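The four-stage toolchain described in the abstract can be outlined in code. The sketch below is a minimal, dependency-free illustration, not the paper's implementation: the function names mirror the abstract's module names, and a neighbor-frequency predictor stands in for the trained GNN in the final stage.

```python
import random
from collections import defaultdict, Counter

# Toy stand-ins for the four pipeline stages named in the abstract.
# The real framework would train a GNN; here a simple frequency-based
# predictor takes its place so the sketch runs with the stdlib alone.

def route_mapper(access_log):
    """Extract structural information: (src, dst) transitions from a log."""
    return list(zip(access_log, access_log[1:]))

def graph_constructor(edges):
    """Build a weighted adjacency map from observed transitions."""
    graph = defaultdict(Counter)
    for src, dst in edges:
        graph[src][dst] += 1
    return graph

def walk_session_generator(graph, start, length, seed=0):
    """Simulate a user session as a weighted random walk."""
    rng = random.Random(seed)
    walk = [start]
    for _ in range(length - 1):
        nbrs = graph.get(walk[-1])
        if not nbrs:
            break
        nodes, weights = zip(*nbrs.items())
        walk.append(rng.choices(nodes, weights=weights)[0])
    return walk

def prefetch(graph, current, k=2):
    """Predict the k most likely next accesses (GNN stand-in)."""
    return [n for n, _ in graph[current].most_common(k)]

log = ["/", "/docs", "/docs/api", "/docs", "/docs/api", "/download"]
graph = graph_constructor(route_mapper(log))
print(prefetch(graph, "/docs"))  # → ['/docs/api']
```

The stages compose left to right, so any one of them (most plausibly the predictor) can be swapped out without touching the others, which is the modularity the abstract emphasizes.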
Related papers
- Enhancing Explainability of Graph Neural Networks Through Conceptual and Structural Analyses and Their Extensions [0.9645196221785692]
Graph Neural Networks (GNNs) have become a powerful tool for modeling and analyzing data with graph structures. Current Explainable AI (XAI) methods struggle to untangle the intricate relationships and interactions within graphs. This thesis seeks to develop a novel XAI framework tailored for graph-based machine learning.
arXiv Detail & Related papers (2025-12-09T08:13:31Z)
- GILT: An LLM-Free, Tuning-Free Graph Foundational Model for In-Context Learning [50.40400074353263]
Graph Neural Networks (GNNs) are powerful tools for processing relational data but often struggle to generalize to unseen graphs. We introduce the Graph In-context Learning Transformer (GILT), a framework built on an LLM-free and tuning-free architecture.
arXiv Detail & Related papers (2025-10-06T08:09:15Z)
- Resource-Aware Neural Network Pruning Using Graph-based Reinforcement Learning [0.8890833546984916]
This paper presents a novel approach to neural network pruning by integrating a graph-based observation space into an AutoML framework. Our framework transforms the pruning process by introducing a graph representation of the target neural network. For the action space, we transition from continuous pruning ratios to fine-grained binary action spaces.
arXiv Detail & Related papers (2025-09-04T15:05:05Z)
- A Graph Laplacian Eigenvector-based Pre-training Method for Graph Neural Networks [7.359145401513628]
Structure-based pre-training methods are under-explored yet crucial for downstream applications which rely on underlying graph structure. We propose the Laplacian Eigenvector Learning Module (LELM), a novel pre-training module for graph neural networks (GNNs) based on predicting the low-frequency eigenvectors of the graph Laplacian. LELM introduces a novel architecture that overcomes oversmoothing, allowing the GNN model to learn long-range interdependencies.
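The pre-training target here has a simple concrete form. As a minimal illustration (not the paper's LELM code): for the graph Laplacian L = D - A, the constant vector is always the lowest-frequency eigenvector, with eigenvalue 0, which can be checked directly by a matrix-vector product.

```python
# Pure-stdlib sketch: build L = D - A for an undirected graph and verify
# that the constant vector is a zero-eigenvalue (lowest-frequency)
# eigenvector, i.e. L @ 1 = 0.

def laplacian(edges, n):
    """Dense graph Laplacian L = D - A."""
    L = [[0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1
        L[j][j] += 1
        L[i][j] -= 1
        L[j][i] -= 1
    return L

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Path graph 0-1-2-3
L = laplacian([(0, 1), (1, 2), (2, 3)], 4)
print(matvec(L, [1.0] * 4))  # → [0.0, 0.0, 0.0, 0.0]
```

Higher eigenvectors oscillate across edges, which is why predicting the low-frequency ones amounts to learning the graph's coarse structure.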
arXiv Detail & Related papers (2025-09-02T20:07:20Z)
- Learning to Model Graph Structural Information on MLPs via Graph Structure Self-Contrasting [50.181824673039436]
We propose a Graph Structure Self-Contrasting (GSSC) framework that learns graph structural information without message passing.
The proposed framework is based purely on Multi-Layer Perceptrons (MLPs), where the structural information is only implicitly incorporated as prior knowledge.
It first applies structural sparsification to remove potentially uninformative or noisy edges in the neighborhood, and then performs structural self-contrasting in the sparsified neighborhood to learn robust node representations.
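Structural sparsification of the kind described can be approximated with a simple heuristic. The sketch below is a generic illustration, not the GSSC paper's actual rule: it drops an edge when its endpoints share no common neighbor, a rough proxy for "potentially uninformative or noisy" edges.

```python
from collections import defaultdict

# Illustrative sparsification heuristic (assumed, not from the paper):
# keep an edge (u, v) only if u and v have at least one common neighbor.

def sparsify(edges):
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    return [(u, v) for u, v in edges
            if (nbrs[u] & nbrs[v]) - {u, v}]

# Triangle 0-1-2 plus a pendant edge 2-3: the pendant edge is dropped.
print(sparsify([(0, 1), (1, 2), (0, 2), (2, 3)]))  # → [(0, 1), (1, 2), (0, 2)]
```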
arXiv Detail & Related papers (2024-09-09T12:56:02Z)
- Self-Supervised Contrastive Graph Clustering Network via Structural Information Fusion [15.293684479404092]
We propose a novel deep graph clustering method called CGCN.
Our approach introduces contrastive signals and deep structural information into the pre-training process.
Our method has been experimentally validated on multiple real-world graph datasets.
arXiv Detail & Related papers (2024-08-08T09:49:26Z)
- GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks [72.01829954658889]
This paper introduces a mathematical definition of this novel problem setting of universal, generalizable graph structure learning.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z)
- EGRC-Net: Embedding-induced Graph Refinement Clustering Network [66.44293190793294]
We propose a novel graph clustering network called the Embedding-Induced Graph Refinement Clustering Network (EGRC-Net).
EGRC-Net effectively utilizes the learned embedding to adaptively refine the initial graph and enhance the clustering performance.
Our proposed methods consistently outperform several state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-19T09:08:43Z)
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
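A graph kernel in the sense described above is any function computing an inner product between graphs. The toy example below is far simpler than the kernels the paper plugs in: it takes the dot product of two graphs' degree histograms, which is a valid kernel because it is an explicit inner product in feature space.

```python
from collections import Counter

# Toy degree-histogram kernel (illustrative assumption, not the paper's):
# map each graph to a histogram of node degrees, then take a dot product.

def degree_histogram(edges, max_deg=10):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    hist = [0] * (max_deg + 1)
    for d in deg.values():
        hist[d] += 1
    return hist

def kernel(e1, e2):
    return sum(a * b for a, b in
               zip(degree_histogram(e1), degree_histogram(e2)))

triangle = [(0, 1), (1, 2), (0, 2)]  # three nodes, all of degree 2
path = [(0, 1), (1, 2)]              # degrees 1, 2, 1
print(kernel(triangle, triangle))    # → 9
```

Because the feature map is explicit, no embedding of the input graph needs to be computed at prediction time, matching the structural-model point made above.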
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Ring Reservoir Neural Networks for Graphs [15.07984894938396]
Reservoir Computing models can play an important role in developing effective graph embeddings.
Our core proposal is based on shaping the organization of the hidden neurons to follow a ring topology.
Experimental results on graph classification tasks indicate that ring-reservoir architectures enable particularly effective network configurations.
arXiv Detail & Related papers (2020-05-11T17:51:40Z)
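The ring topology proposed above is easy to picture in code. The sketch below is an illustrative echo-state-style toy, not the paper's model: each hidden unit is wired only to its ring predecessor (unit i receives unit (i - 1) mod n), with fixed, untrained weights, as in reservoir computing generally.

```python
import math

# Tiny ring-topology reservoir sketch (all weights are assumptions):
# hidden units form a ring, each reading its predecessor's state plus
# the current input; the final state serves as a fixed-size embedding.

def ring_reservoir(inputs, n=8, w_ring=0.9, w_in=0.5):
    state = [0.0] * n
    for x in inputs:
        state = [math.tanh(w_ring * state[(i - 1) % n] + w_in * x)
                 for i in range(n)]
    return state

state = ring_reservoir([1.0, -1.0, 0.5])
print(len(state))  # fixed-size embedding of the input sequence
```

The appeal of the ring is its sparsity: each unit has exactly one recurrent connection, which makes the reservoir cheap while still propagating information around the whole hidden layer over time.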
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.