RLGNet: Repeating-Local-Global History Network for Temporal Knowledge Graph Reasoning
- URL: http://arxiv.org/abs/2404.00586v1
- Date: Sun, 31 Mar 2024 07:19:29 GMT
- Title: RLGNet: Repeating-Local-Global History Network for Temporal Knowledge Graph Reasoning
- Authors: Ao Lv, Yongzhong Huang, Guige Ouyang, Yue Chen, Haoran Xie
- Abstract summary: Temporal Knowledge Graph (TKG) reasoning predicts the future based on historical information.
Most existing methods fail to concurrently address and comprehend historical information from both global and local perspectives.
We propose the Repetitive-Local-Global History Network (RLGNet).
- Score: 9.576427721924533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Temporal Knowledge Graph (TKG) reasoning predicts the future from historical information, so parsing and mining that history is key to accurate prediction. Most existing methods fail to address and comprehend historical information from both global and local perspectives concurrently. Neglecting the global view can cause macroscopic trends and patterns to be overlooked, while ignoring the local view can cause critical detailed information to be missed. Additionally, some methods do not focus on learning from high-frequency repeating events, which means they may not fully grasp frequently occurring historical events. To this end, we propose the Repetitive-Local-Global History Network (RLGNet). We use a global history encoder to capture the overarching nature of historical information. The local history encoder then provides information related to the query timestamp. Finally, a repeating history encoder identifies and learns from frequently occurring historical events. In evaluations on six benchmark datasets, our approach generally outperforms existing TKG reasoning models on both multi-step and single-step reasoning tasks.
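Read as an architecture, the abstract describes three scoring paths over candidate entities: a global encoder over all history, a local encoder conditioned on the query timestamp, and a repeating-history component driven by how often a candidate has already answered the same (subject, relation) query. The toy PyTorch sketch below illustrates only that additive three-way combination; every module name, dimension, and the log-frequency boost are illustrative assumptions, not the paper's actual implementation.
```python
# Minimal sketch of a three-encoder scorer in the spirit of RLGNet's description.
# All names, dimensions, and the additive combination are assumptions for illustration.
import torch
import torch.nn as nn


class ToyThreeEncoderScorer(nn.Module):
    def __init__(self, num_entities, num_relations, num_timestamps=512, dim=64):
        super().__init__()
        self.ent_emb = nn.Embedding(num_entities, dim)
        self.rel_emb = nn.Embedding(num_relations, dim)
        self.time_emb = nn.Embedding(num_timestamps, dim)
        # "Global" path: scores candidates from a time-independent view of the query.
        self.global_enc = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, num_entities))
        # "Local" path: additionally conditions on the query timestamp.
        self.local_enc = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, num_entities))

    def forward(self, subj, rel, t, repeat_freq):
        # repeat_freq: (batch, num_entities) counts of how often each candidate object
        # already appeared with (subj, rel) in history -- the "repeating history" signal.
        s, r = self.ent_emb(subj), self.rel_emb(rel)
        g_score = self.global_enc(torch.cat([s, r], dim=-1))
        l_score = self.local_enc(torch.cat([s, r, self.time_emb(t)], dim=-1))
        rep_score = torch.log1p(repeat_freq)  # simple monotone boost for frequent repeats
        return g_score + l_score + rep_score  # combined scores over candidate objects


# Usage: score all candidate objects for one (subject, relation, timestamp) query.
model = ToyThreeEncoderScorer(num_entities=100, num_relations=20)
scores = model(torch.tensor([3]), torch.tensor([5]), torch.tensor([42]),
               repeat_freq=torch.zeros(1, 100))
print(scores.shape)  # torch.Size([1, 100])
```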
Related papers
- A Wireless Foundation Model for Multi-Task Prediction [50.21098141769079]
We propose a unified foundation model for multi-task prediction in wireless networks that supports diverse prediction intervals. After being trained on large-scale datasets, the proposed foundation model demonstrates strong generalization to unseen scenarios and zero-shot performance on new tasks.
arXiv Detail & Related papers (2025-07-08T12:37:55Z) - A Survey of Model Architectures in Information Retrieval [64.75808744228067]
We focus on two key aspects: backbone models for feature extraction and end-to-end system architectures for relevance estimation.
We trace the development from traditional term-based methods to modern neural approaches, particularly highlighting the impact of transformer-based models and subsequent large language models (LLMs).
We conclude by discussing emerging challenges and future directions, including architectural optimizations for performance and scalability, handling of multimodal, multilingual data, and adaptation to novel application domains beyond traditional search paradigms.
arXiv Detail & Related papers (2025-02-20T18:42:58Z) - A Practitioner's Guide to Continual Multimodal Pretraining [83.63894495064855]
Multimodal foundation models serve numerous applications at the intersection of vision and language.
To keep models updated, research into continual pretraining mainly explores scenarios with either infrequent, indiscriminate updates on large-scale new data, or frequent, sample-level updates.
We introduce FoMo-in-Flux, a continual multimodal pretraining benchmark with realistic compute constraints and practical deployment requirements.
arXiv Detail & Related papers (2024-08-26T17:59:01Z) - RS-GPT4V: A Unified Multimodal Instruction-Following Dataset for Remote Sensing Image Understanding [4.266920365127677]
Under the new LaGD paradigm, the old datasets are no longer suitable for brand-new tasks.
We designed a high-quality, diversified, and unified multimodal instruction-following dataset for RSI understanding.
The empirical results show that MLLMs fine-tuned on RS-GPT4V can describe fine-grained information.
arXiv Detail & Related papers (2024-06-18T10:34:28Z) - NativE: Multi-modal Knowledge Graph Completion in the Wild [51.80447197290866]
We propose NativE, a comprehensive framework to achieve multi-modal knowledge graph completion (MMKGC) in the wild.
NativE introduces a relation-guided dual adaptive fusion module that enables adaptive fusion of arbitrary modalities.
We construct a new benchmark called WildKGC with five datasets to evaluate our method.
arXiv Detail & Related papers (2024-03-28T03:04:00Z) - Enhancing Automatic Modulation Recognition through Robust Global Feature Extraction [12.868218616042292]
Modulated signals exhibit long temporal dependencies.
Human experts analyze patterns in constellation diagrams to classify modulation schemes.
Classical convolution-based networks excel at extracting local features but struggle to capture global relationships.
arXiv Detail & Related papers (2024-01-02T06:31:24Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) to tackle this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - Multimodal Meta-Learning for Time Series Regression [3.135152720206844]
We explore the idea of using meta-learning to quickly adapt model parameters to new short-history time series.
We show empirically that our proposed meta-learning method learns time series regression (TSR) quickly from little data and outperforms the baselines in 9 of 12 experiments.
arXiv Detail & Related papers (2021-08-05T20:50:18Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z) - Demystifying Deep Learning in Predictive Spatio-Temporal Analytics: An Information-Theoretic Framework [20.28063653485698]
We provide a comprehensive framework for deep learning model design and information-theoretic analysis.
First, we develop and demonstrate a novel interactively-connected deep recurrent neural network (I$^2$DRNN) model.
Second, we provide an information-theoretic analysis to theoretically prove that the designed model can learn multi-scale spatio-temporal dependencies in PSTA tasks.
arXiv Detail & Related papers (2020-09-14T10:05:14Z) - Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
arXiv Detail & Related papers (2020-05-24T04:02:18Z) - Crowd Counting via Hierarchical Scale Recalibration Network [61.09833400167511]
We propose a novel Hierarchical Scale Recalibration Network (HSRNet) to tackle the task of crowd counting.
HSRNet models rich contextual dependencies and recalibrates multiple scale-associated information.
Our approach selectively ignores various kinds of noise and automatically focuses on appropriate crowd scales.
arXiv Detail & Related papers (2020-03-07T10:06:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.