T3DM: Test-Time Training-Guided Distribution Shift Modelling for Temporal Knowledge Graph Reasoning
- URL: http://arxiv.org/abs/2507.01597v1
- Date: Wed, 02 Jul 2025 11:02:37 GMT
- Title: T3DM: Test-Time Training-Guided Distribution Shift Modelling for Temporal Knowledge Graph Reasoning
- Authors: Yuehang Si, Zefan Zeng, Jincai Huang, Qing Cheng
- Abstract summary: A Temporal Knowledge Graph (TKG) is an efficient way to describe the dynamic development of facts along a timeline. We propose a novel distributional feature modelling approach for training TKGR models, Test-Time Training-Guided Distribution Shift Modelling (T3DM). In addition, we design a negative-sampling strategy that generates higher-quality negative quadruples via adversarial training.
- Score: 3.2186308082558632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A Temporal Knowledge Graph (TKG) is an efficient way to describe the dynamic development of facts along a timeline. Most research on TKG reasoning (TKGR) focuses on modelling the repetition of global facts and designing patterns of local historical facts. However, these methods face two significant challenges: inadequate modelling of the event distribution shift between training and test samples, and reliance on random entity substitution for generating negative samples, which often results in low-quality sampling. To this end, we propose a novel distributional feature modelling approach for training TKGR models, Test-Time Training-Guided Distribution Shift Modelling (T3DM), which adjusts the model based on the distribution shift and ensures the global consistency of model reasoning. In addition, we design a negative-sampling strategy that generates higher-quality negative quadruples via adversarial training. Extensive experiments show that T3DM provides better and more robust results than state-of-the-art baselines in most cases.
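As a rough illustration of the negative-sampling idea, the sketch below shows one common way to generate hard negative quadruples: corrupt the object entity of each positive, score the candidates with the current model, and sample negatives in proportion to their scores (in the spirit of self-adversarial negative sampling). The model interface (`model.score`) and all names are hypothetical assumptions, not the paper's actual implementation.

```python
import torch

def adversarial_negative_sampling(model, quads, num_entities, k=64):
    """Hypothetical sketch: pick hard negative quadruples (s, r, o', t).

    For each positive (s, r, o, t), draw k candidate corruptions of the
    object entity, score them with the current model, and sample one
    negative per positive with probability proportional to its score,
    so that high-scoring (hard) negatives are selected more often.
    """
    s, r, o, t = quads.unbind(dim=1)              # each: (batch,)
    batch = s.size(0)
    # k random candidate objects per positive (may include easy ones)
    cand = torch.randint(num_entities, (batch, k))
    with torch.no_grad():                         # sampler itself is not trained
        scores = model.score(s.unsqueeze(1).expand(-1, k),
                             r.unsqueeze(1).expand(-1, k),
                             cand,
                             t.unsqueeze(1).expand(-1, k))  # (batch, k)
    probs = torch.softmax(scores, dim=1)          # harder => higher probability
    idx = torch.multinomial(probs, 1).squeeze(1)  # (batch,)
    neg_o = cand[torch.arange(batch), idx]
    return torch.stack([s, r, neg_o, t], dim=1)   # negative quadruples
```

Sampling in proportion to the model's own scores focuses training on negatives the model currently finds plausible, which is the usual motivation for adversarial negative sampling over random entity substitution.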
Related papers
- Evaluation of Seismic Artificial Intelligence with Uncertainty [0.0]
We develop a framework for evaluating and comparing deep learning models (DLMs). Our framework helps practitioners choose the best model for their problem and set performance expectations.
arXiv Detail & Related papers (2025-01-15T16:45:51Z)
- DRDT3: Diffusion-Refined Decision Test-Time Training Model [6.907105812732423]
Decision Transformer (DT) has shown competitive performance compared to traditional offline reinforcement learning (RL) approaches. We introduce a unified framework, called Diffusion-Refined Decision TTT (DRDT3), to achieve performance beyond DT models.
arXiv Detail & Related papers (2025-01-12T04:59:49Z)
- Towards Pattern-aware Data Augmentation for Temporal Knowledge Graph Completion [18.51546761241817]
We introduce Booster, the first data augmentation strategy for temporal knowledge graphs. We propose a hierarchical scoring algorithm based on triadic closures within TKGs; an illustrative sketch of this primitive appears after this list. We also propose a two-stage training approach to identify samples that deviate from the model's preferred patterns.
arXiv Detail & Related papers (2024-12-31T03:47:19Z)
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM), which can be viewed as a gradient boosting algorithm combined with score matching. We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy. Our model outperforms existing models in both accuracy and inference time.
arXiv Detail & Related papers (2024-11-02T07:06:53Z)
- MITA: Bridging the Gap between Model and Data for Test-time Adaptation [68.62509948690698]
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models.
We propose MITA (Meet-In-The-Middle), which introduces energy-based optimization to encourage mutual adaptation of the model and data from opposing directions.
arXiv Detail & Related papers (2024-10-12T07:02:33Z)
- Refining 3D Point Cloud Normal Estimation via Sample Selection [13.207964615561261]
We introduce a fundamental framework for normal estimation, enhancing existing models by incorporating global information and various constraint mechanisms.
We also utilize existing orientation methods to correct estimated non-oriented normals, achieving state-of-the-art performance in both oriented and non-oriented tasks.
arXiv Detail & Related papers (2024-05-20T02:06:10Z)
- TEA: Test-time Energy Adaptation [67.4574269851666]
Test-time adaptation (TTA) aims to improve model generalizability when test data diverges from the training distribution.
We propose a novel energy-based perspective that enhances the model's perception of the target data distribution; a generic sketch of the test-time adaptation pattern appears after this list.
arXiv Detail & Related papers (2023-11-24T10:49:49Z)
- Phasic Content Fusing Diffusion Model with Directional Distribution Consistency for Few-Shot Model Adaption [73.98706049140098]
We propose a novel phasic content fusing few-shot diffusion model with directional distribution consistency loss.
Specifically, we design a phasic training strategy with phasic content fusion to help our model learn content and style information when the diffusion timestep t is large.
Finally, we propose a cross-domain structure guidance strategy that enhances structure consistency during domain adaptation.
arXiv Detail & Related papers (2023-09-07T14:14:11Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed test sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment [32.752633250862694]
Generative foundation models are susceptible to implicit biases that can arise from extensive unsupervised training data.
We introduce a new framework, Reward rAnked FineTuning (RAFT), designed to align generative models effectively.
arXiv Detail & Related papers (2023-04-13T18:22:40Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
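The Booster entry above mentions a hierarchical scoring algorithm based on triadic closures. As an illustrative sketch of the underlying graph primitive only (Booster's actual algorithm is more involved), the function below counts, for each (subject, object) pair in a TKG snapshot, how many common neighbours the pair shares, i.e. how many triangles it closes. Relation types and timestamps are ignored here for simplicity; all names are assumptions.

```python
from collections import defaultdict

def triadic_closure_counts(quads):
    """Count common neighbours for each (subject, object) pair in a
    TKG snapshot, ignoring relation types and timestamps.

    quads: iterable of (s, r, o, t) tuples.
    Returns {(s, o): number of entities adjacent to both s and o}.
    """
    neighbours = defaultdict(set)
    pairs = set()
    for s, r, o, t in quads:
        neighbours[s].add(o)
        neighbours[o].add(s)
        pairs.add((s, o))
    return {
        (s, o): len((neighbours[s] - {o}) & (neighbours[o] - {s}))
        for s, o in pairs
    }
```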
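Several entries above (MITA, TEA) revolve around test-time adaptation, the theme T3DM builds on. As a generic, hedged sketch of that pattern, not a reconstruction of any single paper's method, the snippet below adapts a classifier on an unlabelled test batch by minimizing prediction entropy while updating only normalization-layer parameters (the Tent-style recipe); TEA and MITA replace the plain entropy objective with energy-based ones.

```python
import torch
import torch.nn as nn

def test_time_adapt(model, test_batch, lr=1e-3, steps=1):
    """Generic entropy-minimization TTA sketch (Tent-style).

    Only the affine parameters of normalization layers are updated,
    a common choice that keeps test-time adaptation lightweight.
    """
    model.train()  # let norm layers use current-batch statistics
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.LayerNorm)):
            params.extend(p for p in (m.weight, m.bias) if p is not None)
    opt = torch.optim.SGD(params, lr=lr)
    for _ in range(steps):
        probs = torch.softmax(model(test_batch), dim=1)
        entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
        opt.zero_grad()
        entropy.backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        return model(test_batch)  # predictions after adaptation
```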
This list is automatically generated from the titles and abstracts of the papers on this site.