Rethinking Regularization Methods for Knowledge Graph Completion
- URL: http://arxiv.org/abs/2505.23442v1
- Date: Thu, 29 May 2025 13:39:18 GMT
- Title: Rethinking Regularization Methods for Knowledge Graph Completion
- Authors: Linyu Li, Zhi Jin, Yuanpeng He, Dongming Jin, Haoran Duan, Zhengwei Tao, Xuan Zhang, Jiandong Li
- Abstract summary: We introduce a novel sparse-regularization method (SPR) that embeds the concept of rank-based selective sparsity into the KGC regularizer. Extensive experiments on multiple datasets and models show that SPR outperforms other regularization methods and enables KGC models to break through their original performance ceilings.
- Score: 25.269091177345565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph completion (KGC) has attracted considerable attention in recent years because it is critical to improving the quality of knowledge graphs. Researchers have continuously explored various models. However, most previous work has neglected to examine regularization from a deeper perspective, leaving its full potential untapped. This paper rethinks the application of regularization methods in KGC. Through extensive empirical studies on various KGC models, we find that carefully designed regularization not only alleviates overfitting and reduces variance but also enables these models to break through the upper bounds of their original performance. Furthermore, we introduce a novel sparse-regularization method, SPR, which embeds the concept of rank-based selective sparsity into the KGC regularizer. The core idea is to selectively penalize the components of the embedding vector that carry significant features, while effectively ignoring the many components that contribute little and may only represent noise. Comparative experiments on multiple datasets and models show that SPR outperforms other regularization methods and enables KGC models to push further past their performance ceilings.
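The abstract gives the idea but not the exact formula, so below is a minimal PyTorch sketch of a rank-based selective sparsity penalty. The function name, the top-k cutoff `k`, and the Lp exponent `p` are illustrative assumptions, not the paper's exact SPR definition.

```python
import torch

def spr_penalty(emb: torch.Tensor, k: int = 32, p: float = 3.0) -> torch.Tensor:
    """Rank-based selective sparsity (illustrative sketch).

    Components of each embedding are ranked by absolute magnitude, and only
    the k largest are penalized; small-magnitude components, which may
    mostly carry noise, are left untouched.
    """
    # Sort |components| in descending order along the embedding dimension.
    mags, _ = emb.abs().sort(dim=-1, descending=True)
    # Lp-style penalty restricted to the top-k (most significant) components.
    return mags[..., :k].pow(p).sum(dim=-1).mean()

# Typical use in a KGC training step (h, r, t are batched embeddings):
# loss = score_loss + lam * (spr_penalty(h) + spr_penalty(r) + spr_penalty(t))
```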
Related papers
- Normalized Attention Guidance: Universal Negative Guidance for Diffusion Model [57.20761595019967]
We present Normalized Attention Guidance (NAG), an efficient, training-free mechanism that applies extrapolation in attention space with L1-based normalization and refinement.
NAG restores effective negative guidance where CFG collapses while maintaining fidelity.
NAG generalizes across architectures (UNet, DiT), sampling regimes (few-step, multi-step), and modalities (image, video).
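The blurb names the mechanism but not its equations; a minimal sketch of attention-space extrapolation with an L1 norm cap and a blend-back refinement follows. Here `scale`, `tau`, and `alpha` are assumed hyperparameters, not NAG's published values.

```python
import torch

def nag_attention(attn_pos: torch.Tensor, attn_neg: torch.Tensor,
                  scale: float = 4.0, tau: float = 2.5,
                  alpha: float = 0.25) -> torch.Tensor:
    """Sketch: extrapolate away from the negative branch in attention space,
    cap the L1 norm drift, then blend back toward the positive branch."""
    guided = attn_pos + scale * (attn_pos - attn_neg)  # extrapolation
    # L1-based normalization: rescale if the guided output's L1 norm exceeds
    # tau times the positive branch's L1 norm (assumed form of the control).
    l1_pos = attn_pos.abs().sum(dim=-1, keepdim=True)
    l1_guided = guided.abs().sum(dim=-1, keepdim=True).clamp_min(1e-8)
    guided = guided * torch.clamp(tau * l1_pos / l1_guided, max=1.0)
    # Refinement: interpolate toward attn_pos to preserve fidelity.
    return alpha * guided + (1.0 - alpha) * attn_pos
```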
arXiv Detail & Related papers (2025-05-27T13:30:46Z)
- Revitalizing Reconstruction Models for Multi-class Anomaly Detection via Class-Aware Contrastive Learning [19.114941437668705]
We propose a plug-and-play modification by incorporating class-aware contrastive learning (CL).
Experiments across four datasets verify the effectiveness of our approach, yielding significant improvements and superior performance compared to advanced methods.
arXiv Detail & Related papers (2024-12-06T04:31:09Z)
- Happy: A Debiased Learning Framework for Continual Generalized Category Discovery [54.54153155039062]
This paper explores the underexplored task of Continual Generalized Category Discovery (C-GCD).
C-GCD aims to incrementally discover new classes from unlabeled data while maintaining the ability to recognize previously learned classes.
We introduce a debiased learning framework, namely Happy, characterized by Hardness-aware prototype sampling and soft entropy regularization.
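Of the two ingredients, the entropy term is the easier to sketch; below is one assumed form (temperature-softened per-sample entropy). The blurb does not specify the sign or weight of the term, so those are left to the debiasing objective.

```python
import torch
import torch.nn.functional as F

def soft_entropy(logits: torch.Tensor, temperature: float = 2.0) -> torch.Tensor:
    """Per-sample entropy of temperature-softened predictions (sketch).

    Softening with temperature > 1 flattens the distribution so the
    regularizer acts on all classes, including newly discovered ones.
    """
    probs = F.softmax(logits / temperature, dim=-1)
    ent = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)
    return ent.mean()

# Added to the training loss with a sign/weight chosen by the debiasing goal:
# loss = cls_loss + lam * soft_entropy(logits_unlabeled)
```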
arXiv Detail & Related papers (2024-10-09T04:18:51Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- CUDA: Curriculum of Data Augmentation for Long-Tailed Recognition [10.441880303257468]
Class imbalance problems frequently occur in real-world tasks.
To mitigate this problem, many approaches aim to re-balance the given classes by re-weighting or re-sampling training samples.
These re-balancing methods increase the impact of minority classes and reduce the influence of majority classes on the output of models.
Several methods have also been developed that enrich the representations of minority samples using features of the majority samples.
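CUDA's curriculum itself is not detailed in this blurb; as a concrete instance of the re-weighting baselines it discusses, here is the standard effective-number class re-weighting (Cui et al., 2019), with `beta` as the usual smoothing hyperparameter.

```python
import torch
import torch.nn.functional as F

def class_balanced_weights(class_counts: torch.Tensor,
                           beta: float = 0.999) -> torch.Tensor:
    """Effective-number re-weighting: rare classes get larger loss weights."""
    effective_num = 1.0 - torch.pow(beta, class_counts.float())
    weights = (1.0 - beta) / effective_num
    # Normalize so the weights sum to the number of classes.
    return weights / weights.sum() * class_counts.numel()

# Example: a 3-class long-tailed problem.
# w = class_balanced_weights(torch.tensor([5000, 500, 50]))
# loss = F.cross_entropy(logits, targets, weight=w)
```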
arXiv Detail & Related papers (2023-02-10T20:30:22Z)
- Disentangling the Mechanisms Behind Implicit Regularization in SGD [21.893397581060636]
This paper focuses on the ability of various theorized mechanisms to close the small-to-large batch generalization gap.
We show that explicitly penalizing the gradient norm or the Fisher Information Matrix trace, averaged over micro-batches, in the large-batch regime recovers small-batch SGD generalization.
This generalization performance is shown to often be correlated with how well the regularized model's gradient norms resemble those of small-batch SGD.
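A minimal sketch of the explicit penalty described above, assuming a squared-L2 gradient-norm term computed per micro-batch and averaged; the Fisher-trace variant mentioned in the blurb would penalize squared per-example gradients instead.

```python
import torch

def grad_norm_penalty(loss: torch.Tensor, params: list) -> torch.Tensor:
    """Squared L2 norm of d(loss)/d(params), built with create_graph=True so
    the penalty itself is differentiable and can be trained through."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return sum(g.pow(2).sum() for g in grads)

# Per micro-batch, then averaged over the micro-batches of a large batch:
# params = [p for p in model.parameters() if p.requires_grad]
# total = micro_loss + lam * grad_norm_penalty(micro_loss, params)
# total.backward()
```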
arXiv Detail & Related papers (2022-11-29T01:05:04Z)
- ER: Equivariance Regularizer for Knowledge Graph Completion [107.51609402963072]
We propose a new regularizer, namely the Equivariance Regularizer (ER).
ER can enhance the generalization ability of the model by employing the semantic equivariance between the head and tail entities.
The experimental results indicate a clear and substantial improvement over the state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-24T08:18:05Z)
- Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework built on a simple yet effective technique, FeatDistLoss.
Experimental results show that our model defines a new state of the art for various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z)
- Class-incremental Learning with Rectified Feature-Graph Preservation [24.098892115785066]
A central theme of this paper is to learn new classes that arrive in sequential phases over time.
We propose a weighted-Euclidean regularization for old knowledge preservation.
We show how it can work with binary cross-entropy to increase class separation for effective learning of new classes.
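The blurb does not define the weights, so the sketch below treats them as a given per-dimension importance vector; combining the penalty with binary cross-entropy on the class heads follows the description above.

```python
import torch
import torch.nn.functional as F

def weighted_euclidean(feat_new: torch.Tensor, feat_old: torch.Tensor,
                       weights: torch.Tensor) -> torch.Tensor:
    """Weighted Euclidean penalty between current features and features from
    the frozen old model; larger weights protect dimensions deemed important
    for old classes (the weighting scheme here is an assumption)."""
    return (weights * (feat_new - feat_old).pow(2)).sum(dim=-1).mean()

# loss = F.binary_cross_entropy_with_logits(logits, targets_one_hot) \
#        + lam * weighted_euclidean(f_new, f_old.detach(), w)
```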
arXiv Detail & Related papers (2020-12-15T07:26:04Z)
- Contrastive Learning for Debiased Candidate Generation in Large-Scale Recommender Systems [84.3996727203154]
We show that a popular choice of contrastive loss is equivalent to reducing the exposure bias via inverse propensity weighting.
We further improve upon CLRec and propose Multi-CLRec, for accurate multi-intention aware bias reduction.
Our methods have been successfully deployed in Taobao, where at least four months of online A/B tests and offline analyses demonstrate substantial improvements.
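As one concrete reading of the equivalence claimed above, the standard logQ/propensity correction on sampled-softmax logits is sketched below; CLRec's exact loss may differ, and the convention of the positive item sitting in column 0 is an assumption.

```python
import torch
import torch.nn.functional as F

def ipw_contrastive_loss(scores: torch.Tensor,
                         propensities: torch.Tensor) -> torch.Tensor:
    """Sampled-softmax contrastive loss with a logQ correction: subtracting
    log-propensities from the logits down-weights over-exposed items, which
    is the inverse-propensity view of exposure-bias reduction."""
    corrected = scores - propensities.clamp_min(1e-8).log()
    # Positive item assumed in column 0 of each row.
    labels = torch.zeros(scores.size(0), dtype=torch.long, device=scores.device)
    return F.cross_entropy(corrected, labels)
```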
arXiv Detail & Related papers (2020-05-20T08:15:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.