SKGE: Spherical Knowledge Graph Embedding with Geometric Regularization
- URL: http://arxiv.org/abs/2511.02460v1
- Date: Tue, 04 Nov 2025 10:40:46 GMT
- Title: SKGE: Spherical Knowledge Graph Embedding with Geometric Regularization
- Authors: Xuan-Truong Quan, Xuan-Son Quan, Duc Do Minh, Vinh Nguyen Van
- Abstract summary: We propose Spherical Knowledge Graph Embedding (SKGE), a model that constrains entity representations to a compact manifold: a hypersphere. We demonstrate that SKGE consistently and significantly outperforms its strong Euclidean counterpart, TransE.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph embedding (KGE) has become a fundamental technique for representation learning on multi-relational data. Many seminal models, such as TransE, operate in an unbounded Euclidean space, which presents inherent limitations in modeling complex relations and can lead to inefficient training. In this paper, we propose Spherical Knowledge Graph Embedding (SKGE), a model that challenges this paradigm by constraining entity representations to a compact manifold: a hypersphere. SKGE employs a learnable, non-linear Spherization Layer to map entities onto the sphere and interprets relations as a hybrid translate-then-project transformation. Through extensive experiments on three benchmark datasets, FB15k-237, CoDEx-S, and CoDEx-M, we demonstrate that SKGE consistently and significantly outperforms its strong Euclidean counterpart, TransE, particularly on large-scale benchmarks such as FB15k-237 and CoDEx-M, demonstrating the efficacy of the spherical geometric prior. We provide an in-depth analysis to reveal the sources of this advantage, showing that this geometric constraint acts as a powerful regularizer, leading to comprehensive performance gains across all relation types. More fundamentally, we prove that the spherical geometry creates an "inherently hard negative sampling" environment, naturally eliminating trivial negatives and forcing the model to learn more robust and semantically coherent representations. Our findings compellingly demonstrate that the choice of manifold is not merely an implementation detail but a fundamental design principle, advocating for geometric priors as a cornerstone for designing the next generation of powerful and stable KGE models.
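The abstract does not include code, but the pipeline it describes (a learnable, non-linear spherization layer that maps entities onto the unit hypersphere, a translate-then-project relation operator, and a TransE-style distance score) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the structure of the `spherize` layer (Linear + Tanh before L2 normalization), the embedding dimension, and the margin-free scoring are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SKGESketch(nn.Module):
    """Hypothetical spherical TransE-style scorer (illustrative only).

    Entities start in an unconstrained embedding table; a small non-linear
    "spherization" layer maps them onto the unit hypersphere. A relation is
    applied as a translation followed by re-projection onto the sphere
    ("translate-then-project"), and the score is the negative distance
    between the transformed head and the tail, as in TransE.
    """

    def __init__(self, num_entities: int, num_relations: int, dim: int = 200):
        super().__init__()
        self.entity = nn.Embedding(num_entities, dim)
        self.relation = nn.Embedding(num_relations, dim)
        # Assumed form of the spherization layer: a learnable non-linear map
        # followed by L2 normalization onto the unit sphere.
        self.spherize = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())
        nn.init.xavier_uniform_(self.entity.weight)
        nn.init.xavier_uniform_(self.relation.weight)

    def embed_entity(self, idx: torch.Tensor) -> torch.Tensor:
        # Non-linear map, then project onto the unit hypersphere.
        return F.normalize(self.spherize(self.entity(idx)), p=2, dim=-1)

    def score(self, h_idx, r_idx, t_idx) -> torch.Tensor:
        h = self.embed_entity(h_idx)            # head, on the sphere
        t = self.embed_entity(t_idx)            # tail, on the sphere
        r = self.relation(r_idx)                # relation vector (unconstrained)
        # Translate-then-project: move along r, then snap back to the sphere.
        h_moved = F.normalize(h + r, p=2, dim=-1)
        # Negative Euclidean distance, TransE-style (higher = more plausible).
        return -torch.norm(h_moved - t, p=2, dim=-1)

# Example usage with FB15k-237-scale sizes (14,541 entities, 237 relations):
model = SKGESketch(num_entities=14541, num_relations=237)
h, r, t = torch.tensor([0]), torch.tensor([5]), torch.tensor([42])
print(model.score(h, r, t))  # one plausibility score per triple
```

Because `h_moved` and `t` are both unit vectors, the distance in the last line of `score` is bounded by 2, so no corrupted triple can be pushed arbitrarily far away; this bounded geometry is one way to read the paper's claim that the sphere yields an "inherently hard negative sampling" environment, in contrast to the unbounded Euclidean space used by TransE.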
Related papers
- Learning Geometry: A Framework for Building Adaptive Manifold Models through Metric Optimization [8.201374511929538]
This paper proposes a novel paradigm for machine learning that moves beyond traditional parameter optimization.
We optimize the metric tensor field on a manifold with a predefined topology, thereby dynamically shaping the geometric structure of the model space.
This work lays a solid foundation for constructing fully dynamic "meta-learners" capable of autonomously evolving their geometry and topology.
arXiv Detail & Related papers (2025-10-30T01:53:32Z) - Follow the Energy, Find the Path: Riemannian Metrics from Energy-Based Models [63.331590876872944]
We propose a method for deriving Riemannian metrics directly from pretrained Energy-Based Models.
These metrics define spatially varying distances, enabling the computation of geodesics.
We show that EBM-derived metrics consistently outperform established baselines.
arXiv Detail & Related papers (2025-05-23T12:18:08Z) - Geometry Distributions [51.4061133324376]
We propose a novel geometric data representation that models geometry as distributions.
Our approach uses diffusion models with a novel network architecture to learn surface point distributions.
We evaluate our representation qualitatively and quantitatively across various object types, demonstrating its effectiveness in achieving high geometric fidelity.
arXiv Detail & Related papers (2024-11-25T04:06:48Z) - Manifold Integrated Gradients: Riemannian Geometry for Feature Attribution [8.107199775668942]
Integrated Gradients (IG) is a prevalent feature attribution method for black-box deep learning models.
We address two predominant challenges associated with IG: the generation of noisy feature visualizations and the vulnerability to adversarial attributional attacks.
Our approach involves an adaptation of path-based feature attribution, aligning the path of attribution more closely to the intrinsic geometry of the data manifold.
arXiv Detail & Related papers (2024-05-16T04:13:17Z) - On the Completeness of Invariant Geometric Deep Learning Models [22.43250261702209]
Invariant models generate meaningful geometric representations by leveraging informative geometric features in point clouds.
We characterize the theoretical expressiveness of a wide range of invariant models under fully-connected conditions.
Our theoretical results fill the gap in the expressive power of invariant models, contributing to a rigorous and comprehensive understanding of their capabilities.
arXiv Detail & Related papers (2024-02-07T13:32:53Z) - Block-Diagonal Orthogonal Relation and Matrix Entity for Knowledge Graph Embedding [5.463034010805521]
Knowledge Graph Embedding (KGE) learns low-dimensional representations of entities and relations for predicting missing facts.
We introduce OrthogonalE, a novel KGE model employing matrices for entities and block-diagonal matrices for relations.
The experimental results indicate that our new KGE model, OrthogonalE, is both general and flexible, significantly outperforming state-of-the-art KGE models.
arXiv Detail & Related papers (2024-01-11T15:13:00Z) - A Comprehensive Study on Knowledge Graph Embedding over Relational Patterns Based on Rule Learning [49.09125100268454]
Knowledge Graph Embedding (KGE) has proven to be an effective approach to solving the Knowledge Graph Completion (KGC) task.
Relational patterns are an important factor in the performance of KGE models.
We introduce a training-free method to enhance KGE models' performance over various relational patterns.
arXiv Detail & Related papers (2023-08-15T17:30:57Z) - Geometry Interaction Knowledge Graph Embeddings [153.69745042757066]
We propose Geometry Interaction knowledge graph Embeddings (GIE), which learns spatial structures interactively across Euclidean, hyperbolic, and hyperspherical spaces.
Our proposed GIE can capture a richer set of relational information, model key inference patterns, and enable expressive semantic matching across entities.
arXiv Detail & Related papers (2022-06-24T08:33:43Z) - ConE: Cone Embeddings for Multi-Hop Reasoning over Knowledge Graphs [73.86041481470261]
Cone Embeddings (ConE) is the first geometry-based query embedding model that can handle conjunction, disjunction, and negation.
ConE significantly outperforms existing state-of-the-art methods on benchmark datasets.
arXiv Detail & Related papers (2021-10-26T14:04:02Z) - Self-supervised Geometric Perception [96.89966337518854]
Self-supervised geometric perception (SGP) is a framework for learning a feature descriptor for correspondence matching without any ground-truth geometric model labels.
We show that SGP achieves state-of-the-art performance that is on par with or superior to supervised oracles trained using ground-truth labels.
arXiv Detail & Related papers (2021-03-04T15:34:43Z) - Motif Learning in Knowledge Graphs Using Trajectories Of Differential Equations [14.279419014064047]
Knowledge Graph Embeddings (KGEs) have shown promising performance on link prediction tasks.
Many KGEs use flat geometry, which renders them incapable of preserving complex structures.
We propose a neuro-differential KGE that embeds nodes of a KG on the trajectories of Ordinary Differential Equations (ODEs).
arXiv Detail & Related papers (2020-10-13T20:53:17Z)