Embedding Knowledge Graph in Function Spaces
- URL: http://arxiv.org/abs/2409.14857v2
- Date: Tue, 24 Sep 2024 09:33:44 GMT
- Title: Embedding Knowledge Graph in Function Spaces
- Authors: Louis Mozart Kamdem Teyou, Caglar Demir, Axel-Cyrille Ngonga Ngomo
- Abstract summary: We introduce a novel embedding method diverging from conventional approaches by operating within function spaces of finite dimension rather than vector spaces.
We argue that employing functions for embedding allows for more degrees of freedom, enabling operations such as composition, differentiation, and taking primitives of entity representations.
- Score: 1.90894751866253
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We introduce a novel embedding method diverging from conventional approaches by operating within function spaces of finite dimension rather than a finite vector space, thus departing significantly from standard knowledge graph embedding techniques. Initially employing polynomial functions to compute embeddings, we progress to more intricate representations using neural networks with varying layer complexities. We argue that employing functions for embedding computation enhances expressiveness and allows for more degrees of freedom, enabling operations such as composition, differentiation, and taking primitives of entity representations. Additionally, we meticulously outline the step-by-step construction of our approach and provide code for reproducibility, thereby facilitating further exploration and application in the field.
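To make the idea concrete, here is a minimal sketch in which every entity and relation is stored as the coefficient vector of a low-degree polynomial and a triple is scored by integrating the product of the three functions over [0, 1]. The degree, scoring rule, and helper names are illustrative assumptions rather than the paper's exact formulation, but they show the extra degrees of freedom the abstract mentions, such as taking derivatives or primitives of an embedding.

```python
import numpy as np
from numpy.polynomial import polynomial as P

DEGREE = 3  # hypothetical degree of each embedding polynomial

def init_embeddings(num_items, degree=DEGREE, seed=0):
    """One coefficient vector (i.e. one polynomial) per entity or relation."""
    rng = np.random.default_rng(seed)
    return rng.normal(scale=0.1, size=(num_items, degree + 1))

def score(head, rel, tail, num_points=128):
    """Illustrative triple score: average of h(x) * r(x) * t(x) over [0, 1].

    Any functional on the three polynomials could be used here; this simple
    integral is an assumption for the sketch, not the paper's scoring function.
    """
    x = np.linspace(0.0, 1.0, num_points)
    values = P.polyval(x, head) * P.polyval(x, rel) * P.polyval(x, tail)
    return values.mean()

entities = init_embeddings(num_items=100)
relations = init_embeddings(num_items=10)
print(score(entities[0], relations[3], entities[42]))
print(P.polyder(entities[0]))  # derivative of an entity's representation
print(P.polyint(entities[0]))  # a primitive (antiderivative) of the same entity
```

Since the coefficients remain finite-dimensional parameters, training them with a standard margin or cross-entropy loss would proceed exactly as for ordinary vector embeddings.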
Related papers
- MultiSTOP: Solving Functional Equations with Reinforcement Learning [56.073581097785016]
We develop MultiSTOP, a Reinforcement Learning framework for solving functional equations in physics.
This new methodology produces actual numerical solutions instead of bounds on them.
arXiv Detail & Related papers (2024-04-23T10:51:31Z)
- Neural Feature Learning in Function Space [5.807950618412389]
We present a novel framework for learning system design with neural feature extractors.
We introduce the feature geometry, which unifies statistical dependence and feature representations in a function space equipped with inner products.
We propose a nesting technique, which provides systematic algorithm designs for learning the optimal features from data samples.
arXiv Detail & Related papers (2023-09-18T20:39:12Z)
- Unification of popular artificial neural network activation functions [0.0]
We present a unified representation of the most popular neural network activation functions.
Adopting Mittag-Leffler functions of fractional calculus, we propose a flexible and compact functional form.
arXiv Detail & Related papers (2023-02-21T21:20:59Z)
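To make the activation-unification entry above concrete, the hedged sketch below evaluates a truncated Mittag-Leffler series E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta), the function family from fractional calculus that the paper builds its compact activation form around. The truncation length and default parameters are assumptions; alpha = beta = 1 recovering exp(z) serves as a sanity check.

```python
import numpy as np
from scipy.special import gamma

def mittag_leffler(z, alpha=1.0, beta=1.0, terms=50):
    """Truncated series E_{alpha,beta}(z) = sum_k z**k / Gamma(alpha*k + beta).

    Illustrative defaults; the truncation is only reliable for moderate |z|.
    With alpha = beta = 1 the series reduces to exp(z).
    """
    z = np.asarray(z, dtype=float)
    k = np.arange(terms)
    # Broadcast: one row per input value, one column per series term.
    return np.sum(z[..., None] ** k / gamma(alpha * k + beta), axis=-1)

x = np.linspace(-2.0, 2.0, 5)
print(np.allclose(mittag_leffler(x, 1.0, 1.0), np.exp(x), atol=1e-6))  # True
```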
- Learning Functional Transduction [9.926231893220063]
We show that transductive regression principles can be meta-learned through gradient descent to form efficient in-context neural approximators.
We demonstrate the benefit of our meta-learned transductive approach to model complex physical systems influenced by varying external factors with little data.
arXiv Detail & Related papers (2023-02-01T09:14:28Z)
- Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions [63.21838830509772]
We develop a framework for extending set functions onto low-dimensional continuous domains.
Our framework subsumes many well-known extensions as special cases.
We convert low-dimensional neural network bottlenecks into representations in high-dimensional spaces.
arXiv Detail & Related papers (2022-08-08T10:58:02Z)
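For context on the "well-known extensions" that the set-function framework above subsumes, the sketch below implements the classic Lovász extension; it is the standard construction, assumed here as a representative special case rather than the paper's neural extension.

```python
import numpy as np

def lovasz_extension(f, x):
    """Classic Lovász extension of a set function f at a point x in [0, 1]^n.

    f maps a frozenset of indices to a real value with f(empty set) = 0; the
    extension interpolates f between hypercube vertices and is convex iff f is
    submodular. Standard construction, not the paper's learned extension.
    """
    x = np.asarray(x, dtype=float)
    order = np.argsort(-x)                        # indices sorted by descending x
    xs = np.concatenate([x[order], [0.0]])
    value, prefix = 0.0, set()
    for i, idx in enumerate(order):
        prefix.add(int(idx))
        value += (xs[i] - xs[i + 1]) * f(frozenset(prefix))
    return value

# Example with an illustrative concave coverage-style set function on 3 elements.
f = lambda S: float(len(S)) ** 0.5
print(lovasz_extension(f, [0.2, 0.9, 0.5]))
```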
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
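For reference, the sketch below shows the plain semi-gradient Q-learning update with linear function approximation, i.e. Q(s, a) = phi(s, a) . w. The entry above contributes an exploration variant with a more permissive notion of approximation error, which this minimal update does not include; the feature dimensions and learning rates are illustrative.

```python
import numpy as np

def linear_q_update(w, phi_sa, reward, phi_next_all, gamma=0.99, lr=0.1):
    """One Q-learning step with linear function approximation.

    Q(s, a) = phi(s, a) . w; phi_next_all stacks phi(s', a') for every action a'.
    This is the standard semi-gradient update, not the exploration variant
    analysed in the paper above.
    """
    td_target = reward + gamma * np.max(phi_next_all @ w)
    td_error = td_target - phi_sa @ w
    return w + lr * td_error * phi_sa

# Toy usage with a 4-dimensional feature map and 3 actions in the next state.
rng = np.random.default_rng(0)
w = np.zeros(4)
w = linear_q_update(w, rng.normal(size=4), reward=1.0, phi_next_all=rng.normal(size=(3, 4)))
print(w)
```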
- Approximate Decomposable Submodular Function Minimization for Cardinality-Based Components [30.33731479053404]
Minimizing a sum of simple submodular functions of limited support has numerous applications in machine learning.
We develop fast techniques for instances where components in the sum are cardinality-based, meaning they depend only on the size of the input set.
We develop the first approximation algorithms for this problem, where the approximations can be quickly computed via reduction to a sparse graph cut problem.
arXiv Detail & Related papers (2021-10-28T02:36:55Z)
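As a small illustration of the function class in the entry above, the sketch below evaluates a decomposable objective whose components are cardinality-based, i.e. each term depends only on the size of the input set's overlap with its support. The concrete components are made up for the example, and the paper's contribution (fast approximate minimization via reduction to a sparse graph cut) is not reproduced here.

```python
import numpy as np

def cardinality_based_sum(S, components):
    """Evaluate f(S) = sum_i g_i(|S ∩ A_i|) for cardinality-based components.

    Each component is a pair (A_i, g_i) where g_i is concave in the overlap
    size; the components below are illustrative assumptions, and this only
    evaluates the objective rather than minimizing it.
    """
    return sum(g(len(S & A)) for A, g in components)

components = [
    ({0, 1, 2}, lambda k: np.sqrt(k)),   # concave in the overlap size
    ({2, 3},    lambda k: min(k, 1)),    # budget-style cap
]
print(cardinality_based_sum({1, 2, 3}, components))
```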
- Compressing Deep ODE-Nets using Basis Function Expansions [105.05435207079759]
We consider formulations of the weights as continuous-depth functions using linear combinations of basis functions.
This perspective allows us to compress the weights through a change of basis, without retraining, while maintaining near state-of-the-art performance.
In turn, both inference time and the memory footprint are reduced, enabling quick and rigorous adaptation between computational environments.
arXiv Detail & Related papers (2021-06-21T03:04:51Z)
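The change-of-basis idea in the entry above can be sketched in a few lines: sample the depth-indexed weights, refit them onto a small set of basis functions by least squares, and store only the coefficients. The cosine basis, sizes, and toy weights below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def cosine_basis(t, num_basis):
    """Simple cosine basis evaluated at depths t in [0, 1]."""
    k = np.arange(num_basis)
    return np.cos(np.pi * np.outer(t, k))           # shape (len(t), num_basis)

def compress_weights(weights, t, num_basis):
    """Refit depth-indexed weights onto a smaller basis via least squares.

    weights has shape (len(t), dim): one flattened weight vector per depth.
    Only a sketch of the change-of-basis compression, not the paper's code.
    """
    B = cosine_basis(t, num_basis)
    coeffs, *_ = np.linalg.lstsq(B, weights, rcond=None)
    return coeffs                                    # shape (num_basis, dim)

def eval_weights(coeffs, t):
    """Reconstruct the continuous-depth weights at arbitrary depths t."""
    return cosine_basis(t, coeffs.shape[0]) @ coeffs

t = np.linspace(0.0, 1.0, 32)
weights = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
coeffs = compress_weights(weights, t, num_basis=6)   # 32 x 2 samples -> 6 x 2 coefficients
print(np.abs(eval_weights(coeffs, t) - weights).max())
```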
- MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
arXiv Detail & Related papers (2020-06-17T05:14:53Z)
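As a generic illustration of the gradient-based meta-learning referred to in the entry above, the sketch below performs one first-order MAML-style outer update on a toy linear regression model. The paper itself meta-learns a signed distance network over a shape space; the model, tasks, and learning rates here are placeholder assumptions that only show the inner/outer loop structure.

```python
import numpy as np

def maml_step(w, tasks, inner_lr=0.1, outer_lr=0.01):
    """One first-order MAML-style outer update on a linear least-squares model.

    Each task is (X, y); the inner loop adapts w with one gradient step and the
    outer loop uses the gradient at the adapted parameters (a first-order
    approximation), on a toy model rather than a signed distance network.
    """
    meta_grad = np.zeros_like(w)
    for X, y in tasks:
        grad = X.T @ (X @ w - y) / len(y)              # inner-loop gradient
        w_task = w - inner_lr * grad                   # task-adapted parameters
        meta_grad += X.T @ (X @ w_task - y) / len(y)   # first-order outer gradient
    return w - outer_lr * meta_grad / len(tasks)

rng = np.random.default_rng(0)
tasks = [(rng.normal(size=(16, 3)), rng.normal(size=16)) for _ in range(4)]
w = np.zeros(3)
w = maml_step(w, tasks)
print(w)
```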
- Space of Functions Computed by Deep-Layered Machines [74.13735716675987]
We study the space of functions computed by random-layered machines, including deep neural networks and Boolean circuits.
Investigating the distribution of Boolean functions computed by the recurrent and the layer-dependent architectures, we find that it is the same for both models.
arXiv Detail & Related papers (2020-04-19T18:31:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.