word2vec, node2vec, graph2vec, X2vec: Towards a Theory of Vector
Embeddings of Structured Data
- URL: http://arxiv.org/abs/2003.12590v1
- Date: Fri, 27 Mar 2020 18:23:55 GMT
- Title: word2vec, node2vec, graph2vec, X2vec: Towards a Theory of Vector
Embeddings of Structured Data
- Authors: Martin Grohe
- Abstract summary: We propose two theoretical approaches for understanding the foundations of vector embeddings.
We draw connections between the various approaches and suggest directions for future research.
- Score: 2.63067287928779
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vector representations of graphs and relational structures, whether
hand-crafted feature vectors or learned representations, enable us to apply
standard data analysis and machine learning techniques to the structures. A
wide range of methods for generating such embeddings have been studied in the
machine learning and knowledge representation literature. However, vector
embeddings have received relatively little attention from a theoretical point
of view.
Starting with a survey of embedding techniques that have been used in
practice, in this paper we propose two theoretical approaches that we see as
central for understanding the foundations of vector embeddings. We draw
connections between the various approaches and suggest directions for future
research.
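As a concrete illustration of the kind of embedding techniques the survey begins with, the sketch below applies word2vec to random walks over a graph, in the style of DeepWalk/node2vec. It is a minimal sketch, not code from the paper: it assumes the third-party packages networkx and gensim are available, uses uniform rather than biased (node2vec-style) walks, and all parameter values are illustrative.

```python
# Minimal DeepWalk-style node embedding sketch (illustrative only).
# Uniform random walks over a graph are treated as "sentences" and fed to a
# word2vec skip-gram model, so every node receives a vector in R^d.
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(G, walks_per_node=10, walk_length=20, seed=0):
    """Generate uniform random walks; each walk is a list of node tokens."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in G.nodes():
            walk = [start]
            while len(walk) < walk_length:
                neighbours = list(G.neighbors(walk[-1]))
                if not neighbours:
                    break
                walk.append(rng.choice(neighbours))
            walks.append([str(v) for v in walk])  # gensim expects string tokens
    return walks

G = nx.karate_club_graph()                     # small standard example graph
walks = random_walks(G)
model = Word2Vec(sentences=walks, vector_size=32, window=5,
                 min_count=0, sg=1, epochs=5, workers=1)

print(model.wv["0"].shape)                     # (32,): embedding of node 0
print(model.wv.most_similar("0", topn=3))      # nearest nodes in embedding space
```

Graph-level methods such as graph2vec follow the same pattern but treat whole graphs, represented by their substructure features, as the documents to be embedded.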
Related papers
- Dissecting embedding method: learning higher-order structures from data [0.0]
Geometric deep learning methods for learning from data often include a set of assumptions on the geometry of the feature space.
These assumptions, together with the data being discrete and finite, can lead to generalisations that are likely to produce wrong interpretations of the data and of model outputs.
arXiv Detail & Related papers (2024-10-14T08:19:39Z)
- Rule-Guided Joint Embedding Learning over Knowledge Graphs [6.831227021234669]
This paper introduces a novel model that incorporates both contextual and literal information into entity and relation embeddings.
For contextual information, we assess its significance through confidence and relatedness metrics.
We validate our model's performance with thorough experiments on two established benchmark datasets.
arXiv Detail & Related papers (2023-12-01T19:58:31Z)
- From axioms over graphs to vectors, and back again: evaluating the properties of graph-based ontology embeddings [78.217418197549]
One approach to generating embeddings is to introduce a set of nodes and edges for named entities and for the structure of logical axioms.
Methods that embed ontologies into graphs (graph projections) have different properties depending on the types of axioms they utilize.
arXiv Detail & Related papers (2023-03-29T08:21:49Z)
- Linear Spaces of Meanings: Compositional Structures in Vision-Language Models [110.00434385712786]
We investigate compositional structures in data embeddings from pre-trained vision-language models (VLMs).
We first present a framework for understanding compositional structures from a geometric perspective.
We then explain what these structures entail probabilistically in the case of VLM embeddings, providing intuitions for why they arise in practice.
arXiv Detail & Related papers (2023-02-28T08:11:56Z)
- Fair Interpretable Representation Learning with Correction Vectors [60.0806628713968]
We propose a new framework for fair representation learning that is centered around the learning of "correction vectors".
We show experimentally that several fair representation learning models constrained in such a way do not exhibit losses in ranking or classification performance.
arXiv Detail & Related papers (2022-02-07T11:19:23Z)
- AttrE2vec: Unsupervised Attributed Edge Representation Learning [22.774159996012276]
This paper proposes a novel unsupervised inductive method called AttrE2vec, which learns a low-dimensional vector representation for edges in attributed networks.
Experimental results show that, compared to contemporary approaches, our method builds more powerful edge vector representations.
arXiv Detail & Related papers (2020-12-29T12:20:49Z)
- Quiver Signal Processing (QSP) [145.6921439353007]
We state the basics for a signal processing framework on quiver representations.
We propose a signal processing framework that allows us to handle heterogeneous multidimensional information in networks.
arXiv Detail & Related papers (2020-10-22T08:40:15Z)
- Semi-supervised Learning by Latent Space Energy-Based Model of Symbol-Vector Coupling [55.866810975092115]
We propose a latent space energy-based prior model for semi-supervised learning.
We show that our method performs well on semi-supervised learning tasks.
arXiv Detail & Related papers (2020-10-19T09:55:14Z)
- Hierarchical and Unsupervised Graph Representation Learning with Loukas's Coarsening [9.12816196758482]
We propose a novel algorithm for unsupervised graph representation learning on attributed graphs.
We show that our algorithm is competitive with the state of the art among unsupervised representation learning methods.
arXiv Detail & Related papers (2020-07-07T12:04:38Z)
- Machine learning-based classification of vector vortex beams [48.7576911714538]
We show a new, flexible experimental approach to the classification of vector vortex beams.
We first describe a platform for generating arbitrary complex vector vortex beams, inspired by photonic quantum walks.
We then exploit recent machine learning methods to recognize and classify specific polarization patterns.
arXiv Detail & Related papers (2020-05-16T10:58:49Z)
- Analyzing Knowledge Graph Embedding Methods from a Multi-Embedding Interaction Perspective [3.718476964451589]
Real-world knowledge graphs are usually incomplete, so knowledge graph embedding methods have been proposed to address this issue.
These methods represent entities and relations as embedding vectors in semantic space and predict the links between them.
We propose a new multi-embedding model based on quaternion algebra and show that it achieves promising results on popular benchmarks (a generic scoring sketch for such models follows below).
arXiv Detail & Related papers (2019-03-27T13:09:16Z)
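The last entry above describes the common setup of knowledge graph embedding: entities and relations are represented as vectors, and a scoring function ranks candidate links. As a generic sketch of that setup, the snippet below uses the classic TransE translation score rather than the quaternion-based model proposed in that paper; the entities, relation, and dimensions are purely illustrative, and the embeddings are left untrained.

```python
# Generic knowledge-graph-embedding scoring sketch (illustrative only; this is
# the classic TransE score, not the quaternion model of the entry above).
# A triple (head, relation, tail) is plausible when head + relation ~ tail.
import numpy as np

rng = np.random.default_rng(0)
dim = 16
entities = ["berlin", "germany", "paris", "france"]   # toy vocabulary
relations = ["capital_of"]

# Randomly initialised embeddings; in practice they are trained so that true
# triples score better than corrupted (negative) ones.
E = {e: rng.normal(size=dim) for e in entities}
R = {r: rng.normal(size=dim) for r in relations}

def transe_score(h, r, t):
    """Lower is better: distance between the translated head and the tail."""
    return float(np.linalg.norm(E[h] + R[r] - E[t]))

# Link prediction: rank candidate tails for ("berlin", "capital_of", ?).
ranking = sorted(entities, key=lambda t: transe_score("berlin", "capital_of", t))
print(ranking)   # with trained embeddings, "germany" would rank first
```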