NodePiece: Compositional and Parameter-Efficient Representations of
Large Knowledge Graphs
- URL: http://arxiv.org/abs/2106.12144v1
- Date: Wed, 23 Jun 2021 03:51:03 GMT
- Title: NodePiece: Compositional and Parameter-Efficient Representations of
Large Knowledge Graphs
- Authors: Mikhail Galkin, Jiapeng Wu, Etienne Denis, William L. Hamilton
- Abstract summary: We propose NodePiece, an anchor-based approach to learn a fixed-size entity vocabulary.
In NodePiece, a vocabulary of subword/sub-entity units is constructed from anchor nodes in a graph with known relation types.
Experiments show that NodePiece performs competitively in node classification, link prediction, and relation prediction tasks.
- Score: 15.289356276538662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional representation learning algorithms for knowledge graphs (KGs) map
each entity to a unique embedding vector. Such a shallow lookup results in a
linear growth of memory consumption for storing the embedding matrix and incurs
high computational costs when working with real-world KGs. Drawing parallels
with subword tokenization commonly used in NLP, we explore the landscape of
more parameter-efficient node embedding strategies with possibly sublinear
memory requirements. To this end, we propose NodePiece, an anchor-based
approach to learn a fixed-size entity vocabulary. In NodePiece, a vocabulary of
subword/sub-entity units is constructed from anchor nodes in a graph with known
relation types. Given such a fixed-size vocabulary, it is possible to bootstrap
an encoding and embedding for any entity, including those unseen during
training. Experiments show that NodePiece performs competitively in node
classification, link prediction, and relation prediction tasks while retaining
less than 10% of explicit nodes in a graph as anchors and often having 10x
fewer parameters.
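To make the mechanism concrete, here is a minimal Python sketch of anchor-based tokenization. The toy graph, random anchor selection, and sum-pooling encoder are illustrative assumptions; the paper studies smarter anchor-selection strategies and a learned encoder.

```python
import numpy as np
from collections import deque

# Toy KG: (head, relation, tail) triples.
triples = [(0, "likes", 1), (1, "knows", 2), (2, "likes", 3),
           (3, "knows", 4), (4, "likes", 0), (1, "likes", 3)]
num_nodes = 5
relations = sorted({r for _, r, _ in triples})

# Undirected adjacency, used to measure hop distance to anchors.
adj = {i: set() for i in range(num_nodes)}
for h, _, t in triples:
    adj[h].add(t); adj[t].add(h)

def bfs_dists(src):
    """Hop distance from src to every reachable node."""
    dist = {src: 0}; q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1; q.append(v)
    return dist

# 1) Pick a small anchor set (random here, for illustration only).
rng = np.random.default_rng(0)
anchors = sorted(rng.choice(num_nodes, size=2, replace=False).tolist())
anchor_dists = {a: bfs_dists(a) for a in anchors}

# 2) Tokenize each node as its k nearest anchors plus its outgoing
#    relation types -- a fixed-size vocabulary shared by all entities.
k = 2
def tokenize(node):
    nearest = sorted(anchors, key=lambda a: anchor_dists[a].get(node, 10**9))[:k]
    rels = sorted({r for h, r, _ in triples if h == node})
    return nearest, rels

# 3) Encode: pool anchor and relation embeddings into one node vector
#    (sum-pooling stands in for the paper's learned encoder).
dim = 8
anchor_emb = {a: rng.normal(size=dim) for a in anchors}
rel_emb = {r: rng.normal(size=dim) for r in relations}

def encode(node):
    nearest, rels = tokenize(node)
    parts = [anchor_emb[a] for a in nearest] + [rel_emb[r] for r in rels]
    return np.sum(parts, axis=0)

print(tokenize(2), encode(2).shape)  # works for any node, seen or unseen
```

Because only anchors and relations carry explicit embeddings, memory grows with the vocabulary size rather than the number of entities, which is where the sublinear scaling comes from.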
Related papers
- You do not have to train Graph Neural Networks at all on text-attributed graphs [25.044734252779975]
We introduce TrainlessGNN, a linear GNN model capitalizing on the observation that text encodings from the same class often cluster together in a linear subspace.
Our experiments reveal that our trainless models can either match or even surpass their conventionally trained counterparts.
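As a rough illustration of that observation (not the paper's exact construction), a linear classifier whose weight rows are simply the per-class mean encodings can already label points without any gradient training:

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, dim = 3, 16

# Toy "text encodings": each class clusters around its own direction.
centers = rng.normal(size=(num_classes, dim))
X_train = np.vstack([c + 0.1 * rng.normal(size=(20, dim)) for c in centers])
y_train = np.repeat(np.arange(num_classes), 20)

# "Trainless" linear model: weight matrix = per-class mean encodings.
W = np.vstack([X_train[y_train == c].mean(axis=0) for c in range(num_classes)])

# Predict by picking the class whose mean is most aligned with the input.
X_test = centers + 0.1 * rng.normal(size=(num_classes, dim))
pred = np.argmax(X_test @ W.T, axis=1)
print(pred)  # expect [0 1 2]: each point aligns best with its own class mean
```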
arXiv Detail & Related papers (2024-04-17T02:52:11Z) - EntailE: Introducing Textual Entailment in Commonsense Knowledge Graph
Completion [54.12709176438264]
Commonsense knowledge graphs (CSKGs) utilize free-form text to represent named entities, short phrases, and events as their nodes.
Current methods leverage semantic similarities to increase the graph density, but the semantic plausibility of the nodes and their relations is under-explored.
We propose to adopt textual entailment to find implicit entailment relations between CSKG nodes, to effectively densify the subgraph connecting nodes within the same conceptual class.
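A hedged sketch of that densification loop: the lexical-overlap scorer below is only a placeholder where an actual NLI model (e.g., one fine-tuned on MNLI) would return entailment probabilities, and the node texts and threshold are invented for illustration.

```python
from itertools import combinations

# Toy CSKG nodes described by free-form text.
nodes = {0: "a person drinks hot coffee",
         1: "a person drinks a beverage",
         2: "a dog chases a ball"}

def entailment_score(premise, hypothesis):
    """Stand-in scorer: a real pipeline would query an NLI model here
    and use its P(entailment). Crude lexical overlap as a placeholder."""
    p, h = set(premise.split()), set(hypothesis.split())
    return len(p & h) / len(p | h)

# Densify: add an implicit edge when entailment confidence is high enough.
threshold = 0.3
new_edges = [(a, b) for a, b in combinations(nodes, 2)
             if entailment_score(nodes[a], nodes[b]) >= threshold]
print(new_edges)  # -> [(0, 1)]
```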
arXiv Detail & Related papers (2024-02-15T02:27:23Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
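For intuition on why kernelization matters, the sketch below uses a generic positive feature map (a stand-in, not the paper's Gumbel-Softmax operator) to propagate signals between all node pairs at cost linear in the number of nodes, never materializing the N x N attention matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 1000, 32                      # nodes, feature dimension
X = rng.normal(size=(N, d))

# Positive feature map (elu + 1): a common kernel trick that lets
# softmax-style all-pair attention be rewritten in linear time.
def phi(z):
    return np.where(z > 0, z + 1.0, np.exp(z))

Q = phi(X @ rng.normal(size=(d, d)))
K = phi(X @ rng.normal(size=(d, d)))
V = X

# All-pair message passing without the N x N matrix:
KV = K.T @ V                         # (d, d) summary of keys and values
norm = Q @ K.sum(axis=0)             # (N,) per-node normalizer
out = (Q @ KV) / norm[:, None]       # (N, d) propagated node signals
print(out.shape)
```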
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Conversational Semantic Parsing using Dynamic Context Graphs [68.72121830563906]
We consider the task of conversational semantic parsing over general-purpose knowledge graphs (KGs) with millions of entities and thousands of relation types.
We focus on models which are capable of interactively mapping user utterances into executable logical forms.
arXiv Detail & Related papers (2023-05-04T16:04:41Z)
- Embedding Compression with Hashing for Efficient Representation Learning in Large-Scale Graph [21.564894767364397]
Graph neural networks (GNNs) are deep learning models designed specifically for graph data.
We develop a node embedding compression method where each node is compactly represented with a bit vector instead of a floating-point vector.
We show that the proposed node embedding compression method achieves superior performance compared to the alternatives.
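The general idea can be sketched with random-projection hashing (the paper's learned hashing scheme may differ): each node's floating-point vector is reduced to a packed bit vector, and similarity is recovered from Hamming distance.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim, bits = 10_000, 128, 64

emb = rng.normal(size=(num_nodes, dim)).astype(np.float32)

# Random-projection hashing: each node becomes a 64-bit code
# (signs of projections), cutting storage from 128 floats to 64 bits.
proj = rng.normal(size=(dim, bits))
codes = emb @ proj > 0                       # boolean (num_nodes, bits)
packed = np.packbits(codes, axis=1)          # 8 bytes per node

def hamming_sim(i, j):
    """Similarity between two nodes from their bit codes."""
    xor = np.unpackbits(packed[i] ^ packed[j])
    return 1.0 - xor.sum() / bits

print(f"{emb.nbytes} bytes -> {packed.nbytes} bytes; "
      f"sim(0, 1) = {hamming_sim(0, 1):.2f}")
```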
arXiv Detail & Related papers (2022-08-11T05:43:39Z)
- GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings [51.82434518719011]
GNNAutoScale (GAS) is a framework for scaling arbitrary message-passing GNNs to large graphs.
GAS prunes entire sub-trees of the computation graph by utilizing historical embeddings from prior training iterations.
GAS reaches state-of-the-art performance on large-scale graphs.
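A minimal sketch of the historical-embedding trick, assuming a toy mean-aggregation layer on a ring graph: out-of-batch neighbors are served from a cache of embeddings computed in earlier iterations, so their computation sub-trees never need to be expanded.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim = 8, 4
adj = {i: [(i - 1) % num_nodes, (i + 1) % num_nodes] for i in range(num_nodes)}

X = rng.normal(size=(num_nodes, dim))
history = X.copy()   # cached layer outputs from earlier iterations

def layer_minibatch(batch):
    """One mean-aggregation layer computed only for `batch` nodes.
    Out-of-batch neighbors are read from the history cache instead of
    being recomputed, pruning their computation sub-trees."""
    out = {}
    for u in batch:
        neigh = [X[v] if v in batch else history[v] for v in adj[u]]
        out[u] = np.mean([X[u]] + neigh, axis=0)
    for u, h in out.items():         # refresh the cache with fresh values
        history[u] = h
    return out

print(layer_minibatch({0, 1, 2}).keys())
```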
arXiv Detail & Related papers (2021-06-10T09:26:56Z)
- COLOGNE: Coordinated Local Graph Neighborhood Sampling [1.6498361958317633]
Replacing discrete unordered objects such as graph nodes with real-valued vectors is at the heart of many approaches to learning from graph data.
We address the problem of learning discrete node embeddings such that the coordinates of the node vector representations are graph nodes.
This opens the door to designing interpretable machine learning algorithms for graphs as all attributes originally present in the nodes are preserved.
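A hedged sketch of coordinated neighborhood sampling via min-hashing, one plausible instantiation rather than the paper's exact sampler: because every node draws from the same random priorities, matching coordinates signal overlapping neighborhoods, and each coordinate is itself a concrete graph node.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim = 6, 4
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}

# One shared random priority per (hash function, node). Sharing the SAME
# priorities across all nodes is what makes the sampling "coordinated".
priority = rng.random(size=(dim, num_nodes))

def embed(u):
    """Discrete embedding: coordinate i is the min-hash winner among
    u's closed neighborhood, i.e., an actual graph node id."""
    hood = [u] + adj[u]
    return [min(hood, key=lambda v: priority[i, v]) for i in range(dim)]

# Overlap between the discrete vectors estimates neighborhood similarity,
# and every coordinate stays interpretable: it names a real node.
print(embed(0), embed(1))
```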
arXiv Detail & Related papers (2021-02-09T11:39:06Z)
- Towards Efficient Scene Understanding via Squeeze Reasoning [71.1139549949694]
We propose a novel framework called Squeeze Reasoning.
Instead of propagating information on the spatial map, we first learn to squeeze the input feature into a channel-wise global vector.
We show that our approach can be modularized as an end-to-end trained block and can be easily plugged into existing networks.
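A minimal sketch of the squeeze-reason-broadcast pattern, with a random two-layer MLP standing in for the paper's reasoning module:

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 16, 8, 8
feat = rng.normal(size=(C, H, W)).astype(np.float32)

# 1) Squeeze: pool the spatial map into one channel-wise global vector,
#    so reasoning runs over C values instead of C * H * W.
v = feat.mean(axis=(1, 2))                     # (C,)

# 2) Reason on the vector (tiny MLP with random weights, illustration only).
W1, W2 = rng.normal(size=(C, C)), rng.normal(size=(C, C))
v = np.tanh(v @ W1) @ W2

# 3) Broadcast back: modulate the original feature map channel-wise.
gate = 1.0 / (1.0 + np.exp(-v))                # sigmoid gate per channel
out = feat * gate[:, None, None]
print(out.shape)                               # drop-in block, same shape out
```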
arXiv Detail & Related papers (2020-11-06T12:17:01Z)
- Iterative Context-Aware Graph Inference for Visual Dialog [126.016187323249]
We propose a novel Context-Aware Graph (CAG) neural network.
Each node in the graph corresponds to a joint semantic feature, including both object-based (visual) and history-related (textual) context representations.
arXiv Detail & Related papers (2020-04-05T13:09:37Z)
- Gossip and Attend: Context-Sensitive Graph Representation Learning [0.5493410630077189]
Graph representation learning (GRL) is a powerful technique for learning low-dimensional vector representation of high-dimensional and often sparse graphs.
We propose GOAT, a context-sensitive algorithm inspired by gossip communication that applies a mutual attention mechanism purely over the structure of the graph.
arXiv Detail & Related papers (2020-03-30T18:23:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.