Binder: Hierarchical Concept Representation through Order Embedding of Binary Vectors
- URL: http://arxiv.org/abs/2404.10924v1
- Date: Tue, 16 Apr 2024 21:52:55 GMT
- Title: Binder: Hierarchical Concept Representation through Order Embedding of Binary Vectors
- Authors: Croix Gyurek, Niloy Talukder, Mohammad Al Hasan
- Abstract summary: We propose Binder, a novel approach for order-based representation.
Binder uses binary vectors for embedding, so the embedding vectors are compact, with an order-of-magnitude smaller footprint than other methods.
- Score: 3.9271338080639753
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For natural language understanding and generation, embedding concepts using an order-based representation is an essential task. Unlike traditional point-vector-based representations, an order-based representation imposes geometric constraints on the representation vectors to explicitly capture various semantic relationships that may exist between a pair of concepts. The existing literature offers several order-based embedding approaches, mostly focused on capturing hierarchical relationships; examples include vectors in Euclidean space, complex embedding, hyperbolic embedding, order embedding, and box embedding. Box embedding creates a region-based, rich representation of concepts, but in the process it sacrifices simplicity, requiring a custom-made optimization scheme to learn the representation. Hyperbolic embedding improves embedding quality by exploiting the ever-expanding property of hyperbolic space, but it suffers the same fate as box embedding, since gradient-descent-style optimization is not simple in hyperbolic space. In this work, we propose Binder, a novel approach for order-based representation. Binder uses binary vectors for embedding, so the embedding vectors are compact, with an order-of-magnitude smaller footprint than other methods. Binder uses a simple and efficient optimization scheme for learning representation vectors, with linear time complexity. Our comprehensive experimental results show that Binder is very accurate, yielding competitive results on the representation task. Binder also stands out from its competitors on the transitive-closure link prediction task, as it can learn concept embeddings from the direct edges alone, whereas all existing order-based approaches rely on the indirect edges.
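The order check at the heart of such a scheme reduces to bitwise operations. Below is a minimal sketch, assuming the convention that a hyponym's bit set contains its hypernym's bits (the paper may orient the order the other way); the vectors and dimensionality are toys.

```python
# Minimal sketch of an order check on binary embeddings, in the spirit of
# Binder. Assumption (not from the abstract): "x is-a y" holds when y's set
# bits are a subset of x's set bits; the paper may use the reverse convention.
import numpy as np

def is_a(x: np.ndarray, y: np.ndarray) -> bool:
    """True if every bit set in y is also set in x (bitwise containment)."""
    return bool(np.all((x & y) == y))

# Toy 8-bit embeddings: "dog" refines "mammal", which refines "animal".
animal = np.array([1, 0, 0, 0, 1, 0, 0, 0], dtype=np.uint8)
mammal = np.array([1, 0, 1, 0, 1, 0, 0, 0], dtype=np.uint8)
dog    = np.array([1, 1, 1, 0, 1, 0, 0, 1], dtype=np.uint8)

assert is_a(dog, mammal) and is_a(mammal, animal)
assert is_a(dog, animal)      # containment is transitive for free
assert not is_a(animal, dog)  # the order is asymmetric
```

Because each dimension is a single bit, 64 dimensions fit in one machine word (e.g., via np.packbits), which is where the footprint savings over float-valued embeddings come from.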
Related papers
- AdaContour: Adaptive Contour Descriptor with Hierarchical Representation [52.381359663689004]
Existing angle-based contour descriptors suffer from lossy representation for non-star shapes.
AdaCon represents shapes more accurately and more robustly than other descriptors.
arXiv Detail & Related papers (2024-04-12T07:30:24Z)
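The lossiness claim above has a simple geometric reading: an angle-based (radial) descriptor stores one radius per angle, so every ray from the reference point must cross the boundary exactly once, which fails for non-star shapes. The sketch below, with purely illustrative geometry, counts boundary crossings for a U-shaped polygon.

```python
# Why angle-based descriptors are lossy for non-star shapes: a radial
# descriptor keeps one radius r(theta) per angle. For the concave polygon
# below, some rays cross the boundary several times, so a single r(theta)
# cannot encode the shape. Geometry here is illustrative, not the paper's.
import numpy as np

def ray_crossings(poly, origin, theta):
    """Count boundary crossings of a ray from origin at angle theta."""
    d = np.array([np.cos(theta), np.sin(theta)])
    hits = 0
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        e = b - a
        denom = -d[0] * e[1] + d[1] * e[0]     # solve origin + t*d = a + s*e
        if abs(denom) < 1e-12:
            continue                           # ray parallel to this edge
        rhs = a - origin
        t = (-rhs[0] * e[1] + rhs[1] * e[0]) / denom
        s = (d[0] * rhs[1] - d[1] * rhs[0]) / denom
        if t > 0 and 0 <= s < 1:
            hits += 1
    return hits

# A "U"-shaped (non-star) polygon with the reference point in its base.
poly = np.array([[-2, -1], [2, -1], [2, 2], [1, 2], [1, 0],
                 [-1, 0], [-1, 2], [-2, 2]], dtype=float)
origin = np.array([0.0, -0.5])
theta = np.arctan2(2.5, 1.5)                   # aim into the right arm
print(ray_crossings(poly, origin, theta))      # 3: r(theta) is multivalued
```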
- Explainable Trajectory Representation through Dictionary Learning [7.567576186354494]
Trajectory representation learning on a network enhances our understanding of vehicular traffic patterns.
Existing approaches using classic machine learning or deep learning embed trajectories as dense vectors, which lack interpretability.
This paper proposes an explainable trajectory representation learning framework through dictionary learning.
arXiv Detail & Related papers (2023-12-13T10:59:54Z)
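A minimal sketch of the interpretability argument above, under the common assumption (not stated in the abstract) that a trajectory is an indicator vector over road-network edges and the dictionary holds reusable "pathlets"; the atoms, data, and greedy pursuit are illustrative.

```python
# Sparse-coding a trajectory against a pathlet dictionary: the code says
# *which* pathlets compose the trip, unlike an opaque dense embedding.
import numpy as np

# 6 road edges; 3 pathlet atoms (columns), each a short run of edges.
D = np.array([[1, 0, 0],
              [1, 0, 0],
              [0, 1, 0],
              [0, 1, 0],
              [0, 0, 1],
              [0, 0, 1]], dtype=float)

traj = np.array([1, 1, 1, 1, 0, 0], dtype=float)  # trip uses edges 0-3

# Greedy matching pursuit: pick atoms until the residual is explained.
code = np.zeros(D.shape[1])
residual = traj.copy()
for _ in range(2):
    k = int(np.argmax(D.T @ residual))
    code[k] = 1.0
    residual = np.clip(residual - D[:, k], 0.0, None)

print(code)  # [1. 1. 0.] -> "pathlet 0 then pathlet 1": a readable explanation
```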
- Efficient Link Prediction via GNN Layers Induced by Negative Sampling [92.05291395292537]
Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories.
First, node-wise architectures pre-compute individual embeddings for each node, which a simple decoder later combines to make predictions.
Second, edge-wise methods rely on the formation of edge-specific subgraph embeddings to enrich the representation of pair-wise relationships.
arXiv Detail & Related papers (2023-10-14T07:02:54Z)
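A toy sketch contrasting the two decoder families just described; both scorers are stand-ins chosen for brevity, not the paper's architecture.

```python
# Node-wise: score a pair from precomputed, independent node embeddings.
# Edge-wise: build a pair-specific representation (here, from the common
# neighborhood) before scoring.
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 8))          # precomputed embeddings for 5 nodes

def node_wise_score(u: int, v: int) -> float:
    """Simple decoder over independent node embeddings (dot product)."""
    return float(Z[u] @ Z[v])

def edge_wise_score(u: int, v: int, neighbors: dict) -> float:
    """Pair-specific feature: aggregate the two nodes' shared neighborhood."""
    common = neighbors[u] & neighbors[v]
    pair_repr = Z[list(common)].sum(axis=0) if common else np.zeros(8)
    return float(pair_repr @ (Z[u] + Z[v]))

neighbors = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
print(node_wise_score(0, 1), edge_wise_score(0, 1, neighbors))
```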
- Learning Vector-Quantized Item Representation for Transferable Sequential Recommenders [33.406897794088515]
VQ-Rec is a novel approach to learning vector-quantized item representations for transferable sequential recommenders.
We propose an enhanced contrastive pre-training approach, using semi-synthetic and mixed-domain code representations as hard negatives.
arXiv Detail & Related papers (2022-10-22T00:43:14Z)
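A sketch of the vector-quantization step behind code-based item representations: an item's text embedding is mapped to discrete codes by nearest-centroid lookup in per-group codebooks. The product-quantization split and sizes are assumptions for illustration, not VQ-Rec's exact configuration.

```python
# Quantize an item's text embedding into a short tuple of discrete codes;
# the codes, not the raw text vector, represent the item downstream.
import numpy as np

rng = np.random.default_rng(1)
D, GROUPS, CODES = 8, 2, 4                  # dim, sub-spaces, codes per group
codebooks = rng.normal(size=(GROUPS, CODES, D // GROUPS))

def quantize(text_emb: np.ndarray) -> np.ndarray:
    """Assign each sub-vector to its nearest codebook entry."""
    parts = text_emb.reshape(GROUPS, D // GROUPS)
    dists = np.linalg.norm(codebooks - parts[:, None, :], axis=-1)
    return np.argmin(dists, axis=-1)        # one discrete code per group

item_text_emb = rng.normal(size=D)          # stand-in for a BERT-style encoding
codes = quantize(item_text_emb)
item_repr = np.concatenate([codebooks[g, c] for g, c in enumerate(codes)])
print(codes, item_repr.shape)               # e.g. [2 0] (8,)
```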
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
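A worked toy example of the implicit-differentiation idea: at a fixed point z* = f(z*, w), the gradient dz*/dw follows from the implicit function theorem with no backpropagation through the refinement loop. The scalar map f is illustrative, not the paper's model.

```python
# Differentiate through a fixed point without unrolling the iterations.
import numpy as np

def f(z, w, x=0.3):
    return np.tanh(w * z + x)

def solve_fixed_point(w, iters=200):
    z = 0.0
    for _ in range(iters):       # plain iterative refinement to convergence
        z = f(z, w)
    return z

w = 0.7
z_star = solve_fixed_point(w)

# Implicit function theorem: dz/dw = (df/dw) / (1 - df/dz) at the fixed point.
sech2 = 1.0 - np.tanh(w * z_star + 0.3) ** 2
dz_dw = (z_star * sech2) / (1.0 - w * sech2)

# A finite-difference check through the full solver agrees closely.
eps = 1e-6
fd = (solve_fixed_point(w + eps) - solve_fixed_point(w - eps)) / (2 * eps)
print(dz_dw, fd)
```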
- Towards Efficient Scene Understanding via Squeeze Reasoning [71.1139549949694]
We propose a novel framework called Squeeze Reasoning.
Instead of propagating information on the spatial map, we first learn to squeeze the input feature into a channel-wise global vector.
We show that our approach can be modularized as an end-to-end trained block and can be easily plugged into existing networks.
arXiv Detail & Related papers (2020-11-06T12:17:01Z)
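A sketch of the squeeze-then-broadcast pattern described above; the single-matrix "reasoning" step is a placeholder for the paper's module, but it shows why the reasoning cost is independent of the spatial resolution.

```python
# Collapse the H x W map to one channel-wise global vector, transform it in
# channel space, then broadcast it back to reweight the feature map.
import numpy as np

rng = np.random.default_rng(2)
C, H, W = 16, 8, 8
feat = rng.normal(size=(C, H, W))
W_r = rng.normal(size=(C, C)) * 0.1           # toy channel-interaction weights

squeezed = feat.mean(axis=(1, 2))             # (C,) vector: no spatial propagation
reasoned = np.tanh(W_r @ squeezed)            # interactions live in channel space
out = feat * (1.0 + reasoned[:, None, None])  # broadcast back over the map

print(squeezed.shape, out.shape)              # (16,) (16, 8, 8)
```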
- RatE: Relation-Adaptive Translating Embedding for Knowledge Graph Completion [51.64061146389754]
We propose a relation-adaptive translation function built upon a novel weighted product in complex space.
We then present our Relation-adaptive translating Embedding (RatE) approach to score each graph triple.
arXiv Detail & Related papers (2020-10-10T01:30:30Z)
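A hedged sketch of a relation-adaptive weighted product in complex space: ordinary complex multiplication fixes the coefficients of the four (Re, Im) cross terms, and making those coefficients learnable per relation is the flavor of RatE's weighted product. The exact parameterization below is an assumption and may differ from the paper's.

```python
# A per-relation bilinear combination of (Re, Im) cross terms that reduces
# to ordinary complex multiplication for one particular weight setting.
import numpy as np

rng = np.random.default_rng(3)
K = 4                                        # complex embedding dimension

def weighted_product(h, r, w):
    """Weighted product: one coefficient per (Re, Im) cross term."""
    re = w[0]*h.real*r.real + w[1]*h.real*r.imag + w[2]*h.imag*r.real + w[3]*h.imag*r.imag
    im = w[4]*h.real*r.real + w[5]*h.real*r.imag + w[6]*h.imag*r.real + w[7]*h.imag*r.imag
    return re + 1j*im

h, r, t = (rng.normal(size=K) + 1j*rng.normal(size=K) for _ in range(3))
w = np.array([1, 0, 0, -1, 0, 1, 1, 0.0])    # these weights recover h * r exactly
assert np.allclose(weighted_product(h, r, w), h * r)

score = -np.linalg.norm(weighted_product(h, r, w) - t)  # translation-style score
print(score)
```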
- Variable Binding for Sparse Distributed Representations: Theory and Applications [4.150085009901543]
Symbolic reasoning and neural networks are often considered incompatible approaches. Connectionist models known as Vector Symbolic Architectures (VSAs) can potentially bridge this gap.
VSAs encode symbols as dense pseudo-random vectors, with information distributed throughout the entire neuron population.
We show that variable binding between dense vectors in VSAs is mathematically equivalent to tensor product binding between sparse vectors, an operation which increases dimensionality.
arXiv Detail & Related papers (2020-09-14T20:40:09Z)
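A sketch relating the two binding operations in the summary above: dense VSA binding (here, the elementwise product of bipolar codes, one common VSA choice) preserves dimensionality, while tensor-product binding of sparse vectors lands in an n^2-dimensional space.

```python
# Dense VSA binding vs sparse tensor-product binding.
import numpy as np

rng = np.random.default_rng(4)
n = 64

# Dense binding: dimensionality is preserved, and unbinding is exact because
# bipolar codes are self-inverse under the elementwise product.
role   = rng.choice([-1, 1], size=n)
filler = rng.choice([-1, 1], size=n)
bound  = role * filler
assert np.array_equal(bound * role, filler)   # unbind by rebinding with role

# Sparse tensor-product binding: the outer product of two sparse vectors,
# which is where the dimensionality increase in the summary comes from.
s_role, s_filler = np.zeros(n), np.zeros(n)
s_role[[3, 17]] = 1.0
s_filler[[5, 40]] = 1.0
tp = np.outer(s_role, s_filler)               # n x n, i.e. dimension n^2
print(bound.shape, tp.shape)                  # (64,) (64, 64)
```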
- Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies [60.285091454321055]
We design a simple and efficient embedding algorithm that learns a small set of anchor embeddings and a sparse transformation matrix.
On text classification, language modeling, and movie recommendation benchmarks, we show that ANT is particularly suitable for large vocabulary sizes.
arXiv Detail & Related papers (2020-03-18T13:07:51Z)
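A sketch of the factorization described above: the full V x d embedding table is never materialized; each word's embedding is a sparse nonnegative combination of a few anchor embeddings. Sizes and the sparsity pattern are illustrative, not from the paper's experiments.

```python
# Anchor embeddings (small, dense) + sparse per-word weights replace a
# dense V x d table.
import numpy as np

rng = np.random.default_rng(5)
V, A, d = 10_000, 32, 64                  # vocabulary, anchors, embedding dim
anchors = rng.normal(size=(A, d))         # the only dense table stored

# Sparse transformation: each word attaches nonnegative weights to 3 anchors.
rows = np.repeat(np.arange(V), 3)
cols = rng.integers(0, A, size=3 * V)
vals = rng.random(3 * V)

def embed(word_id: int) -> np.ndarray:
    """Weighted sum of this word's few anchors."""
    mask = rows == word_id
    return vals[mask] @ anchors[cols[mask]]

print(embed(42).shape)                    # (64,) with no 10_000 x 64 table
```

The memory saving is the point: 32 x 64 anchor floats plus three (index, weight) pairs per word, instead of 10,000 x 64 dense parameters.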