Binder: Hierarchical Concept Representation through Order Embedding of Binary Vectors
- URL: http://arxiv.org/abs/2404.10924v1
- Date: Tue, 16 Apr 2024 21:52:55 GMT
- Title: Binder: Hierarchical Concept Representation through Order Embedding of Binary Vectors
- Authors: Croix Gyurek, Niloy Talukder, Mohammad Al Hasan
- Abstract summary: We propose Binder, a novel approach for order-based representation.
Binder uses binary vectors for embedding, so the embedding vectors are compact with an order of magnitude smaller footprint than other methods.
- Score: 3.9271338080639753
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For natural language understanding and generation, embedding concepts using an order-based representation is an essential task. Unlike traditional point-vector representations, an order-based representation imposes geometric constraints on the representation vectors to explicitly capture the various semantic relationships that may exist between a pair of concepts. The existing literature proposes several approaches to order-based embedding, mostly focused on capturing hierarchical relationships; examples include embeddings in Euclidean space, complex space, and hyperbolic space, as well as order embeddings and box embeddings. Box embedding creates a rich, region-based representation of concepts, but in the process it sacrifices simplicity, requiring a custom-made optimization scheme for learning the representation. Hyperbolic embedding improves embedding quality by exploiting the ever-expanding property of hyperbolic space, but it suffers the same fate as box embedding, since gradient-descent-style optimization is not simple in hyperbolic space. In this work, we propose Binder, a novel approach for order-based representation. Binder uses binary vectors for embedding, so the embedding vectors are compact, with an order-of-magnitude smaller footprint than other methods. Binder uses a simple and efficient optimization scheme to learn representation vectors in linear time. Our comprehensive experimental results show that Binder is very accurate, yielding competitive results on the representation task. Binder also stands out from its competitors on the transitive-closure link prediction task: it can learn concept embeddings from the direct edges alone, whereas all existing order-based approaches rely on the indirect edges.
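The key mechanism is easy to illustrate. Below is a minimal sketch, not taken from the paper, of how an order relation over binary vectors can encode is-a edges; the direction of the bit-containment test and the example concepts are assumptions made for illustration.

```python
import numpy as np

def is_below(child: np.ndarray, parent: np.ndarray) -> bool:
    """Order relation on binary embedding vectors.

    Assumption (not from the paper): child <= parent holds when every
    bit set in the parent is also set in the child, the usual
    order-embedding convention adapted to {0, 1}^d.
    """
    return bool(np.all(parent <= child))

# Hypothetical 8-bit embeddings: 'dog' specializes 'mammal'.
mammal = np.array([1, 0, 1, 0, 0, 1, 0, 0])
dog    = np.array([1, 1, 1, 0, 0, 1, 1, 0])  # has all of mammal's bits

print(is_below(dog, mammal))    # True: dog is-a mammal
print(is_below(mammal, dog))    # False
print(np.packbits(dog).nbytes)  # 1 byte: 8 binary dims pack per byte
```

The last line hints at the footprint claim: binary embeddings pack eight dimensions per byte, whereas real-valued embeddings spend 4 or 8 bytes per dimension.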
Related papers
- A Geometric Approach to Personalized Recommendation with Set-Theoretic Constraints Using Box Embeddings [43.609405236093025]
We formulate the problem of personalized item recommendation as matrix completion where rows are set-theoretically dependent.
Box embeddings can intuitively be understood as trainable Venn diagrams.
We empirically demonstrate the superiority of box embeddings over vector-based neural methods on both simple and complex item recommendation queries, by up to 30% overall.
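Since the entry describes box embeddings as trainable Venn diagrams, a short sketch of the underlying geometry may help; the function names and the containment-style score below are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def intersection_volume(lo1, hi1, lo2, hi2):
    """Overlap volume of two axis-aligned boxes (0 if disjoint)."""
    side = np.clip(np.minimum(hi1, hi2) - np.maximum(lo1, lo2), 0.0, None)
    return float(np.prod(side))

def containment_score(lo_item, hi_item, lo_user, hi_user):
    """Fraction of the item box inside the user box: the set-theoretic view."""
    vol_item = float(np.prod(hi_item - lo_item))
    return intersection_volume(lo_item, hi_item, lo_user, hi_user) / vol_item

lo_u, hi_u = np.array([0.0, 0.0]), np.array([2.0, 2.0])  # user box
lo_i, hi_i = np.array([1.0, 1.0]), np.array([2.0, 3.0])  # item box
print(containment_score(lo_i, hi_i, lo_u, hi_u))         # 0.5
```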
arXiv Detail & Related papers (2025-02-15T18:18:00Z)
- Learning Structured Representations with Hyperbolic Embeddings [22.95613852886361]
We propose HypStructure: a Hyperbolic Structured regularization approach to accurately embed the label hierarchy into the learned representations.
Experiments on several large-scale vision benchmarks demonstrate the efficacy of HypStructure in reducing distortion.
For a better understanding of structured representation, we perform eigenvalue analysis that links the representation geometry to improved Out-of-Distribution (OOD) detection performance.
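HypStructure's regularizer itself is not reproduced here, but the hyperbolic geometry it builds on is standard; the sketch below computes the geodesic distance in the Poincare ball, where distances grow rapidly near the boundary and tree-like hierarchies embed with low distortion.

```python
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance in the Poincare ball (requires ||u||, ||v|| < 1)."""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq / denom))

root = np.array([0.05, 0.0])  # coarse label near the origin
leaf = np.array([0.95, 0.0])  # fine-grained label near the boundary
print(poincare_distance(root, leaf))  # large despite a small Euclidean gap
```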
arXiv Detail & Related papers (2024-12-02T00:56:44Z)
- AdaContour: Adaptive Contour Descriptor with Hierarchical Representation [52.381359663689004]
Existing angle-based contour descriptors suffer from lossy representation for non-star shapes.
AdaCon is able to represent shapes more accurately and robustly than other descriptors.
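To see why a single-radius-per-angle descriptor is lossy, here is a sketch of the classic scheme the entry criticizes (the baseline, not AdaCon itself); the bin count and the max-radius tie-break are assumptions.

```python
import numpy as np

def angular_descriptor(points: np.ndarray, n_bins: int = 36) -> np.ndarray:
    """Classic angle-based contour descriptor: one radius per angular bin.

    For non-star shapes a ray from the centroid can cross the contour
    several times; keeping a single radius per bin is exactly what makes
    the representation lossy.
    """
    rel = points - points.mean(axis=0)
    angles = np.arctan2(rel[:, 1], rel[:, 0])                 # [-pi, pi)
    radii = np.linalg.norm(rel, axis=1)
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    desc = np.zeros(n_bins)
    for b, r in zip(bins, radii):
        desc[b] = max(desc[b], r)  # later crossings at the same angle are lost
    return desc
```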
arXiv Detail & Related papers (2024-04-12T07:30:24Z)
- Learning Vector-Quantized Item Representation for Transferable Sequential Recommenders [33.406897794088515]
VQ-Rec is a novel approach to learning Vector-Quantized item representations for transferable sequential recommenders.
We propose an enhanced contrastive pre-training approach, using semi-synthetic and mixed-domain code representations as hard negatives.
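The code-based view can be sketched with product quantization: an item's embedding is split into sub-vectors, each snapped to its nearest codebook entry. The shapes and the PQ layout below are illustrative assumptions, not VQ-Rec's exact configuration.

```python
import numpy as np

def quantize(x: np.ndarray, codebooks: np.ndarray) -> np.ndarray:
    """Map a D-dim vector to M discrete codes (product quantization).

    codebooks: (M, K, D // M) array holding K code vectors per sub-space.
    """
    m, k, sub = codebooks.shape
    parts = x.reshape(m, sub)                                # M sub-vectors
    dists = ((codebooks - parts[:, None, :]) ** 2).sum(-1)   # (M, K)
    return dists.argmin(axis=1)                              # code per chunk

rng = np.random.default_rng(0)
codebooks = rng.normal(size=(4, 256, 16))  # M=4 sub-spaces, K=256 codes each
item_codes = quantize(rng.normal(size=64), codebooks)
print(item_codes)  # four integers: the item's transferable discrete identity
```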
arXiv Detail & Related papers (2022-10-22T00:43:14Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
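A scalar toy makes the implicit-differentiation idea concrete: iterate to a fixed point, then get the gradient from the implicit function theorem instead of backpropagating through the iterations. The map `cos` stands in for the paper's learned refinement module.

```python
import numpy as np

def fixed_point(f, x0, a, tol=1e-10, max_iter=500):
    """Iterate x <- f(x, a) until convergence."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x, a)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x, a: np.cos(a * x)   # toy refinement map
a = 1.0
x_star = fixed_point(f, 0.5, a)  # x* satisfies x* = cos(a * x*)

# Implicit function theorem: dx*/da = (df/da) / (1 - df/dx) at the fixed
# point -- constant memory, no unrolling of the iterations.
df_dx = -a * np.sin(a * x_star)
df_da = -x_star * np.sin(a * x_star)
print(df_da / (1.0 - df_dx))
```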
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Temporally-Consistent Surface Reconstruction using Metrically-Consistent Atlases [131.50372468579067]
We propose a method for unsupervised reconstruction of a temporally-consistent sequence of surfaces from a sequence of time-evolving point clouds.
We represent the reconstructed surfaces as atlases computed by a neural network, which enables us to establish correspondences between frames.
Our approach outperforms state-of-the-art methods on several challenging datasets.
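The correspondence mechanism can be sketched in a few lines: every frame's surface is the image of the same 2D parameter grid, so equal (u, v) points correspond across frames. Random linear maps stand in below for the per-frame neural atlases.

```python
import numpy as np

rng = np.random.default_rng(0)
W_t  = rng.normal(size=(3, 2))  # stand-in atlas for frame t
W_t1 = rng.normal(size=(3, 2))  # stand-in atlas for frame t + 1

# Shared parameter grid over the unit square.
u, v = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
uv = np.stack([u, v], axis=-1).reshape(-1, 2)

surf_t  = uv @ W_t.T   # sampled surface at frame t
surf_t1 = uv @ W_t1.T  # row i corresponds to row i of surf_t
```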
arXiv Detail & Related papers (2021-11-12T17:48:25Z)
- Towards Efficient Scene Understanding via Squeeze Reasoning [71.1139549949694]
We propose a novel framework called Squeeze Reasoning.
Instead of propagating information on the spatial map, we first learn to squeeze the input feature into a channel-wise global vector.
We show that our approach can be modularized as an end-to-end trained block and can be easily plugged into existing networks.
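A minimal sketch of the squeeze idea follows, with illustrative shapes and a single linear "reasoning" step standing in for the paper's block:

```python
import numpy as np

def squeeze_reason(feat: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Toy squeeze-and-reason block.

    feat: (C, H, W) feature map. Instead of message passing over the
    H x W spatial map, squeeze to one vector per channel, transform it,
    and broadcast the result back as channel-wise re-weighting.
    """
    z = feat.mean(axis=(1, 2))      # squeeze: channel-wise global vector
    z = np.tanh(W @ z)              # cheap "reasoning" in channel space
    return feat * z[:, None, None]  # re-weight the spatial map

rng = np.random.default_rng(0)
out = squeeze_reason(rng.normal(size=(16, 8, 8)), rng.normal(size=(16, 16)))
print(out.shape)  # (16, 8, 8): shape-preserving, so it plugs into a network
```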
arXiv Detail & Related papers (2020-11-06T12:17:01Z)
- RatE: Relation-Adaptive Translating Embedding for Knowledge Graph Completion [51.64061146389754]
We propose a relation-adaptive translation function built upon a novel weighted product in complex space.
We then present our Relation-adaptive translating Embedding (RatE) approach to score each graph triple.
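A toy version of multiplicative translation in complex space is below; the real RatE weighted product is parameterized differently, so treat the weighting and the scoring function as assumptions.

```python
import numpy as np

def rate_like_score(h, r, t, w):
    """Score a triple (h, r, t) with a weighted complex product.

    h, r, t: complex embedding vectors; w: per-dimension real weights.
    The head is translated multiplicatively by the relation, then scored
    by closeness to the tail (higher is better).
    """
    translated = w * h * r
    return -float(np.linalg.norm(translated - t))

rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=8) + 1j * rng.normal(size=8) for _ in range(3))
print(rate_like_score(h, r, t, w=rng.normal(size=8)))
```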
arXiv Detail & Related papers (2020-10-10T01:30:30Z)
- Variable Binding for Sparse Distributed Representations: Theory and Applications [4.150085009901543]
Symbolic reasoning and neural networks are often considered incompatible approaches. Connectionist models known as Vector Symbolic Architectures (VSAs) can potentially bridge this gap.
VSAs encode symbols by dense pseudo-random vectors, where information is distributed throughout the entire neuron population.
We show that variable binding between dense vectors in VSAs is mathematically equivalent to tensor product binding between sparse vectors, an operation which increases dimensionality.
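Binding is easy to demonstrate with dense bipolar vectors, where the Hadamard product is a common, self-inverse VSA binding; the paper's equivalence result then relates such dense binding to tensor-product binding of sparse vectors, which raises the dimension.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1024
role   = rng.choice([-1, 1], size=d)  # dense pseudo-random bipolar vectors
filler = rng.choice([-1, 1], size=d)

bound = role * filler      # Hadamard binding, stays d-dimensional
recovered = bound * role   # unbind: role * role = 1 elementwise
assert np.array_equal(recovered, filler)

# Tensor-product binding, by contrast, increases dimensionality:
print(np.outer(role, filler).size)  # d * d components
```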
arXiv Detail & Related papers (2020-09-14T20:40:09Z)
- Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies [60.285091454321055]
We design a simple and efficient embedding algorithm that learns a small set of anchor embeddings and a sparse transformation matrix.
On text classification, language modeling, and movie recommendation benchmarks, we show that ANT is particularly suitable for large vocabulary sizes.
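The factorization is compact enough to sketch directly; the sizes and sparsity level below are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.sparse import random as sparse_random

vocab, n_anchors, dim = 10_000, 64, 128
A = np.random.randn(n_anchors, dim)                        # anchor embeddings
T = sparse_random(vocab, n_anchors, density=0.02, format="csr")

E = T @ A  # each row (word embedding) is a sparse mix of anchor rows
# Storage: n_anchors * dim floats plus T.nnz sparse entries,
# far fewer than the dense vocab * dim table.
print(E.shape, T.nnz)
```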
arXiv Detail & Related papers (2020-03-18T13:07:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.