SOLAR: Sparse Orthogonal Learned and Random Embeddings
- URL: http://arxiv.org/abs/2008.13225v1
- Date: Sun, 30 Aug 2020 17:35:35 GMT
- Title: SOLAR: Sparse Orthogonal Learned and Random Embeddings
- Authors: Tharun Medini, Beidi Chen, Anshumali Shrivastava
- Abstract summary: We argue that high-dimensional and ultra-sparse embedding is a significantly superior alternative to dense low-dimensional embedding for both query efficiency and accuracy.
We train 500K dimensional SOLAR embeddings for the tasks of searching through 1.6M books and multi-label classification on the three largest public datasets.
We achieve superior precision and recall compared to the respective state-of-the-art baselines for each task, while being up to 10 times faster.
- Score: 45.920844071257754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dense embedding models are commonly deployed in commercial search engines,
wherein all the document vectors are pre-computed, and near-neighbor search
(NNS) is performed with the query vector to find relevant documents. However,
the bottleneck of indexing a large number of dense vectors and performing an
NNS hurts the query time and accuracy of these models. In this paper, we argue
that high-dimensional and ultra-sparse embedding is a significantly superior
alternative to dense low-dimensional embedding for both query efficiency and
accuracy. Extreme sparsity eliminates the need for NNS by replacing it with
simple lookups, while high dimensionality ensures that the embeddings remain
informative even when sparse. However, learning extremely high-dimensional
embeddings leads to a blow-up in model size. To make the training feasible,
we propose a partitioning algorithm that learns such high dimensional
embeddings across multiple GPUs without any communication. This is facilitated
by our novel asymmetric mixture of Sparse, Orthogonal, Learned and Random
(SOLAR) Embeddings. The label vectors are random, sparse, and near-orthogonal
by design, while the query vectors are learned and sparse. We theoretically
prove that our way of one-sided learning is equivalent to learning both query
and label embeddings. With these unique properties, we can successfully train
500K dimensional SOLAR embeddings for the tasks of searching through 1.6M books
and multi-label classification on the three largest public datasets. We
achieve superior precision and recall compared to the respective
state-of-the-art baselines for each task, while being up to 10 times faster.
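The core mechanism (random, sparse, near-orthogonal label vectors queried via simple lookups instead of NNS) can be illustrated with a small self-contained sketch. The dimensions, sparsity level, and inverted-index scoring below are illustrative assumptions, not the paper's actual training setup:

```python
import random
from collections import defaultdict

random.seed(0)

D = 100_000     # embedding dimensionality (the paper trains up to 500K)
K = 32          # non-zeros per label vector (illustrative choice)
NUM_LABELS = 1_000

def random_sparse_label(dim, k):
    """K-sparse random sign vector: k coordinates set to +/-1, rest zero."""
    coords = random.sample(range(dim), k)
    return {c: random.choice((-1, 1)) for c in coords}

labels = [random_sparse_label(D, K) for _ in range(NUM_LABELS)]

def dot(a, b):
    return sum(v * b[c] for c, v in a.items() if c in b)

# Near-orthogonality by design: two random K-sparse vectors overlap in
# roughly K*K/D coordinates in expectation, so dot products sit near zero.
max_abs_dot = max(abs(dot(labels[i], labels[j]))
                  for i in range(50) for j in range(i + 1, 50))

# Retrieval by lookup instead of near-neighbor search: an inverted index
# maps each coordinate to the labels active there; a sparse query touches
# only its K non-zero coordinates rather than scanning all label vectors.
inverted = defaultdict(list)
for lbl, vec in enumerate(labels):
    for c, v in vec.items():
        inverted[c].append((lbl, v))

def lookup(query):
    scores = defaultdict(int)
    for c, qv in query.items():          # only K index probes
        for lbl, v in inverted[c]:
            scores[lbl] += qv * v
    return max(scores, key=scores.get)

print("max |dot| among random labels:", max_abs_dot)
print("retrieved label for query aligned with label 7:", lookup(labels[7]))
```

A query vector aligned with a given label scores K on that label, while near-orthogonality keeps all cross-scores close to zero, so the argmax lookup recovers the right label without any dense distance computation.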
Related papers
- GleanVec: Accelerating vector search with minimalist nonlinear dimensionality reduction [1.1599570446840546]
Cross-modal retrieval (e.g., where a text query is used to find images) is gaining momentum rapidly.
It is challenging to achieve high accuracy as the queries often have different statistical distributions than the database vectors.
We present new linear and nonlinear methods for dimensionality reduction to accelerate high-dimensional vector search.
arXiv Detail & Related papers (2024-10-14T21:14:27Z)
- The Impacts of Data, Ordering, and Intrinsic Dimensionality on Recall in Hierarchical Navigable Small Worlds [0.09208007322096533]
The investigation focuses on HNSW's efficacy across a spectrum of datasets.
We discover that the recall of approximate HNSW search, in comparison to exact K Nearest Neighbours (KNN) search, is linked to the vector space's intrinsic dimensionality.
We observe that running popular benchmark datasets with HNSW instead of KNN can shift rankings by up to three positions for some models.
arXiv Detail & Related papers (2024-05-28T04:16:43Z)
- Multimodal Learned Sparse Retrieval with Probabilistic Expansion Control [66.78146440275093]
Learned sparse retrieval (LSR) is a family of neural methods that encode queries and documents into sparse lexical vectors.
We explore the application of LSR to the multi-modal domain, with a focus on text-image retrieval.
Current approaches like LexLIP and STAIR require complex multi-step training on massive datasets.
Our proposed approach efficiently transforms dense vectors from a frozen dense model into sparse lexical vectors.
arXiv Detail & Related papers (2024-02-27T14:21:56Z)
- LeanVec: Searching vectors faster by making them fit [1.0863382547662974]
We present LeanVec, a framework that combines linear dimensionality reduction with vector quantization to accelerate similarity search on high-dimensional vectors.
We show that LeanVec produces state-of-the-art results, with up to 3.7x improvement in search throughput and up to 4.9x faster index build time.
arXiv Detail & Related papers (2023-12-26T21:14:59Z)
- Fully Sparse Fusion for 3D Object Detection [69.32694845027927]
Currently prevalent multimodal 3D detection methods are built upon LiDAR-based detectors that usually use dense Bird's-Eye-View feature maps.
Fully sparse architectures are gaining attention because they are highly efficient in long-range perception.
In this paper, we study how to effectively leverage image modality in the emerging fully sparse architecture.
arXiv Detail & Related papers (2023-04-24T17:57:43Z)
- Focal Sparse Convolutional Networks for 3D Object Detection [121.45950754511021]
We introduce two new modules to enhance the capability of Sparse CNNs.
They are focal sparse convolution (Focals Conv) and its multi-modal variant, focal sparse convolution with fusion.
For the first time, we show that spatially learnable sparsity in sparse convolution is essential for sophisticated 3D object detection.
arXiv Detail & Related papers (2022-04-26T17:34:10Z)
- Embracing Structure in Data for Billion-Scale Semantic Product Search [14.962039276966319]
We present principled approaches to train and deploy dyadic neural embedding models at the billion scale.
We show that exploiting the natural structure of real-world datasets helps address both challenges efficiently.
arXiv Detail & Related papers (2021-10-12T16:14:13Z)
- Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data via Differentiable Cross-Approximation [53.95297550117153]
We propose an end-to-end trainable framework that processes large-scale visual data tensors by looking at only a fraction of their entries.
The proposed approach is particularly useful for large-scale multidimensional grid data, and for tasks that require context over a large receptive field.
arXiv Detail & Related papers (2021-05-29T08:39:57Z)
- IRLI: Iterative Re-partitioning for Learning to Index [104.72641345738425]
Existing methods have to trade off high accuracy against load balance and scalability in distributed settings.
We propose a novel approach called IRLI, which iteratively partitions the items by learning the relevant buckets directly from the query-item relevance data.
We mathematically show that IRLI retrieves the correct item with high probability under very natural assumptions and provides superior load balancing.
arXiv Detail & Related papers (2021-03-17T23:13:25Z)
- Learning a Compact State Representation for Navigation Tasks by Autoencoding 2D-Lidar Scans [7.99536002595393]
We generate a compact representation of 2D-lidar scans for reinforcement learning in navigation tasks.
In particular, we incorporate the relation of consecutive scans, especially ego-motion, by applying a memory model.
Experiments show that our approach can highly compress lidar data, maintain a meaningful distribution of the latent space, and even incorporate time-dependent information.
arXiv Detail & Related papers (2021-02-03T16:10:26Z)
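Several of the related papers above (GleanVec, LeanVec) accelerate vector search with linear dimensionality reduction. As a generic illustration of why that can work, here is a plain Johnson-Lindenstrauss random projection; this is not either paper's learned method, and all sizes are made up:

```python
import math
import random

random.seed(0)

D_IN, D_OUT = 1024, 128   # illustrative sizes only

# Gaussian random projection matrix, scaled so Euclidean distances are
# preserved in expectation (Johnson-Lindenstrauss style).
scale = 1.0 / math.sqrt(D_OUT)
proj = [[random.gauss(0.0, scale) for _ in range(D_IN)] for _ in range(D_OUT)]

def project_down(vec):
    return [sum(row[i] * vec[i] for i in range(D_IN)) for row in proj]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

x = [random.gauss(0.0, 1.0) for _ in range(D_IN)]
y = [random.gauss(0.0, 1.0) for _ in range(D_IN)]

d_full = norm([a - b for a, b in zip(x, y)])
d_red  = norm([a - b for a, b in zip(project_down(x), project_down(y))])

# The reduced distance approximates the full one, so candidate search can
# run on the cheaper 128-d vectors before re-ranking in full dimension.
print("distance ratio (reduced / full):", round(d_red / d_full, 2))
```

The ratio concentrates near 1 with relative deviation roughly proportional to 1/sqrt(D_OUT); learned projections such as those in the papers above aim to do better than a random matrix on real query and database distributions.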
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.