NFL: Robust Learned Index via Distribution Transformation
- URL: http://arxiv.org/abs/2205.11807v1
- Date: Tue, 24 May 2022 06:03:19 GMT
- Title: NFL: Robust Learned Index via Distribution Transformation
- Authors: Shangyu Wu, Yufei Cui, Jinghuan Yu, Xuan Sun, Tei-Wei Kuo, Chun Jason
Xue
- Abstract summary: This paper tackles the approximation problem by applying a distribution transformation to the keys before constructing the learned index.
A two-stage Normalizing-Flow-based Learned index framework (NFL) is proposed, which first transforms the original complex key distribution into a near-uniform distribution, then builds a learned index leveraging the transformed keys.
Based on the characteristics of the transformed keys, we propose a robust After-Flow Learned Index (AFLI).
- Score: 14.812854942243503
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent works on learned indexes open a new direction for the indexing field.
The key insight of the learned index is to approximate the mapping between keys
and positions with piecewise linear functions. Such methods require
partitioning the key space for a better approximation. Although many heuristics
have been proposed to improve the approximation quality, the segmentation
overhead remains a bottleneck that can hinder overall performance. This paper tackles
the approximation problem by applying a \textit{distribution transformation} to
the keys before constructing the learned index. A two-stage
Normalizing-Flow-based Learned index framework (NFL) is proposed, which first
transforms the original complex key distribution into a near-uniform
distribution, then builds a learned index leveraging the transformed keys. For
effective distribution transformation, we propose a Numerical Normalizing Flow
(Numerical NF). Based on the characteristics of the transformed keys, we
propose a robust After-Flow Learned Index (AFLI). To validate the performance,
comprehensive evaluations are conducted on both synthetic and real-world
workloads; the results show that the proposed NFL achieves the highest throughput
and the lowest tail latency compared to state-of-the-art learned indexes.
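Below is a minimal sketch of the two-stage "transform, then index" idea described in the abstract. It is not the paper's implementation: the class name and error-bound constant are illustrative, an empirical-CDF interpolation stands in for the proposed Numerical Normalizing Flow, and a single least-squares line stands in for the After-Flow Learned Index.

```python
import bisect
import numpy as np

# Sketch only: skewed key distributions force many piecewise-linear segments
# in a learned index. If keys are first mapped to a near-uniform distribution,
# a single linear model can approximate the key -> position mapping well.
class TransformedLearnedIndex:
    def __init__(self, keys, err_bound=8):
        self.keys = np.sort(np.asarray(keys, dtype=np.float64))
        self.err = err_bound
        # Stage 1: distribution transformation via an empirical CDF
        # (a stand-in for the paper's Numerical Normalizing Flow).
        self.quantiles = np.quantile(self.keys, np.linspace(0.0, 1.0, 257))
        z = self._transform(self.keys)
        # Stage 2: fit one linear model, position ~ a*z + b, on transformed keys.
        pos = np.arange(len(self.keys))
        self.a, self.b = np.polyfit(z, pos, deg=1)

    def _transform(self, x):
        # Map keys into [0, 1] by interpolating the empirical CDF.
        return np.interp(x, self.quantiles, np.linspace(0.0, 1.0, 257))

    def lookup(self, key):
        z = self._transform(np.float64(key))
        guess = int(round(self.a * z + self.b))
        lo = max(0, guess - self.err)
        hi = min(len(self.keys), guess + self.err + 1)
        # "Last-mile" search inside the error bound around the predicted position.
        i = bisect.bisect_left(self.keys, key, lo, hi)
        if i < len(self.keys) and self.keys[i] == key:
            return i
        i = bisect.bisect_left(self.keys, key)  # fallback: full binary search
        return i if i < len(self.keys) and self.keys[i] == key else None

# Usage: heavily skewed (lognormal) keys, which a single line fits poorly in the
# original key space but well after the near-uniform transformation.
keys = np.random.lognormal(mean=0.0, sigma=2.0, size=100_000)
idx = TransformedLearnedIndex(keys)
probe = np.sort(keys)[12345]
assert idx.lookup(probe) == 12345
```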
Related papers
- Squeezed Attention: Accelerating Long Context Length LLM Inference [64.11145320159126]
We propose Squeezed Attention as a mechanism to accelerate LLM applications where a large portion of the input prompt is fixed.
We use K-means clustering offline to group the keys for the fixed context based on semantic similarity and represent each cluster with a single centroid value.
We then compute exact attention using only these important keys from the fixed context, thereby reducing bandwidth and computational costs.
arXiv Detail & Related papers (2024-11-14T18:54:19Z) - Correlation and Navigation in the Vocabulary Key Representation Space of Language Models [33.747872934103334]
We study the effect of the key distribution on the NTP distribution.
We show that in the NTP distribution, the few top-ranked tokens are typically accurate.
We extend our method to open-ended and chain-of-thought (for reasoning) generation.
arXiv Detail & Related papers (2024-10-03T08:07:55Z) - Semi-Parametric Retrieval via Binary Token Index [71.78109794895065]
Semi-parametric Vocabulary Disentangled Retrieval (SVDR) is a novel semi-parametric retrieval framework.
It supports two types of indexes: an embedding-based index for high effectiveness, akin to existing neural retrieval methods; and a binary token index that allows for quick and cost-effective setup, resembling traditional term-based retrieval.
It achieves a 3% higher top-1 retrieval accuracy compared to the dense retriever DPR when using an embedding-based index and a 9% higher top-1 accuracy compared to BM25 when using a binary token index.
arXiv Detail & Related papers (2024-05-03T08:34:13Z) - Accelerating String-Key Learned Index Structures via Memoization-based Incremental Training [16.93830041971135]
Learned indexes use machine learning models to learn the mappings between keys and their corresponding positions in key-value indexes.
They require frequent retraining of their models to incorporate the changes introduced by update queries.
We develop an algorithm-hardware co-designed string-key learned index system, dubbed SIA.
arXiv Detail & Related papers (2024-03-18T04:44:00Z) - Compact Neural Graphics Primitives with Learned Hash Probing [100.07267906666293]
We show that a hash table with learned probes has neither disadvantage, resulting in a favorable combination of size and speed.
Inference is faster than unprobed hash tables at equal quality while training is only 1.2-2.6x slower.
arXiv Detail & Related papers (2023-12-28T18:58:45Z) - Bridging the Domain Gaps in Context Representations for k-Nearest
Neighbor Neural Machine Translation [57.49095610777317]
$k$-Nearest neighbor machine translation ($k$NN-MT) has attracted increasing attention due to its ability to non-parametrically adapt to new translation domains.
We propose a novel approach to boost the datastore retrieval of $k$NN-MT by reconstructing the original datastore.
Our method can effectively boost the datastore retrieval and translation quality of $k$NN-MT.
arXiv Detail & Related papers (2023-05-26T03:04:42Z) - Graph Positional Encoding via Random Feature Propagation [39.84324765957645]
Two main families of node feature augmentation schemes have been explored for enhancing GNNs.
We propose a novel family of positional encoding schemes which draws a link between the above two approaches.
We empirically demonstrate that RFP significantly outperforms both spectral PE and random features in multiple node classification and graph classification benchmarks.
arXiv Detail & Related papers (2023-03-06T06:28:20Z) - A Learned Index for Exact Similarity Search in Metric Spaces [25.330353637669386]
LIMS is proposed to use data clustering and pivot-based data transformation techniques to build learned indexes.
Machine learning models are developed to approximate the position of each data record on the disk.
Extensive experiments on real-world and synthetic datasets demonstrate the superiority of LIMS compared with traditional indexes.
arXiv Detail & Related papers (2022-04-21T11:24:55Z) - Accurate Grid Keypoint Learning for Efficient Video Prediction [87.71109421608232]
Keypoint-based video prediction methods can consume substantial computing resources in training and deployment.
In this paper, we design a new grid keypoint learning framework, aiming at a robust and explainable intermediate keypoint representation for long-term efficient video prediction.
Our method outperforms state-of-the-art video prediction methods while saving more than 98% of computing resources.
arXiv Detail & Related papers (2021-07-28T05:04:30Z) - COAX: Correlation-Aware Indexing on Multidimensional Data with Soft
Functional Dependencies [3.670422696827525]
We present COAX, a learned index for multidimensional data that learns the correlations between attributes of the dataset.
We show experimentally that by predicting correlated attributes in the data, we can improve the query execution time and reduce the memory overhead of the index.
arXiv Detail & Related papers (2020-06-29T21:22:15Z) - Progressively Pretrained Dense Corpus Index for Open-Domain Question
Answering [87.32442219333046]
We propose a simple and resource-efficient method to pretrain the paragraph encoder.
Our method outperforms an existing dense retrieval method that uses 7 times more computational resources for pretraining.
arXiv Detail & Related papers (2020-04-30T18:09:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.