UpLIF: An Updatable Self-Tuning Learned Index Framework
- URL: http://arxiv.org/abs/2408.04113v1
- Date: Wed, 7 Aug 2024 22:30:43 GMT
- Title: UpLIF: An Updatable Self-Tuning Learned Index Framework
- Authors: Alireza Heidari, Amirhossein Ahmadi, Wei Zhang
- Abstract summary: UpLIF is an adaptive self-tuning learned index that adjusts the model to accommodate incoming updates.
We also introduce the concept of balanced model adjustment, which determines the model's inherent properties (i.e., bias and variance).
- Score: 4.077820670802213
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The emergence of learned indexes has caused a paradigm shift in our perception of indexing: an index is treated as a predictive model that estimates keys' positions within a data set, yielding notable improvements in key-search efficiency and index size. A significant challenge inherent in learned index modeling, however, is its constrained support for update operations, a consequence of assuming a fixed distribution of records. Previous studies have proposed various approaches to address this issue, at the cost of high overhead from repeated model retraining. In this paper, we present UpLIF, an adaptive self-tuning learned index that adjusts the model to accommodate incoming updates, predicts the distribution of updates to improve performance, and optimizes its index structure using reinforcement learning. We also introduce the concept of balanced model adjustment, which determines the model's inherent properties (i.e., bias and variance), enabling these factors to be integrated into the existing index model without retraining on new data. Our comprehensive experiments show that UpLIF surpasses state-of-the-art indexing solutions (both traditional and ML-based), achieving up to 3.12x higher throughput with 1000x less memory usage.
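UpLIF's actual design (balanced model adjustment, RL-driven tuning) is richer than any toy, but purely as a hedged illustration of what "accommodating updates without retraining" can mean for a learned index, here is a minimal sketch with invented names: a linear model predicts positions within an error bound, and inserts land in a sorted delta buffer rather than forcing a refit.

```python
import bisect
import numpy as np

class UpdatableLearnedIndex:
    """Toy updatable learned index: a linear model predicts positions in a
    sorted array; updates land in a sorted delta buffer, so the model itself
    never needs retraining (names and structure are illustrative only)."""

    def __init__(self, keys):
        self.keys = sorted(keys)
        xs = np.array(self.keys, dtype=float)
        ys = np.arange(len(self.keys), dtype=float)
        # Fit position ~ slope * key + intercept by least squares.
        self.slope, self.intercept = np.polyfit(xs, ys, 1)
        # Maximum prediction error bounds the local search window.
        preds = self.slope * xs + self.intercept
        self.max_err = int(np.ceil(np.abs(preds - ys).max())) + 1
        self.delta = []  # sorted buffer absorbing inserts

    def insert(self, key):
        bisect.insort(self.delta, key)

    def lookup(self, key):
        # Search the model-covered array within the error window...
        pos = int(self.slope * key + self.intercept)
        lo = max(0, pos - self.max_err)
        hi = min(len(self.keys), pos + self.max_err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        if i < hi and self.keys[i] == key:
            return True
        # ...then fall back to the delta buffer.
        j = bisect.bisect_left(self.delta, key)
        return j < len(self.delta) and self.delta[j] == key

idx = UpdatableLearnedIndex(range(0, 1000, 2))
idx.insert(13)
print(idx.lookup(42), idx.lookup(13), idx.lookup(7))  # True True False
```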
Related papers
- A New Paradigm in Tuning Learned Indexes: A Reinforcement Learning Enhanced Approach [6.454589614577438]
This paper introduces LITune, a novel framework for end-to-end automatic tuning of Learned Index Structures.
LITune employs an adaptive training pipeline equipped with a tailor-made Deep Reinforcement Learning (DRL) approach to ensure stable and efficient tuning.
Our experimental results demonstrate that LITune achieves up to a 98% reduction in runtime and a 17-fold increase in throughput.
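LITune's agent, state space, and knobs are not reproduced here; as a loose illustration of the tuning-loop shape only, the following sketch uses a simple epsilon-greedy bandit (a stand-in for the paper's DRL approach) over two invented knobs:

```python
import random

# Hypothetical tunable knobs for a learned index (not LITune's real ones).
knobs = {"error_bound": 64, "buffer_size": 1024}
actions = [("error_bound", 2), ("error_bound", 0.5),
           ("buffer_size", 2), ("buffer_size", 0.5)]
q = {a: 0.0 for a in actions}  # action-value estimates

def measure_throughput(k):
    # Stand-in for running a real benchmark against the index.
    return 1e6 / (k["error_bound"] + 1e5 / k["buffer_size"])

eps, lr = 0.2, 0.3
for step in range(200):
    a = random.choice(actions) if random.random() < eps else max(q, key=q.get)
    name, factor = a
    knobs[name] = max(1, int(knobs[name] * factor))
    reward = measure_throughput(knobs)
    q[a] += lr * (reward - q[a])  # simple bandit-style value update

print(knobs, max(q, key=q.get))
```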
arXiv Detail & Related papers (2025-02-07T15:22:15Z)
- Real-time Indexing for Large-scale Recommendation by Streaming Vector Quantization Retriever [17.156348053402766]
Streaming Vector Quantization (streaming VQ) is a new generation of retrieval paradigm.
Streaming VQ attaches items to indexes in real time, granting it immediacy.
As a lightweight and implementation-friendly architecture, streaming VQ has been deployed and has replaced all major retrievers in Douyin and Douyin Lite.
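As a hedged sketch of the general streaming-VQ idea (not Douyin's production system), each arriving item embedding can be attached to its nearest codebook centroid immediately, with the centroid drifting toward the stream via an exponential moving average:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_clusters = 16, 8
centroids = rng.normal(size=(n_clusters, dim))  # codebook
buckets = {c: [] for c in range(n_clusters)}    # inverted index: cluster -> items
alpha = 0.05                                    # EMA step for centroid drift

def index_item(item_id, emb):
    # Attach the item to its nearest centroid in real time.
    c = int(np.argmin(np.linalg.norm(centroids - emb, axis=1)))
    buckets[c].append(item_id)
    # Let the centroid track the stream so the codebook stays fresh.
    centroids[c] = (1 - alpha) * centroids[c] + alpha * emb
    return c

for i in range(1000):
    index_item(i, rng.normal(size=dim))
print([len(b) for b in buckets.values()])
```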
arXiv Detail & Related papers (2025-01-15T10:09:15Z)
- Optimizing Sequential Recommendation Models with Scaling Laws and Approximate Entropy [104.48511402784763]
The Performance Law for sequential recommendation (SR) models aims to theoretically investigate and model the relationship between model performance and data quality.
We propose Approximate Entropy (ApEn) to assess data quality, presenting a more nuanced approach compared to traditional data quantity metrics.
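Approximate Entropy itself is a standard regularity statistic; as a reference point, here is a minimal textbook-style implementation (parameter choices are illustrative, not the paper's):

```python
import numpy as np

def approximate_entropy(series, m=2, r=0.2):
    """Approximate Entropy (Pincus): regularity of a sequence. Lower ApEn
    means more repetitive/predictable data."""
    x = np.asarray(series, dtype=float)
    n = len(x)

    def phi(m):
        # All length-m windows of the series.
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distances between every pair of windows.
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = np.mean(d <= r, axis=1)  # fraction of windows within tolerance r
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
regular = np.tile([0.0, 1.0], 100)  # perfectly periodic -> ApEn near 0
noisy = rng.random(200)             # unstructured -> larger ApEn
print(approximate_entropy(regular), approximate_entropy(noisy))
```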
arXiv Detail & Related papers (2024-11-30T10:56:30Z)
- Towards Stable Machine Learning Model Retraining via Slowly Varying Sequences [6.067007470552307]
We propose a model-agnostic framework for finding sequences of models that are stable across retraining iterations.
We develop a mixed-integer optimization formulation that is guaranteed to recover optimal models.
We find that, on average, a 2% reduction in predictive power leads to a 30% improvement in stability.
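The paper's guarantee comes from a mixed-integer formulation; the toy selection rule below only conveys the underlying trade-off, with all inputs invented: among candidate retrained models, prefer the one closest to the deployed weights whose validation loss stays within a small tolerance of the best.

```python
import numpy as np

def select_stable_model(current_w, candidates, val_losses, tol=0.02):
    """Pick the candidate closest (L2) to the deployed weights among those
    whose validation loss is within `tol` of the best candidate.
    A greedy stand-in for the paper's mixed-integer formulation."""
    best = min(val_losses)
    admissible = [i for i, l in enumerate(val_losses) if l <= best * (1 + tol)]
    return min(admissible, key=lambda i: np.linalg.norm(candidates[i] - current_w))

rng = np.random.default_rng(1)
current = rng.normal(size=10)
cands = [current + rng.normal(scale=s, size=10) for s in (0.1, 0.5, 1.0)]
losses = [0.305, 0.301, 0.300]  # near-ties in predictive power
print(select_stable_model(current, cands, losses))  # 0: the nearby model wins
```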
arXiv Detail & Related papers (2024-03-28T22:45:38Z)
- Accelerating String-Key Learned Index Structures via Memoization-based Incremental Training [16.93830041971135]
Learned indexes use machine learning models to learn the mappings between keys and their corresponding positions in key-value indexes.
They require frequent retraining of their models to incorporate the changes introduced by update queries.
We develop an algorithm-hardware co-designed string-key learned index system, dubbed SIA.
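SIA's hardware co-design aside, the memoization idea can be sketched for a single linear segment: cache the regression's sufficient statistics so each inserted key updates the fit incrementally rather than retraining from scratch (an illustrative simplification, not SIA's actual algorithm):

```python
import numpy as np

class MemoizedLinearSegment:
    """Linear model position = w . features(key); caches X^T X and X^T y so
    each new key updates the fit in O(d^2) instead of re-scanning all keys
    (illustrative simplification, not SIA's actual algorithm)."""

    def __init__(self, dim=2):
        self.xtx = np.zeros((dim, dim))  # memoized Gram matrix
        self.xty = np.zeros(dim)         # memoized moment vector

    def add(self, key, position):
        f = np.array([key, 1.0])         # feature vector [key, bias]
        self.xtx += np.outer(f, f)       # incremental update, no re-scan
        self.xty += position * f

    def weights(self):
        return np.linalg.solve(self.xtx, self.xty)

seg = MemoizedLinearSegment()
for pos, key in enumerate([3, 7, 11, 15]):
    seg.add(key, pos)
print(seg.weights())  # ~[0.25, -0.75]: exact fit for this arithmetic sequence
```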
arXiv Detail & Related papers (2024-03-18T04:44:00Z)
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
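To see why backpropagation through OWA is plausible: an OWA objective is a fixed weighted sum of sorted utilities, and sorting is piecewise-linear, so autograd passes gradients through it. A minimal torch sketch with made-up weights and utilities:

```python
import torch

def owa(utilities, weights):
    """Ordered Weighted Average: sort utilities ascending, then take a fixed
    weighted sum. Decreasing weights emphasize the worst-off entries, which
    is what makes OWA a fairness-oriented aggregator."""
    sorted_u, _ = torch.sort(utilities)  # differentiable w.r.t. utilities
    return (weights * sorted_u).sum()

# Fairness-leaning weights: most mass on the lowest utilities.
w = torch.tensor([0.5, 0.3, 0.2])
u = torch.tensor([0.9, 0.1, 0.5], requires_grad=True)

value = owa(u, w)
value.backward()
print(value.item())  # 0.5*0.1 + 0.3*0.5 + 0.2*0.9 = 0.38
print(u.grad)        # each utility receives its rank's weight: [0.2, 0.5, 0.3]
```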
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- Enhancing Few-shot NER with Prompt Ordering based Data Augmentation [59.69108119752584]
We propose a Prompt Ordering based Data Augmentation (PODA) method to improve the training of unified autoregressive generation frameworks.
Experimental results on three public NER datasets and further analyses demonstrate the effectiveness of our approach.
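The gist, as a hedged sketch for generative NER: the target sequence enumerates entities, and since their order is arbitrary, permuting it yields extra training pairs from one annotated sentence (the prompt template below is invented, not the paper's):

```python
from itertools import permutations

def poda_augment(sentence, entities, max_orders=3):
    """Yield one (input, target) training pair per entity ordering.
    Template and output format are invented for illustration."""
    pairs = []
    for order in list(permutations(entities))[:max_orders]:
        target = " ; ".join(f"{text} is {label}" for text, label in order)
        pairs.append((f"Extract entities: {sentence}", target))
    return pairs

ents = [("Alice", "PERSON"), ("Berlin", "LOCATION")]
for inp, tgt in poda_augment("Alice flew to Berlin.", ents):
    print(tgt)
# Alice is PERSON ; Berlin is LOCATION
# Berlin is LOCATION ; Alice is PERSON
```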
arXiv Detail & Related papers (2023-05-19T16:25:43Z)
- Universal Domain Adaptation from Foundation Models: A Baseline Study [58.51162198585434]
We conduct empirical studies of state-of-the-art UniDA methods using foundation models.
We introduce CLIP distillation, a parameter-free method specifically designed to distill target knowledge from CLIP models.
Although simple, our method outperforms previous approaches in most benchmark tasks.
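Conceptually, with stand-in arrays in place of real CLIP encoders, such parameter-free distillation can look like the following: score target images against class-name text embeddings and use the softmaxed similarities as soft targets for the student.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
# Stand-ins for CLIP embeddings (in practice: CLIP image/text encoders).
img_emb = rng.normal(size=(5, 64))  # 5 unlabeled target images
txt_emb = rng.normal(size=(3, 64))  # 3 class-name prompts
img_emb /= np.linalg.norm(img_emb, axis=1, keepdims=True)
txt_emb /= np.linalg.norm(txt_emb, axis=1, keepdims=True)

# Zero-shot similarities become soft targets; no new parameters introduced.
# (100.0 mimics CLIP's usual logit scale.)
soft_targets = softmax(100.0 * img_emb @ txt_emb.T, axis=1)

# Distillation loss for a student's logits: cross-entropy vs. soft targets.
student_logits = rng.normal(size=(5, 3))
loss = -(soft_targets * np.log(softmax(student_logits, axis=1))).sum(axis=1).mean()
print(soft_targets.round(2), float(loss))
```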
arXiv Detail & Related papers (2023-05-18T16:28:29Z)
- DSI++: Updating Transformer Memory with New Documents [95.70264288158766]
We introduce DSI++, a continual learning challenge for DSI to incrementally index new documents.
We show that continual indexing of new documents leads to considerable forgetting of previously indexed documents.
We introduce a generative memory to sample pseudo-queries for documents and supplement them during continual indexing to prevent forgetting for the retrieval task.
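Schematically (all helper functions hypothetical), one continual-indexing round mixes pseudo-queries for old documents into the training batch for new ones, rehearsing the retriever on what it already knows:

```python
import random

def continual_index(model_train_step, new_docs, old_doc_ids,
                    make_pseudo_query, replay_ratio=0.5):
    """One continual-indexing round: train on (query, doc_id) pairs for new
    docs, mixed with generated pseudo-queries for old docs to curb
    forgetting. All helpers are hypothetical stand-ins."""
    batch = [(make_pseudo_query(d), d) for d in new_docs]
    n_replay = int(len(batch) * replay_ratio)
    for d in random.sample(old_doc_ids, min(n_replay, len(old_doc_ids))):
        batch.append((make_pseudo_query(d), d))  # rehearsal examples
    random.shuffle(batch)
    for query, doc_id in batch:
        model_train_step(query, doc_id)

# Toy usage: the "model" just records what it was trained on.
seen = []
continual_index(lambda q, d: seen.append(d),
                new_docs=["d101", "d102"], old_doc_ids=["d1", "d2", "d3"],
                make_pseudo_query=lambda d: f"query about {d}")
print(sorted(seen))  # new docs plus replayed old ones
```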
arXiv Detail & Related papers (2022-12-19T18:59:34Z)
- Class-Incremental Learning by Knowledge Distillation with Adaptive Feature Consolidation [39.97128550414934]
We present a novel class-incremental learning approach based on deep neural networks.
It continually learns new tasks with limited memory for storing examples from previous tasks.
Our algorithm is based on knowledge distillation and provides a principled way to maintain the representations of old models.
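The common backbone of such methods is a distillation term that keeps the new model's predictions on old classes close to the frozen old model's; a minimal numpy rendering (the paper's adaptive feature consolidation is not modeled here):

```python
import numpy as np

def softmax(z, t=1.0):
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(old_logits, new_logits, t=2.0):
    """Temperature-scaled distillation: keep the new model's distribution
    over the OLD classes close to the frozen old model's distribution."""
    p_old = softmax(old_logits, t)  # teacher (frozen old model)
    p_new = softmax(new_logits[:, :old_logits.shape[1]], t)  # old-class slice
    return -(p_old * np.log(p_new + 1e-12)).sum(axis=1).mean()

rng = np.random.default_rng(0)
old = rng.normal(size=(4, 5))  # old model: 5 old classes
# New model mimics old classes and adds 3 new ones.
new = np.concatenate([old + 0.1 * rng.normal(size=(4, 5)),
                      rng.normal(size=(4, 3))], axis=1)
drifted = np.concatenate([rng.normal(size=(4, 5)),
                          rng.normal(size=(4, 3))], axis=1)
print(kd_loss(old, new), kd_loss(old, drifted))  # mimicking model scores lower
```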
arXiv Detail & Related papers (2022-04-02T16:30:04Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
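The transductive step is easy to write down; in the sketch below, a fixed softmax-over-distances confidence stands in for the meta-learned one, and each prototype becomes a confidence-weighted mean of the queries:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def refine_prototypes(prototypes, queries, temp=1.0, steps=3):
    """Transductive refinement: confidences (softmax over negative distances)
    weight each query's contribution to every class prototype. The paper
    meta-learns this confidence; here it is a fixed softmax stand-in."""
    for _ in range(steps):
        d = np.linalg.norm(queries[:, None, :] - prototypes[None, :, :], axis=2)
        conf = softmax(-d / temp, axis=1)  # query-to-class confidence
        # Prototype = confidence-weighted mean of the queries assigned to it.
        prototypes = (conf.T @ queries) / conf.sum(axis=0, keepdims=True).T
    return prototypes

rng = np.random.default_rng(0)
protos = np.array([[0.0, 0.0], [4.0, 4.0]])
queries = np.vstack([rng.normal(0.5, 0.3, size=(10, 2)),
                     rng.normal(3.5, 0.3, size=(10, 2))])
print(refine_prototypes(protos, queries).round(2))  # drifts toward the clusters
```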
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.