CRINN: Contrastive Reinforcement Learning for Approximate Nearest Neighbor Search
- URL: http://arxiv.org/abs/2508.02091v1
- Date: Mon, 04 Aug 2025 05:57:46 GMT
- Title: CRINN: Contrastive Reinforcement Learning for Approximate Nearest Neighbor Search
- Authors: Xiaoya Li, Xiaofei Sun, Albert Wang, Chris Shum, Jiwei Li
- Abstract summary: We present CRINN, a new paradigm for approximate nearest-neighbor search (ANNS) algorithms. CRINN treats ANNS optimization as a reinforcement learning problem where execution speed serves as the reward signal. Our experimental evaluation demonstrates CRINN's effectiveness across six widely-used NNS benchmark datasets.
- Score: 35.06696271451966
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Approximate nearest-neighbor search (ANNS) algorithms have become increasingly critical for recent AI applications, particularly in retrieval-augmented generation (RAG) and agent-based LLM applications. In this paper, we present CRINN, a new paradigm for ANNS algorithms. CRINN treats ANNS optimization as a reinforcement learning problem where execution speed serves as the reward signal. This approach enables the automatic generation of progressively faster ANNS implementations while maintaining accuracy constraints. Our experimental evaluation demonstrates CRINN's effectiveness across six widely-used NNS benchmark datasets. When compared against state-of-the-art open-source ANNS algorithms, CRINN achieves best performance on three of them (GIST-960-Euclidean, MNIST-784-Euclidean, and GloVe-25-angular), and tied for first place on two of them (SIFT-128-Euclidean and GloVe-25-angular). The implications of CRINN's success reach well beyond ANNS optimization: It validates that LLMs augmented with reinforcement learning can function as an effective tool for automating sophisticated algorithmic optimizations that demand specialized knowledge and labor-intensive manual refinement. Code can be found at https://github.com/deepreinforce-ai/CRINN
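As a rough illustration of the optimization framing described in the abstract (execution speed as reward, accuracy as a constraint), the sketch below shows one way such a reward could be computed. The function names, the recall floor, and the penalty scheme are illustrative assumptions, not CRINN's actual implementation.

```python
import time

def ann_reward(candidate_search_fn, exact_neighbors, queries, recall_floor=0.9):
    """Score a candidate ANNS implementation: reward speed, but only if it
    stays above a recall constraint (illustrative sketch, not CRINN's code)."""
    start = time.perf_counter()
    approx_neighbors = [candidate_search_fn(q) for q in queries]
    elapsed = time.perf_counter() - start

    # Recall@k: fraction of true neighbors recovered by the candidate.
    hits = sum(len(set(a) & set(t)) for a, t in zip(approx_neighbors, exact_neighbors))
    total = sum(len(t) for t in exact_neighbors)
    recall = hits / total if total else 0.0

    if recall < recall_floor:
        return -1.0          # hard penalty: accuracy constraint violated
    return 1.0 / elapsed     # otherwise, faster implementations earn more reward
```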
Related papers
- Dynamic Location Search for Identifying Maximum Weighted Independent Sets in Complex Networks [0.47248250311484113]
We introduce a novel and efficient algorithm for solving the maximum weighted independent set (MWIS) problem. Given the NP-hard nature of the MWIS problem, our proposed algorithm, DynLS, incorporates three key innovations to solve it effectively. Our experimental results demonstrate DynLS's superior performance, consistently delivering high-quality solutions within 1000 seconds.
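The summary gives no algorithmic detail of DynLS itself; as background, a minimal weight-to-degree greedy construction for MWIS, the kind of baseline that local-search methods then improve on, looks roughly like this (illustrative only, not DynLS).

```python
def greedy_mwis(weights, adjacency):
    """Greedy MWIS baseline: repeatedly pick the vertex with the best
    weight/(remaining degree + 1) ratio, then discard its neighbors.
    Local-search methods refine solutions like this one."""
    remaining = set(weights)
    chosen = []
    while remaining:
        v = max(remaining, key=lambda u: weights[u] / (len(adjacency[u] & remaining) + 1))
        chosen.append(v)
        remaining -= adjacency[v] | {v}
    return chosen

# Example: path graph a-b-c with weights favoring the endpoints.
w = {"a": 3.0, "b": 2.0, "c": 3.0}
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(greedy_mwis(w, adj))  # -> ['a', 'c']
```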
arXiv Detail & Related papers (2025-05-07T10:35:53Z) - A Framework to Enable Algorithmic Design Choice Exploration in DNNs [0.0]
We introduce an open source framework which provides easy-to-use, fine-grained algorithmic control for deep neural networks (DNNs).
The framework enables users to implement and select their own algorithms to be utilized by the DNN.
The framework incurs no additional performance overhead, meaning that performance depends solely on the algorithms chosen by the user.
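The entry describes a framework in which users register and select their own algorithms for a DNN to use. The snippet below sketches the general plug-in/registry pattern such a design implies; all names here are hypothetical, not the framework's actual API.

```python
# Hypothetical registry pattern: users register alternative implementations of
# an operation (e.g., a matrix multiply) and select which one the network uses.
ALGORITHMS = {}

def register(op_name, impl_name):
    def wrap(fn):
        ALGORITHMS.setdefault(op_name, {})[impl_name] = fn
        return fn
    return wrap

@register("matmul", "naive")
def matmul_naive(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def run(op_name, impl_name, *args):
    # Dispatch to whichever user-selected implementation is registered.
    return ALGORITHMS[op_name][impl_name](*args)

print(run("matmul", "naive", [[1, 2]], [[3], [4]]))  # -> [[11]]
```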
arXiv Detail & Related papers (2024-10-10T18:41:56Z) - RAGLAB: A Modular and Research-Oriented Unified Framework for Retrieval-Augmented Generation [54.707460684650584]
Large Language Models (LLMs) demonstrate human-level capabilities in dialogue, reasoning, and knowledge retention, yet they struggle with hallucinations and with keeping their knowledge up to date.
Current research addresses this bottleneck by equipping LLMs with external knowledge, a technique known as Retrieval-Augmented Generation (RAG).
RAGLAB is a modular and research-oriented open-source library that reproduces 6 existing algorithms and provides a comprehensive ecosystem for investigating RAG algorithms.
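For context on what the pipelines reproduced in RAGLAB do, here is a deliberately minimal retrieve-then-generate loop; the toy lexical retriever and the generate stub are placeholder assumptions, not RAGLAB's interfaces.

```python
def retrieve(query, corpus, k=2):
    """Toy lexical retriever: rank documents by word overlap with the query.
    Real RAG systems use dense or hybrid retrievers; this is only a placeholder."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def generate(query, passages):
    """Placeholder for the LLM call: a real system would prompt a model with the
    retrieved passages; here we only assemble the augmented prompt."""
    context = "\n".join(passages)
    return f"Answer '{query}' using:\n{context}"

corpus = ["ANNS speeds up vector search.", "LNS destroys and repairs solutions."]
print(generate("what speeds up vector search?", retrieve("vector search", corpus)))
```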
arXiv Detail & Related papers (2024-08-21T07:20:48Z) - Learning-Enhanced Neighborhood Selection for the Vehicle Routing Problem with Time Windows [0.0]
We propose to integrate machine learning (ML) into Large Neighborhood Search (LNS) to decide which parts of the solution should be destroyed and repaired in each iteration of LNS.
Our approach is universally applicable, i.e., it can be combined with any LNS algorithm to strengthen its destroy step.
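The two-sentence summary leaves the LNS mechanics implicit; the skeleton below shows where a learned destroy-selection model would plug into a generic LNS loop. The scoring model and solution representation are stand-ins, not the paper's method.

```python
def lns(initial_solution, cost, destroy_ops, repair, score_destroy_op, iters=100):
    """Generic Large Neighborhood Search loop. `score_destroy_op` stands in for
    a learned model that rates how promising each destroy operator is for the
    current solution (the role ML plays in the summarized paper); the rest is a
    plain LNS skeleton."""
    best = current = initial_solution
    for _ in range(iters):
        # Pick the destroy operator the (hypothetical) model rates highest.
        op = max(destroy_ops, key=lambda d: score_destroy_op(current, d))
        candidate = repair(op(current))
        if cost(candidate) < cost(current):
            current = candidate
            if cost(current) < cost(best):
                best = current
    return best
```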
arXiv Detail & Related papers (2024-03-13T12:08:27Z) - Searching Large Neighborhoods for Integer Linear Programs with Contrastive Learning [39.40838358438744]
Integer Linear Programs (ILPs) are powerful tools for modeling and solving a large number of optimization problems.
Large Neighborhood Search (LNS), as a heuristic algorithm, can find high-quality solutions to ILPs faster than Branch and Bound.
We propose a novel approach, CL-LNS, that delivers state-of-the-art anytime performance on several ILP benchmarks, measured by metrics such as the primal gap and the primal integral.
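The summary names contrastive learning as the key ingredient but not how it is used. One plausible reading, shown only as a sketch and not as CL-LNS's actual objective, is a contrastive loss that scores "good" destroy sets (whose repair improved the incumbent) above sampled "bad" ones.

```python
import math

def contrastive_loss(pos_score, neg_scores):
    """InfoNCE-style loss: push the score of a good neighborhood above the
    scores of negative (unhelpful) neighborhoods. Sketch of the general
    contrastive idea, not the paper's exact formulation."""
    logits = [pos_score] + list(neg_scores)
    log_denom = math.log(sum(math.exp(x) for x in logits))
    return -(pos_score - log_denom)

print(round(contrastive_loss(2.0, [0.1, -0.5]), 4))  # small loss: positive already ranks high
```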
arXiv Detail & Related papers (2023-02-03T07:15:37Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined via the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Learning to Hash Robustly, with Guarantees [79.68057056103014]
In this paper, we design an NNS algorithm for the Hamming space that has worst-case guarantees essentially matching that of theoretical algorithms.
We evaluate the algorithm's ability to optimize for a given dataset both theoretically and practically.
Our algorithm achieves 1.8x and 2.1x better recall on the worst-performing queries of the MNIST and ImageNet datasets, respectively.
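For readers unfamiliar with hashing-based NNS in Hamming space, the classical bit-sampling LSH scheme below is the standard baseline that data-adaptive approaches like this paper improve on; it is background, not the paper's algorithm, and the parameter defaults are arbitrary.

```python
import random

def build_bit_sampling_index(points, num_tables=4, bits_per_table=8, seed=0):
    """Classical bit-sampling LSH for Hamming space: each table hashes a point
    by a random subset of its bit positions, so nearby points tend to collide."""
    rng = random.Random(seed)
    dim = len(points[0])
    bits_per_table = min(bits_per_table, dim)
    tables = []
    for _ in range(num_tables):
        positions = rng.sample(range(dim), bits_per_table)
        buckets = {}
        for idx, p in enumerate(points):
            buckets.setdefault(tuple(p[i] for i in positions), []).append(idx)
        tables.append((positions, buckets))
    return tables

def query(tables, points, q):
    """Collect candidates colliding with q in any table, ranked by Hamming distance."""
    candidates = set()
    for positions, buckets in tables:
        candidates.update(buckets.get(tuple(q[i] for i in positions), []))
    return sorted(candidates, key=lambda i: sum(a != b for a, b in zip(points[i], q)))
```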
arXiv Detail & Related papers (2021-08-11T20:21:30Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
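The summary compresses the HATS idea heavily; the sketch below shows the generic pattern of best-first tree search driven by a heuristic that could come from a neural network. The toy problem and hand-written heuristic are placeholders, not the paper's model or the integer least-squares setup.

```python
import heapq

def best_first_search(start, is_goal, expand, heuristic):
    """Best-first (A*-style) search where `heuristic` estimates remaining cost.
    In the summarized paper that estimate comes from a deep network; here it is
    just a callable, so any learned or hand-crafted function can be plugged in."""
    frontier = [(heuristic(start), 0.0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return path, g
        if node in seen:
            continue
        seen.add(node)
        for child, step_cost in expand(node):
            new_g = g + step_cost
            heapq.heappush(frontier, (new_g + heuristic(child), new_g, child, path + [child]))
    return None, float("inf")

# Toy usage: count up from 0 to 5; heuristic = remaining distance.
path, cost = best_first_search(0, lambda n: n == 5, lambda n: [(n + 1, 1.0)], lambda n: 5 - n)
print(path, cost)  # [0, 1, 2, 3, 4, 5] 5.0
```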
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - A General Large Neighborhood Search Framework for Solving Integer Linear Programs [46.62993477453986]
We focus on solving integer programs, and ground our approach in the large neighborhood search (LNS) paradigm.
We show that our LNS framework can significantly outperform state-of-the-art commercial solvers such as Gurobi.
arXiv Detail & Related papers (2020-03-29T23:08:14Z)