Worst-case Performance of Popular Approximate Nearest Neighbor Search Implementations: Guarantees and Limitations
- URL: http://arxiv.org/abs/2310.19126v1
- Date: Sun, 29 Oct 2023 19:25:48 GMT
- Title: Worst-case Performance of Popular Approximate Nearest Neighbor Search Implementations: Guarantees and Limitations
- Authors: Piotr Indyk, Haike Xu
- Abstract summary: We study the worst-case performance of graph-based approximate nearest neighbor search algorithms.
For DiskANN, we show that its "slow preprocessing" version provably supports approximate nearest neighbor search queries.
We present a family of instances on which the empirical query time required to achieve a "reasonable" accuracy is linear in instance size.
- Score: 20.944914202453962
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph-based approaches to nearest neighbor search are popular and powerful
tools for handling large datasets in practice, but they have limited
theoretical guarantees. We study the worst-case performance of recent
graph-based approximate nearest neighbor search algorithms, such as HNSW, NSG
and DiskANN. For DiskANN, we show that its "slow preprocessing" version
provably supports approximate nearest neighbor search queries with constant
approximation ratio and poly-logarithmic query time, on data sets with bounded
"intrinsic" dimension. For the other data structure variants studied, including
DiskANN with "fast preprocessing", HNSW and NSG, we present a family of
instances on which the empirical query time required to achieve a "reasonable"
accuracy is linear in instance size. For example, for DiskANN, we show that the
query procedure can take at least $0.1 n$ steps on instances of size $n$ before
it encounters any of the $5$ nearest neighbors of the query.
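At the core of all the data structures studied is the same query routine: greedy (beam) search on a proximity graph. The sketch below is a minimal Python rendering of that routine with an explicit hop counter, so the "at least $0.1 n$ steps" statement has a concrete referent; the graph construction, parameter names, and beam width here are illustrative assumptions, not the DiskANN, HNSW, or NSG implementations.

```python
import numpy as np

def greedy_search(graph, points, query, start, beam_width=4, k=5):
    """Beam search on a proximity graph: repeatedly expand the closest
    unexplored candidate, keeping the beam_width best candidates seen.
    Returns the k best vertices found and the number of vertices expanded
    (the "steps" counted in the worst-case analysis)."""
    dist = lambda i: float(np.linalg.norm(points[i] - query))
    candidates = {start}
    visited = set()
    steps = 0
    while True:
        frontier = [v for v in candidates if v not in visited]
        if not frontier:
            break
        v = min(frontier, key=dist)   # expand the closest unexplored vertex
        visited.add(v)
        steps += 1
        candidates.update(graph[v])   # add v's out-neighbors to the beam
        candidates = set(sorted(candidates, key=dist)[:beam_width])
    return sorted(visited, key=dist)[:k], steps

# toy usage: 200 random points, each node linked to its 8 nearest neighbors
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 16))
nn = {i: np.argsort(np.linalg.norm(pts - pts[i], axis=1))[1:9].tolist()
      for i in range(200)}
found, steps = greedy_search(nn, pts, rng.normal(size=16), start=0)
```

On the adversarial instances the paper constructs, the `steps` counter of such a routine grows linearly in the dataset size before any of the true nearest neighbors is visited.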
Related papers
- Adaptive $k$-nearest neighbor classifier based on the local estimation of the shape operator [49.87315310656657]
We introduce a new adaptive $k$-nearest neighbours ($kK$-NN) algorithm that explores the local curvature at a sample to adaptively define the neighborhood size.
Results on many real-world datasets indicate that the new $kK$-NN algorithm yields superior balanced accuracy compared to the established $k$-NN method.
arXiv Detail & Related papers (2024-09-08T13:08:45Z)
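Under the heavy assumption that a spectral-spread statistic can stand in for the paper's shape-operator estimate, the adaptive idea above reduces to a few lines: pick a per-query $k$ from the local geometry of the nearest neighbors. Everything in this sketch other than the adaptive-$k$ idea itself is hypothetical.

```python
import numpy as np

def adaptive_knn_predict(X, y, query, k_min=3, k_max=25):
    """Toy adaptive k-NN: use a large k where local geometry looks flat and
    a small k where it looks curved. The spectral spread of neighbor
    directions is an illustrative proxy, not the paper's shape-operator
    estimate. Labels y must be non-negative ints."""
    order = np.argsort(np.linalg.norm(X - query, axis=1))
    probe = X[order[:k_max]] - query                 # directions to nearest points
    probe /= np.linalg.norm(probe, axis=1, keepdims=True) + 1e-12
    sv = np.linalg.svd(probe, compute_uv=False)
    curvature = 1.0 - sv[0] / (sv.sum() + 1e-12)     # 0 = locally flat
    k = int(round(k_max - (k_max - k_min) * min(1.0, 2.0 * curvature)))
    return np.bincount(y[order[:k]]).argmax()
```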
- Early Exit Strategies for Approximate k-NN Search in Dense Retrieval [10.48678957367324]
We build upon the state of the art for early-exit A-kNN and propose an unsupervised method based on the notion of patience.
We show that our techniques improve A-kNN efficiency with up to 5x speedups while achieving negligible effectiveness losses.
arXiv Detail & Related papers (2024-08-09T10:17:07Z)
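The patience notion above admits a compact, hedged sketch: terminate the A-kNN traversal once the running top-$k$ has not improved for a fixed number of consecutive visits. The generator protocol and names below are assumptions, not the authors' interface.

```python
import heapq

def search_with_patience(candidates, patience=16, k=10):
    """Early-exit wrapper for an A-kNN traversal: `candidates` is assumed to
    yield (item_id, distance) pairs in the order the index visits them; stop
    once the running top-k has not improved for `patience` straight visits."""
    heap = []                          # max-heap via negated distances
    stall = 0
    for item, dist in candidates:
        if len(heap) < k:
            heapq.heappush(heap, (-dist, item)); stall = 0
        elif dist < -heap[0][0]:
            heapq.heapreplace(heap, (-dist, item)); stall = 0  # top-k improved
        else:
            stall += 1
            if stall >= patience:      # "patience" exhausted: exit early
                break
    return sorted((-nd, item) for nd, item in heap)
```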
- Group Testing for Accurate and Efficient Range-Based Near Neighbor Search for Plagiarism Detection [2.3814052021083354]
This work presents an adaptive group testing framework for the range-based high-dimensional near neighbor search problem.
Our method efficiently marks each item in a database as neighbor or non-neighbor of a query point, based on a cosine distance threshold without exhaustive search.
We show that, using softmax-based features, our method achieves a more than ten-fold speed-up over exhaustive search with no loss of accuracy.
arXiv Detail & Related papers (2023-11-05T06:12:03Z)
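A minimal sketch of the pruning idea above, under the assumption that both query and database features are non-negative (as with softmax-based features): the dot product of a query with a group's summed vector upper-bounds its dot product with any member, so whole groups can be discarded against a cosine threshold without scoring their items. The flat two-level grouping is a simplification of the paper's adaptive framework.

```python
import numpy as np

def build_groups(X, group_size=32):
    """Precompute per-group vector sums and the smallest member norm;
    requires non-negative features so sums upper-bound member dot products."""
    groups = []
    for s in range(0, len(X), group_size):
        idx = np.arange(s, min(s + group_size, len(X)))
        groups.append((idx, X[idx].sum(axis=0),
                       np.linalg.norm(X[idx], axis=1).min()))
    return groups

def range_search(X, groups, query, tau):
    """Return indices with cosine similarity >= tau, skipping any group whose
    upper bound already falls below the threshold."""
    q = query / np.linalg.norm(query)
    hits = []
    for idx, total, min_norm in groups:
        if q @ total / min_norm < tau:   # no member can reach tau: prune group
            continue
        sims = (X[idx] @ q) / np.linalg.norm(X[idx], axis=1)
        hits.extend(idx[sims >= tau].tolist())
    return hits
```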
- A Theoretical Analysis Of Nearest Neighbor Search On Approximate Near Neighbor Graph [51.880164098926166]
Graph-based algorithms have demonstrated state-of-the-art performance in the nearest neighbor search (NN-Search) problem.
There exists a practice-to-theory gap in graph-based NN-Search algorithms.
We present theoretical guarantees of solving NN-Search via greedy search on ANN-Graph for low dimensional and dense vectors.
arXiv Detail & Related papers (2023-03-10T21:18:34Z)
- Differentially-Private Hierarchical Clustering with Provable Approximation Guarantees [79.59010418610625]
We study differentially private approximation algorithms for hierarchical clustering.
We show strong lower bounds for the problem: any $\epsilon$-DP algorithm must exhibit $O(|V|^2/\epsilon)$-additive error for an input dataset.
We propose a private $1+o(1)$ approximation algorithm which also recovers the blocks exactly.
arXiv Detail & Related papers (2023-01-31T19:14:30Z)
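For intuition about the $O(|V|^2/\epsilon)$ term above: with edge-level privacy, each of the $\Theta(|V|^2)$ pairwise weights must be perturbed, and Laplace noise of scale $1/\epsilon$ per edge accumulates to exactly that order. The snippet below shows only this standard Laplace-mechanism step, not the paper's block-recovery algorithm.

```python
import numpy as np

def dp_similarity_matrix(W, epsilon, rng=None):
    """Release an edge-level eps-DP copy of a symmetric weight matrix by
    adding Laplace(1/eps) noise per edge (sensitivity 1 if a single edge
    weight can change by at most 1). Any clustering computed from the noisy
    matrix is then DP by post-processing."""
    rng = rng or np.random.default_rng()
    n = W.shape[0]
    noise = np.triu(rng.laplace(scale=1.0 / epsilon, size=(n, n)), 1)
    return W + noise + noise.T          # noise one triangle, mirror it
```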
- Communication-Efficient Adam-Type Algorithms for Distributed Data Mining [93.50424502011626]
We propose a class of novel distributed Adam-type algorithms (i.e., SketchedAMSGrad) utilizing sketching.
Our new algorithm achieves a fast convergence rate of $O(\frac{1}{\sqrt{nT}} + \frac{1}{(k/d)^2 T})$ with a communication cost of $O(k \log(d))$ at each iteration.
arXiv Detail & Related papers (2022-10-14T01:42:05Z)
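A rough sketch of the mechanism the $O(k \log(d))$ communication cost refers to: each worker transmits a $k$-of-$d$ compressed gradient and the Adam-type moments are updated from the compressed vector. Top-$k$ selection below is only a stand-in compressor, and the single-worker loop omits the distributed averaging the paper handles; it is not the authors' algorithm.

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude coordinates, zero the rest (a simple
    stand-in for a sketching operator transmitting O(k) values)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def amsgrad_step(x, grad, state, k, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad update driven by a compressed gradient."""
    g = top_k(grad, k)                  # what a worker would transmit
    state["m"] = b1 * state["m"] + (1 - b1) * g
    state["v"] = b2 * state["v"] + (1 - b2) * g * g
    state["vhat"] = np.maximum(state["vhat"], state["v"])  # AMSGrad max trick
    return x - lr * state["m"] / (np.sqrt(state["vhat"]) + eps)

# initialize with state = {"m": np.zeros(d), "v": np.zeros(d), "vhat": np.zeros(d)}
```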
- FINGER: Fast Inference for Graph-based Approximate Nearest Neighbor Search [20.928821121591493]
We propose FINGER, a fast inference method to achieve efficient graph search.
FINGER approximates the distance function by estimating angles between neighboring residual vectors with low-rank bases and distribution matching.
Empirically, HNSW accelerated with FINGER outperforms existing graph-based methods by 20%-60% across different benchmark datasets.
arXiv Detail & Related papers (2022-06-22T22:30:46Z)
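The angle-estimation idea above can be sketched algebraically: split the query and a neighbor into components along the current graph node plus residuals, then approximate only the residual-residual inner product in a shared low-rank basis. In practice the norms and projections of database vectors would be precomputed; this sketch recomputes them for clarity, and the function and argument names are assumptions.

```python
import numpy as np

def finger_style_estimate(c, q, n, basis):
    """Estimate ||q - n||^2 in the spirit of FINGER: decompose q and n into
    a component along the current node c plus a residual, then approximate
    the residual-residual inner product with an r x d orthonormal `basis`
    (r << d), replacing a d-dim dot product with an r-dim one."""
    u = c / np.linalg.norm(c)
    tq, tn = q @ u, n @ u                  # components along c
    q_res, n_res = q - tq * u, n - tn * u  # residuals (precomputable for n)
    approx_inner = (basis @ q_res) @ (basis @ n_res)  # r-dim inner product
    return (tq - tn) ** 2 + q_res @ q_res + n_res @ n_res - 2 * approx_inner
```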
- IRLI: Iterative Re-partitioning for Learning to Index [104.72641345738425]
Methods must trade off high accuracy against load balance and scalability in distributed settings.
We propose a novel approach called IRLI, which iteratively partitions the items by learning the relevant buckets directly from the query-item relevance data.
We mathematically show that IRLI retrieves the correct item with high probability under very natural assumptions and provides superior load balancing.
arXiv Detail & Related papers (2021-03-17T23:13:25Z)
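A toy rendering of the iterative loop above, with a nearest-centroid scorer standing in for IRLI's learned bucket classifier and its load-balancing machinery omitted entirely: items repeatedly move to the bucket their relevant queries score highest, and the scorer is refit between moves.

```python
import numpy as np

def irli_style_repartition(Q, rel, n_items, n_buckets, rounds=5, seed=0):
    """Toy iterative re-partitioning. Q: (n_queries, dim) query features;
    rel: list of (query_index, item_index) relevance pairs. Illustrative
    only: IRLI learns a classifier and enforces load balance."""
    rng = np.random.default_rng(seed)
    assign = rng.integers(n_buckets, size=n_items)      # random initial buckets
    for _ in range(rounds):
        # refit scorer: each bucket's centroid of the queries relevant to it
        cent = np.zeros((n_buckets, Q.shape[1]))
        cnt = np.zeros(n_buckets)
        for qi, it in rel:
            cent[assign[it]] += Q[qi]; cnt[assign[it]] += 1
        cent /= np.maximum(cnt, 1)[:, None]
        # move each item to the bucket its relevant queries score highest
        score = np.zeros((n_items, n_buckets))
        for qi, it in rel:
            score[it] += cent @ Q[qi]
        assign = score.argmax(axis=1)
    return assign
```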
- Leveraging Reinforcement Learning for evaluating Robustness of KNN Search Algorithms [0.0]
The problem of finding the K nearest neighbors of a given query point in a dataset has been studied for many years.
In this paper, we survey some novel K-Nearest Neighbor Search approaches that tackle the search problem from a computational perspective.
To evaluate the robustness of a KNNS approach against adversarial points, we propose a generic Reinforcement Learning based framework.
arXiv Detail & Related papers (2021-02-10T16:10:58Z)
- Adversarial Examples for $k$-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams [69.4411417775822]
Adversarial examples are a widely studied phenomenon in machine learning models.
We propose an algorithm for evaluating the adversarial robustness of $k$-nearest neighbor classification.
arXiv Detail & Related papers (2020-11-19T08:49:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.