kNN-Graph: An adaptive graph model for $k$-nearest neighbors
- URL: http://arxiv.org/abs/2601.16509v1
- Date: Fri, 23 Jan 2026 07:15:53 GMT
- Title: kNN-Graph: An adaptive graph model for $k$-nearest neighbors
- Authors: Jiaye Li, Gang Chen, Hang Xu, Shichao Zhang
- Abstract summary: We present an adaptive graph model that decouples inference latency from computational complexity. We demonstrate that this architecture significantly accelerates inference, achieving real-time performance without compromising classification accuracy.
- Score: 17.882218619659756
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The k-nearest neighbors (kNN) algorithm is a cornerstone of non-parametric classification in artificial intelligence, yet its deployment in large-scale applications is persistently constrained by the computational trade-off between inference speed and accuracy. Existing approximate nearest neighbor solutions accelerate retrieval but often degrade classification precision and lack adaptability in selecting the optimal neighborhood size (k). Here, we present an adaptive graph model that decouples inference latency from computational complexity. By integrating a Hierarchical Navigable Small World (HNSW) graph with a pre-computed voting mechanism, our framework transfers the entire computational burden of neighbor selection and weighting to the training phase. Within this topological structure, higher graph layers enable rapid navigation, while lower layers encode precise, node-specific decision boundaries with adaptive neighbor counts. Benchmarking against eight state-of-the-art baselines across six diverse datasets, we demonstrate that this architecture significantly accelerates inference, achieving real-time performance without compromising classification accuracy. These findings offer a scalable, robust solution to the long-standing inference bottleneck of kNN, establishing a new structural paradigm for graph-based non-parametric learning.
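The abstract does not include an implementation, but the split it describes (all neighbor selection and voting pushed to training, inference reduced to graph navigation) can be sketched with an off-the-shelf HNSW library. The sketch below uses hnswlib; the adaptive-k rule (grow k until the majority vote stabilizes) and all function names are illustrative assumptions of ours, not the authors' published method.

```python
# Minimal sketch, NOT the paper's implementation: pre-compute each training
# node's vote (with a node-specific k) at fit time, so a query costs only
# one HNSW lookup. Requires `pip install hnswlib numpy`.
import numpy as np
import hnswlib

def fit(X_train, y_train, k_max=25, M=16, ef_construction=200):
    n, dim = X_train.shape
    index = hnswlib.Index(space="l2", dim=dim)
    index.init_index(max_elements=n, ef_construction=ef_construction, M=M)
    index.add_items(X_train, np.arange(n))
    index.set_ef(max(k_max + 1, 50))

    # Query the index with the training set itself; the first hit of each
    # row is (typically) the node itself and is skipped.
    nbrs, _ = index.knn_query(X_train, k=k_max + 1)
    votes = np.empty(n, dtype=y_train.dtype)
    for i in range(n):
        neigh = y_train[nbrs[i, 1:]]
        label = neigh[0]
        # Assumed adaptive-k rule: grow k over odd values until the
        # majority vote stops changing.
        for k in range(3, k_max + 1, 2):
            vals, counts = np.unique(neigh[:k], return_counts=True)
            majority = vals[np.argmax(counts)]
            if majority == label:
                break
            label = majority
        votes[i] = label
    return index, votes

def predict(index, votes, X_query):
    # Inference is pure graph navigation: route each query to its nearest
    # stored node and return that node's pre-computed vote.
    hits, _ = index.knn_query(X_query, k=1)
    return votes[hits[:, 0]]
```

Under this reading, heavier index parameters (M, ef_construction) and the vote pre-computation only affect training time; inference stays a constant-cost k=1 query, consistent with the decoupling the abstract claims.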
Related papers
- ECHO: Encoding Communities via High-order Operators [8.970269049715933]
Community detection in attributed networks faces a fundamental divide: topological algorithms ignore semantic features, while Graph Neural Networks (GNNs) encounter devastating computational bottlenecks. We introduce ECHO, a scalable, self-supervised architecture that reframes community detection as an adaptive, multi-scale diffusion process. ECHO completely bypasses traditional O(N^2) memory bottlenecks without sacrificing the mathematical precision of global gradients.
arXiv Detail & Related papers (2026-02-25T22:14:29Z)
- DeltaGNN: Graph Neural Network with Information Flow Control [5.563171090433323]
Graph Neural Networks (GNNs) are designed to process graph-structured data through neighborhood aggregations in the message-passing process. Message passing enables GNNs to understand short-range spatial interactions, but also causes them to suffer from over-smoothing and over-squashing. We propose a mechanism called "information flow control" to address over-smoothing and over-squashing with linear computational overhead. We benchmark our model across 10 real-world datasets, including graphs with varying sizes, topologies, densities, and homophilic ratios, showing superior performance.
arXiv Detail & Related papers (2025-01-10T14:34:20Z)
- Sparse Decomposition of Graph Neural Networks [20.768412002413843]
We propose an approach to reduce the number of nodes that are included during aggregation. We achieve this through a sparse decomposition, learning to approximate node representations using a weighted sum of linearly transformed features. We demonstrate via extensive experiments that our method outperforms other baselines designed for inference speedup.
arXiv Detail & Related papers (2024-10-25T17:52:16Z)
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages the transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- GNN at the Edge: Cost-Efficient Graph Neural Network Processing over Distributed Edge Servers [24.109721494781592]
Graph Neural Networks (GNNs) at the edge are still under exploration, presenting a stark disparity to their otherwise broad edge adoption.
This paper studies the cost optimization for distributed GNN processing over a multi-tier heterogeneous edge network.
We show that our approach achieves superior performance over de facto baselines, with more than 95.8% cost reduction and fast convergence.
arXiv Detail & Related papers (2022-10-31T13:03:16Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensemble training scheme, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Beyond kNN: Adaptive, Sparse Neighborhood Graphs via Optimal Transport [0.1933681537640272]
Nearest neighbour graphs are widely used to capture the geometry or topology of a dataset.
One of the most common strategies to construct such a graph is based on selecting a fixed number k of nearest neighbours (kNN) for each point.
We propose a simple approach to construct an adaptive neighbourhood graph from a single parameter, based on quadratically regularised optimal transport (a minimal sketch of this mechanism appears after this list).
arXiv Detail & Related papers (2022-08-01T04:24:58Z)
- Learning to Solve Combinatorial Graph Partitioning Problems via Efficient Exploration [72.15369769265398]
Experimentally, ECORD achieves a new SOTA for RL algorithms on the Maximum Cut problem.
Compared to the nearest competitor, ECORD reduces the optimality gap by up to 73%.
arXiv Detail & Related papers (2022-05-27T17:13:10Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Spatial-spectral Hyperspectral Image Classification via Multiple Random Anchor Graphs Ensemble Learning [88.60285937702304]
This paper proposes a novel spatial-spectral HSI classification method via multiple random anchor graphs ensemble learning (RAGE).
Firstly, the local binary pattern is adopted to extract more descriptive features on each selected band, preserving local structures and subtle changes of a region.
Secondly, adaptive neighbor assignment is introduced in the construction of the anchor graphs to reduce computational complexity.
arXiv Detail & Related papers (2021-03-25T09:31:41Z)
- Unrolling of Deep Graph Total Variation for Image Denoising [106.93258903150702]
In this paper, we combine classical graph signal filtering with deep feature learning into a competitive hybrid design.
We employ interpretable analytical low-pass graph filters and use 80% fewer network parameters than the state-of-the-art DL denoising scheme DnCNN.
arXiv Detail & Related papers (2020-10-21T20:04:22Z)
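As referenced in the "Beyond kNN" entry above, the adaptive-neighbourhood idea can be illustrated with a per-point simplification of quadratically regularised optimal transport: minimising <w, d> + (eps/2)*||w||^2 over the probability simplex has a sparse closed-form solution (essentially a sparsemax over negative distances), so each point automatically receives its own neighbour count. The sketch below is our own minimal reading, not that paper's full formulation; eps and the function names are illustrative.

```python
import numpy as np

def adaptive_neighborhood_weights(dists, eps):
    # Closed-form minimiser of <w, d> + (eps/2)*||w||^2 over the simplex:
    # w_j = max(0, (lam - d_j) / eps), with lam chosen so the weights sum
    # to 1. The quadratic term makes w sparse, so the number of non-zero
    # neighbours adapts to the local distance profile.
    d = np.sort(dists)
    csum = np.cumsum(d)
    m_vals = np.arange(1, len(d) + 1)
    lam_candidates = (eps + csum) / m_vals  # lam if the m smallest are active
    m = np.max(np.where(lam_candidates > d)[0]) + 1  # largest valid support
    lam = (eps + csum[m - 1]) / m
    return np.maximum(0.0, (lam - dists) / eps)

def adaptive_graph(X, eps=1.0):
    # Dense O(n^2) pairwise squared distances for clarity; a practical
    # version would restrict each row to a candidate neighbour set first.
    D = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    np.fill_diagonal(D, np.inf)  # exclude self-loops
    return np.stack([adaptive_neighborhood_weights(row, eps) for row in D])
```

A single parameter eps controls the graph: as eps shrinks, each row collapses toward its single nearest neighbour (recovering 1-NN), and as eps grows, mass spreads over more neighbours, mirroring that entry's "single parameter" claim.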
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.