An arithmetic method algorithm optimizing k-nearest neighbors compared to regression algorithms and evaluated on real world data sources
- URL: http://arxiv.org/abs/2602.08577v1
- Date: Mon, 09 Feb 2026 12:17:16 GMT
- Title: An arithmetic method algorithm optimizing k-nearest neighbors compared to regression algorithms and evaluated on real world data sources
- Authors: Theodoros Anagnostopoulos, Evanthia Zervoudi, Christos Anagnostopoulos, Apostolos Christopoulos, Bogdan Wierzbinski
- Abstract summary: k-Nearest Neighbors (k-NN) is a common non-parametric regression algorithm. An Arithmetic Method Algorithm (AMA) is adopted to assess the efficiency of the introduced arithmetic method. An Arithmetic Method Regression (AMR) algorithm is proposed as an optimization of k-NN adopting the potentiality of AMA.
- Score: 1.7782261079530317
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Linear regression analysis focuses on predicting a numeric regressand value from given regressor values. In this context, k-Nearest Neighbors (k-NN) is a common non-parametric regression algorithm that performs efficiently compared with other algorithms in the literature. In this research effort, an optimization of the k-NN algorithm is proposed that exploits an introduced arithmetic method capable of solving linear equations involving an arbitrary number of real variables. Specifically, an Arithmetic Method Algorithm (AMA) is adopted to assess the efficiency of the introduced arithmetic method, while an Arithmetic Method Regression (AMR) algorithm is proposed as an optimization of k-NN built on AMA. The AMR algorithm is compared with other regression algorithms, according to an introduced optimal inference decision rule, and evaluated on publicly available real-world data sources. Results are promising: the proposed AMR algorithm performs comparably to the other algorithms and, in most cases, outperforms k-NN. These results indicate that the introduced AMR is an optimization of k-NN.
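The abstract describes AMR only at a high level: optimize k-NN by exploiting a method that solves linear equations over real variables. As a hypothetical illustration of that general idea (not the paper's actual AMA/AMR procedure), the Python sketch below contrasts plain k-NN averaging with a variant that solves a small linear system, via least squares, over the k nearest neighbors; the function names, the least-squares step, and the toy data are all assumptions.

```python
# Hypothetical sketch: plain k-NN regression vs. an "AMR-style" variant that
# replaces neighbor averaging with a local linear solve. The paper's actual
# arithmetic method is not specified in the abstract above, so the
# least-squares system below is an assumed stand-in, purely for illustration.
import numpy as np

def knn_regress(X, y, query, k=5):
    """Plain k-NN regression: average the targets of the k nearest neighbors."""
    idx = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    return y[idx].mean()

def amr_style_regress(X, y, query, k=5):
    """Assumed AMR-style step: solve a linear system fitted on the k nearest
    neighbors (with intercept) and evaluate it at the query point."""
    idx = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    A = np.hstack([X[idx], np.ones((k, 1))])           # design matrix [x, 1]
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)  # solve A @ coef ~ y
    return np.append(query, 1.0) @ coef

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.1, size=200)
q = np.array([5.0, 5.0])
print(knn_regress(X, y, q), amr_style_regress(X, y, q))
```

On data with locally linear structure, the linear solve can extrapolate within the neighborhood rather than merely averaging, which is one plausible reading of how an arithmetic-method step could improve on k-NN.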
Related papers
- Rank-One Modified Value Iteration [2.5665716218583965]
We provide a novel algorithm for solving planning and learning problems of Markov decision processes. We show that the proposed algorithm consistently outperforms first-order algorithms and their accelerated versions for both planning and learning problems. A brief sketch of the value-iteration baseline follows the citation below.
arXiv Detail & Related papers (2025-05-03T14:06:50Z)
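The rank-one modification itself is not described in the snippet above, so the sketch below shows only the standard value-iteration baseline such a method would build on; the toy MDP, function names, and parameters are illustrative assumptions.

```python
# Minimal sketch of standard value iteration on a toy MDP. The paper's
# rank-one modification is not described in the snippet, so only the plain
# baseline it would modify is shown; P, R, and all names are illustrative.
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P: (A, S, S) transition tensor, R: (A, S) rewards.
    Returns the optimal values V and a greedy policy."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)        # (A, S): one-step lookahead per action
        V_new = Q.max(axis=0)          # Bellman optimality backup
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(4), size=(2, 4))   # 2 actions, 4 states, rows sum to 1
R = rng.normal(size=(2, 4))
V, policy = value_iteration(P, R)
print(V, policy)
```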
- Preference-Based Gradient Estimation for ML-Guided Approximate Combinatorial Optimization [15.102119312523696]
Combinatorial optimization (CO) problems arise across a broad spectrum of domains, including medicine, logistics, and manufacturing. We propose a learning-based approach that enhances existing non-learned approximation algorithms for CO. Our method is trained end-to-end in a self-supervised fashion, using a novel gradient estimation scheme that treats the approximation algorithm as a black box.
arXiv Detail & Related papers (2025-02-26T18:23:07Z)
- A Full Adagrad algorithm with O(Nd) operations [4.389938747401259]
The study offers efficient and practical algorithms for large-scale applications. This innovative strategy significantly reduces the complexity and resource demands typically associated with full-matrix methods. A brief sketch of the diagonal Adagrad baseline follows the citation below.
arXiv Detail & Related papers (2024-05-03T08:02:08Z)
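The snippet above does not detail how the full-matrix update is brought down to O(Nd) operations, so the sketch below shows only the classical diagonal Adagrad rule for reference; the function name, toy objective, and hyperparameters are illustrative assumptions.

```python
# Sketch of the standard diagonal Adagrad update, shown for reference; the
# paper's O(Nd) full-matrix variant is not detailed in the snippet above.
import numpy as np

def adagrad_step(w, grad, accum, lr=0.1, eps=1e-8):
    """One diagonal-Adagrad step: each coordinate's step size shrinks with
    its accumulated squared-gradient history."""
    accum += grad ** 2
    w -= lr * grad / (np.sqrt(accum) + eps)
    return w, accum

# Toy usage: minimize f(w) = ||w - 3||^2 coordinate-wise.
w, accum = np.zeros(3), np.zeros(3)
for _ in range(500):
    grad = 2.0 * (w - 3.0)
    w, accum = adagrad_step(w, grad, accum)
print(w)  # approaches [3, 3, 3]
```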
- Conditional Mean and Variance Estimation via \textit{k}-NN Algorithm with Automated Variance Selection [9.943131787772323]
We introduce a novel \textit{k}-nearest neighbor (\textit{k}-NN) regression method for joint estimation of the conditional mean and variance. The proposed algorithm preserves the computational efficiency and manifold-learning capabilities of classical non-parametric \textit{k}-NN models. A brief estimation sketch follows the citation below.
arXiv Detail & Related papers (2024-02-02T18:54:18Z)
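The automated variance selection is not described in the snippet above, so the sketch below illustrates only the basic idea of estimating a conditional mean and variance from the same k nearest neighbors, using a fixed k; all names and the toy heteroscedastic data are assumptions.

```python
# Hedged sketch: joint conditional mean/variance estimation via k-NN with a
# fixed k. The paper's automated variance selection is not described in the
# snippet above; all names here are illustrative.
import numpy as np

def knn_mean_var(X, y, query, k=10):
    """Estimate E[y | x = query] and Var[y | x = query] from the k nearest
    neighbors' target values."""
    idx = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    neighbors = y[idx]
    return neighbors.mean(), neighbors.var(ddof=1)

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(500, 1))
noise_scale = 0.2 + 0.1 * np.abs(X[:, 0])           # heteroscedastic noise
y = np.sin(X[:, 0]) + rng.normal(0.0, noise_scale)
mean, var = knn_mean_var(X, y, np.array([1.0]))
print(f"mean ~ {mean:.3f} (sin(1) ~ 0.841), var ~ {var:.3f}")
```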
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T\sqrt{T})$. A brief sketch of the EI acquisition follows the citation below.
arXiv Detail & Related papers (2022-03-15T13:17:53Z)
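The snippet above does name the variant's incumbent (defined via the GP predictive mean), so the sketch below computes the classical Expected Improvement acquisition with the incumbent taken as the best posterior mean among the candidates; the posterior means and standard deviations are stand-ins for a fitted GP, and all names are illustrative.

```python
# Sketch of the classical Expected Improvement (EI) acquisition for
# minimization, with the incumbent set to the best GP posterior mean as in
# the variant described above. mu/sigma are stand-ins for a fitted GP.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, incumbent):
    """EI for minimization: E[max(incumbent - f(x), 0)] with f(x) ~ N(mu, sigma^2)."""
    sigma = np.maximum(sigma, 1e-12)       # guard against zero predictive std
    z = (incumbent - mu) / sigma
    return (incumbent - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy usage: posterior means/stds at four candidate points.
mu = np.array([0.2, 0.0, -0.1, 0.3])
sigma = np.array([0.05, 0.3, 0.2, 0.4])
incumbent = mu.min()                       # incumbent via the GP predictive mean
ei = expected_improvement(mu, sigma, incumbent)
print(ei, "-> query candidate", ei.argmax())
```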
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude. A toy bilevel sketch follows the citation below.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
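The two proposed algorithms are not described in the snippet above, so the sketch below shows only a generic unrolled-hypergradient baseline on a toy quadratic bilevel problem; the objectives, step sizes, and names are all illustrative assumptions.

```python
# Toy bilevel optimization via an unrolled hypergradient, shown as a generic
# baseline; the paper's two algorithms are not described in the snippet above.
# Outer: min_x f(x, y*(x)) with f(x, y) = (x - 1)^2 + y^2
# Inner: y*(x) = argmin_y (y - x)^2, so y*(x) = x and dy*/dx = 1.

def hypergradient(x, inner_lr=0.5, inner_steps=50):
    y = 0.0
    for _ in range(inner_steps):           # approximate y*(x) by gradient descent
        y -= inner_lr * 2.0 * (y - x)
    # df/dx = 2(x - 1) + 2 y * dy/dx; dy/dx = 1 exactly for this toy inner problem
    return 2.0 * (x - 1.0) + 2.0 * y

x = 5.0
for _ in range(100):                       # outer gradient descent on x
    x -= 0.1 * hypergradient(x)
print(x)  # converges to 0.5, the minimizer of (x - 1)^2 + x^2
```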
- Benchmarking Simulation-Based Inference [5.3898004059026325]
Recent advances in probabilistic modelling have led to a large number of simulation-based inference algorithms which do not require numerical evaluation of likelihoods.
We provide a benchmark with inference tasks and suitable performance metrics, with an initial selection of algorithms.
We found that the choice of performance metric is critical, that even state-of-the-art algorithms have substantial room for improvement, and that sequential estimation improves sample efficiency.
arXiv Detail & Related papers (2021-01-12T18:31:22Z)
- Average-Reward Off-Policy Policy Evaluation with Function Approximation [66.67075551933438]
We consider off-policy policy evaluation with function approximation in average-reward MDPs.
Bootstrapping is necessary and, along with off-policy learning and function approximation, results in the deadly triad.
We propose two novel algorithms, reproducing the celebrated success of Gradient TD algorithms in the average-reward setting.
arXiv Detail & Related papers (2021-01-08T00:43:04Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proven to achieve the best-available convergence for objectives that do not satisfy the Polyak-Lojasiewicz (PL) condition, while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.