Classifiers in High Dimensional Hilbert Metrics
- URL: http://arxiv.org/abs/2601.13410v1
- Date: Mon, 19 Jan 2026 21:42:02 GMT
- Title: Classifiers in High Dimensional Hilbert Metrics
- Authors: Aditya Acharya, Auguste H. Gezalyan, David M. Mount
- Abstract summary: Classifying points in high dimensional spaces is a fundamental geometric problem in machine learning. We first present an efficient LP-based algorithm in the Hilbert metric for the large-margin SVM problem. We also present efficient algorithms for the soft-margin SVM problem and for nearest neighbor-based classification in the Hilbert metric.
- Score: 3.2990108763152164
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classifying points in high dimensional spaces is a fundamental geometric problem in machine learning. In this paper, we address classifying points in the $d$-dimensional Hilbert polygonal metric. The Hilbert metric is a generalization of the Cayley-Klein hyperbolic distance to arbitrary convex bodies and has a diverse range of applications in machine learning and convex geometry. We first present an efficient LP-based algorithm in the metric for the large-margin SVM problem. Our algorithm runs in time polynomial in the number of points, bounding facets, and dimension. This is a significant improvement on previous works, which either provide no theoretical guarantees on running time, or suffer from exponential runtime. We further consider the closely related Funk metric. We also present efficient algorithms for the soft-margin SVM problem and for nearest neighbor-based classification in the Hilbert metric.
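For concreteness, the Hilbert distance between two interior points of a convex polytope can be computed from the cross-ratio along the chord through them. The sketch below is a minimal illustration only: the helper name `hilbert_distance` and the halfspace representation $Ax \le b$ are assumptions for this example, not the paper's LP-based SVM algorithm.

```python
import numpy as np

def hilbert_distance(p, q, A, b):
    """Hilbert distance between interior points p, q of the polytope
    {x : A @ x <= b}, via the cross-ratio of the chord through p and q.
    (Hypothetical helper for illustration, not the paper's algorithm.)"""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = q - p
    if np.allclose(d, 0.0):
        return 0.0
    # Points on the line are p + t*d; clip the parameter t against each
    # halfspace a_i . (p + t d) <= b_i to find where the chord exits.
    t_min, t_max = -np.inf, np.inf
    for a_i, b_i in zip(A, b):
        denom = a_i @ d
        if abs(denom) < 1e-12:
            continue                  # line parallel to this facet
        t = (b_i - a_i @ p) / denom
        if denom > 0:
            t_max = min(t_max, t)     # facet bounds t from above
        else:
            t_min = max(t_min, t)     # facet bounds t from below
    # p sits at t=0, q at t=1; the boundary hits are at t_min < 0 and
    # t_max > 1. Cross-ratio form of the Hilbert distance:
    return 0.5 * np.log(((1.0 - t_min) * t_max) / ((-t_min) * (t_max - 1.0)))
```

On the unit square with $p=(0.25, 0.5)$ and $q=(0.75, 0.5)$, the chord endpoints lie at $t=-0.5$ and $t=1.5$, giving distance $\tfrac{1}{2}\ln 9 = \ln 3$; the formula is symmetric in $p$ and $q$ by construction.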
Related papers
- The Geometry of the Pivot: A Note on Lazy Pivoted Cholesky and Farthest Point Sampling [0.0]
The Pivoted Cholesky decomposition is a standard tool for this task. We elucidate the geometric interpretation of the algorithm within the Reproducing Kernel Hilbert Space. We provide a concise derivation and a minimalist Python implementation to bridge the gap between theory and practice.
arXiv Detail & Related papers (2026-01-07T08:44:03Z) - Mathematical Programming Algorithms for Convex Hull Approximation with a Hyperplane Budget [0.0]
We search for a convex polyhedron with at most K faces, containing all the positive points and no negative point.
The problem is known in the literature for pure convex polyhedral approximation.
Our interest stems from its possible applications in constraint learning.
arXiv Detail & Related papers (2024-07-24T15:08:52Z) - Disentangled Representation Learning with the Gromov-Monge Gap [65.73194652234848]
Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning. We introduce a novel approach to disentangled representation learning based on quadratic optimal transport. We demonstrate the effectiveness of our approach for quantifying disentanglement across four standard benchmarks.
arXiv Detail & Related papers (2024-07-10T16:51:32Z) - Computing Star Discrepancies with Numerical Black-Box Optimization Algorithms [56.08144272945755]
We compare 8 popular numerical black-box optimization algorithms on the $L_\infty$ star discrepancy problem.
We show that all used solvers perform very badly on a large majority of the instances.
We suspect that state-of-the-art numerical black-box optimization techniques fail to capture the global structure of the problem.
arXiv Detail & Related papers (2023-06-29T14:57:56Z) - Pseudonorm Approachability and Applications to Regret Minimization [73.54127663296906]
We convert high-dimensional $\ell_\infty$-approachability problems to low-dimensional pseudonorm approachability problems.
We develop an algorithmic theory of pseudonorm approachability, analogous to previous work on approachability for $\ell_p$ and other norms.
arXiv Detail & Related papers (2023-02-03T03:19:14Z) - On a class of geodesically convex optimization problems solved via Euclidean MM methods [50.428784381385164]
We show how such problems can be rewritten as a difference of Euclidean convex functions, covering several classes of problems in statistics and machine learning. Ultimately, we hope this work broadens the reach of these methods.
arXiv Detail & Related papers (2022-06-22T23:57:40Z) - Scalable Differentially Private Clustering via Hierarchically Separated Trees [82.69664595378869]
We show that our method computes a solution with cost at most $O(d^{3/2}\log n)\cdot \mathrm{OPT} + O(k d^2 \log^2 n / \epsilon^2)$, where $\epsilon$ is the privacy guarantee.
Although the worst-case guarantee is worse than that of state-of-the-art private clustering methods, the algorithm we propose is practical.
arXiv Detail & Related papers (2022-06-17T09:24:41Z) - Correlation Clustering in Constant Many Parallel Rounds [42.10280805559555]
Correlation clustering is a central topic in unsupervised learning, with many applications in ML and data mining.
In this work we propose a massively parallel computation (MPC) algorithm for this problem that is considerably faster than prior work.
Our algorithm uses machines with memory sublinear in the number of nodes in the graph and returns a constant approximation while running only for a constant number of rounds.
arXiv Detail & Related papers (2021-06-15T21:45:45Z) - On Agnostic PAC Learning using $\mathcal{L}_2$-polynomial Regression and Fourier-based Algorithms [10.66048003460524]
We develop a framework using Hilbert spaces as a proxy to analyze PAC learning problems with structural properties.
We demonstrate that PAC learning with 0-1 loss is equivalent to an optimization in the Hilbert space domain.
arXiv Detail & Related papers (2021-02-11T21:28:55Z) - Hybrid Trilinear and Bilinear Programming for Aligning Partially Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms which can align partially overlapping point sets and are invariant to the corresponding transformations, as in the robust point matching (RPM) algorithm.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomial transformations to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z) - Space-filling Curves for High-performance Data Mining [0.0]
Space-filling curves like the Hilbert curve, Peano curve, and Z-order map points from a two- or higher-dimensional space to a one-dimensional space while preserving locality.
They have numerous applications, such as search structures, computer graphics, numerical simulation, and cryptography, and can be used to make various algorithms cache-oblivious.
arXiv Detail & Related papers (2020-08-04T16:41:16Z)
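The Z-order mapping mentioned in the last entry can be sketched in a few lines. The code below shows the standard Morton-code bit-interleaving construction; the helper name `interleave2` and the 16-bit coordinate width are assumptions for this example, not code from the cited paper.

```python
def interleave2(x, y):
    """Z-order (Morton) index of a 2-D point with nonnegative 16-bit
    integer coordinates: interleave the bits of x (even positions)
    and y (odd positions). Nearby points tend to get nearby indices."""
    z = 0
    for i in range(16):
        z |= ((x >> i) & 1) << (2 * i)      # bit i of x -> bit 2i of z
        z |= ((y >> i) & 1) << (2 * i + 1)  # bit i of y -> bit 2i+1 of z
    return z
```

For example, the four cells of a 2x2 grid are visited in the Z-shaped order (0,0), (1,0), (0,1), (1,1), which is where the curve gets its name.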
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.