Convexity-Driven Projection for Point Cloud Dimensionality Reduction
- URL: http://arxiv.org/abs/2509.22043v1
- Date: Fri, 26 Sep 2025 08:25:12 GMT
- Title: Convexity-Driven Projection for Point Cloud Dimensionality Reduction
- Authors: Suman Sanyal
- Abstract summary: We propose Convexity-Driven Projection (CDP), a boundary-free linear method for dimensionality reduction of point clouds.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose Convexity-Driven Projection (CDP), a boundary-free linear method for dimensionality reduction of point clouds that targets preserving detour-induced local non-convexity. CDP builds a $k$-NN graph, identifies admissible pairs whose Euclidean-to-shortest-path ratios are below a threshold, and aggregates their normalized directions to form a positive semidefinite non-convexity structure matrix. The projection uses the top-$k$ eigenvectors of the structure matrix. We give two verifiable guarantees. A pairwise a-posteriori certificate that bounds the post-projection distortion for each admissible pair, and an average-case spectral bound that links expected captured direction energy to the spectrum of the structure matrix, yielding quantile statements for typical distortion. Our evaluation protocol reports fixed- and reselected-pairs detour errors and certificate quantiles, enabling practitioners to check guarantees on their data.
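The abstract describes a concrete pipeline: build a $k$-NN graph, keep pairs whose Euclidean-to-shortest-path ratio falls below a threshold, accumulate their normalized directions into a PSD structure matrix, and project onto its top eigenvectors. A minimal sketch of that pipeline is below; the function name, parameter defaults, and implementation details are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial import cKDTree

def cdp_projection(X, n_neighbors=10, ratio_threshold=0.9, k=2):
    """Sketch of Convexity-Driven Projection (CDP).

    Builds a k-NN graph, finds admissible pairs whose
    Euclidean-to-shortest-path ratio falls below the threshold,
    aggregates their normalized directions into a PSD structure
    matrix, and projects onto its top-k eigenvectors.
    """
    n, d = X.shape
    tree = cKDTree(X)
    dists, idx = tree.query(X, k=n_neighbors + 1)  # first neighbor is the point itself

    # Dense k-NN graph with Euclidean edge weights (zeros mean "no edge").
    W = np.zeros((n, n))
    for i in range(n):
        for dist, j in zip(dists[i, 1:], idx[i, 1:]):
            W[i, j] = W[j, i] = dist
    geo = shortest_path(W, method="D", directed=False)

    # Admissible pairs: Euclidean distance much shorter than the graph detour.
    S = np.zeros((d, d))
    for i in range(n):
        for j in range(i + 1, n):
            if not np.isfinite(geo[i, j]):
                continue  # skip disconnected pairs
            eucl = np.linalg.norm(X[i] - X[j])
            if eucl > 0 and eucl / geo[i, j] < ratio_threshold:
                u = (X[i] - X[j]) / eucl  # normalized direction
                S += np.outer(u, u)       # rank-one PSD update

    # Project onto the top-k eigenvectors of the structure matrix.
    eigvals, eigvecs = np.linalg.eigh(S)
    P = eigvecs[:, ::-1][:, :k]  # columns sorted by decreasing eigenvalue
    return X @ P
```

Because each update is an outer product $uu^\top$, the structure matrix is PSD by construction, so `eigh` applies and its top eigenvectors are well defined.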
Related papers
- Regularized Online RLHF with Generalized Bilinear Preferences [68.44113000390544]
We consider the problem of contextual online RLHF with general preferences. We adopt the Generalized Bilinear Preference Model to capture preferences via low-rank, skew-symmetric matrices. We prove that the dual gap of the greedy policy is bounded by the square of the estimation error.
arXiv Detail & Related papers (2026-02-26T15:27:53Z) - Deterministic Zeroth-Order Mirror Descent via Vector Fields with A Posteriori Certification [45.85698554568285]
We develop a deterministic zeroth-order mirror descent framework by replacing gradients with a general vector field. Our analysis provides a template for last-iterate inequality evaluations. Together, these results expose a hidden geometric structure linking Bregman identities, deterministic certification, and robust conic geometry in zeroth-order mirror descent.
arXiv Detail & Related papers (2026-01-31T10:05:05Z) - An approach to Fisher-Rao metric for infinite dimensional non-parametric information geometry [0.6138671548064355]
Being infinite dimensional, non-parametric information geometry has long faced an "intractability barrier". This paper introduces a novel framework that resolves the intractability with an Orthogonal Decomposition of the Tangent Space. By defining the Information Capture Ratio, we provide a rigorous method for estimating intrinsic dimensionality in high-dimensional data.
arXiv Detail & Related papers (2025-12-25T00:18:41Z) - Unsupervised Conformal Inference: Bootstrapping and Alignment to Control LLM Uncertainty [49.19257648205146]
We propose an unsupervised conformal inference framework for generation. Our gates achieve close-to-nominal coverage and provide tighter, more stable thresholds than split UCP. The result is a label-free, API-compatible gate for test-time filtering.
arXiv Detail & Related papers (2025-09-26T23:40:47Z) - Graph-based Clustering Revisited: A Relaxation of Kernel $k$-Means Perspective [73.18641268511318]
We propose a graph-based clustering algorithm that relaxes only the orthonormal constraint to derive clustering results. To handle the doubly stochastic constraint, we transform the non-negative constraint into a class-probability parameter.
arXiv Detail & Related papers (2025-09-23T09:14:39Z) - Provable Non-Convex Euclidean Distance Matrix Completion: Geometry, Reconstruction, and Robustness [8.113729514518495]
The Euclidean Distance Matrix Completion problem arises in a broad range of applications, including sensor network localization, molecular conformation, and manifold learning. In this paper, we propose a low-rank matrix completion formulation over the space of positive semi-definite Gram matrices. The available distance measurements are encoded as expansion coefficients in a non-orthogonal basis, and optimization over the Gram matrix implicitly enforces geometric consistency through nonnegativity and the triangle inequality.
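The link between a Euclidean distance matrix and the Gram matrix that the summary above optimizes over is the classical double-centering identity $G = -\tfrac{1}{2} J D J$ with $J = I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^\top$; a short sketch (not from the paper itself, just the standard construction):

```python
import numpy as np

def edm_to_gram(D):
    """Recover a centered Gram matrix from a matrix of squared
    Euclidean distances via double centering: G = -1/2 * J D J,
    with J = I - (1/n) 11^T. For a valid EDM, G is PSD.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ D @ J

# Points, squared pairwise distances, and the recovered Gram matrix.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
Xc = X - X.mean(axis=0)                               # centered points
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # squared EDM
G = edm_to_gram(D)
assert np.allclose(G, Xc @ Xc.T)  # Gram matrix of the centered points
```

Optimizing over PSD Gram matrices rather than over point coordinates is what makes the completion problem amenable to low-rank matrix techniques.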
arXiv Detail & Related papers (2025-07-31T18:40:42Z) - Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy [57.54306942529943]
We propose an Approximately Orthogonal Fine-Tuning (AOFT) strategy for representing the low-rank weight matrices. Our method achieves competitive performance across a range of downstream image classification tasks.
arXiv Detail & Related papers (2025-07-17T16:09:05Z) - Euclidean Distance Matrix Completion via Asymmetric Projected Gradient Descent [13.27202712518471]
This paper proposes and analyzes a gradient-type algorithm based on Burer-Monteiro factorization, called Asymmetric Projected Gradient Descent (APGD).
arXiv Detail & Related papers (2025-04-28T07:13:23Z) - Linear Convergence of Black-Box Variational Inference: Should We Stick the Landing? [14.2377621491791]
Black-box variational inference converges at a geometric (traditionally called "linear") rate under perfect variational family specification.
We also improve existing analysis on the regular closed-form entropy gradient estimators.
arXiv Detail & Related papers (2023-07-27T06:32:43Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
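GDPA itself is specialized, but the Gershgorin circle theorem underlying it gives simple, cheap eigenvalue bounds. A minimal sketch (my own illustration, not code from the paper): each eigenvalue lies in some disc centered at a diagonal entry with radius the off-diagonal row sum, so discs confined to the nonnegative half-line certify positive semidefiniteness.

```python
import numpy as np

def gershgorin_discs(A):
    """Gershgorin discs of a square matrix: centers are the diagonal
    entries, radii are the off-diagonal absolute row sums."""
    centers = np.diag(A).astype(float)
    radii = np.abs(A).sum(axis=1) - np.abs(centers)
    return centers, radii

# Every eigenvalue of a real symmetric matrix lies in some disc.
A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
centers, radii = gershgorin_discs(A)
for lam in np.linalg.eigvalsh(A):
    assert any(abs(lam - c) <= r + 1e-9 for c, r in zip(centers, radii))

# If all discs lie on the nonnegative half-line, A is PSD.
assert all(c - r >= 0 for c, r in zip(centers, radii))
```

This disc-based PSD certificate is what GDPA-style methods exploit to avoid explicit eigendecomposition or projection steps.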
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable semi-implicit variational inference (SIVI) method. Our method optimizes SIVI's evidence lower bound with reduced gradient variance.
arXiv Detail & Related papers (2021-01-15T11:39:09Z) - Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.