Continuous Multidimensional Scaling
- URL: http://arxiv.org/abs/2402.04436v3
- Date: Wed, 11 Dec 2024 17:22:54 GMT
- Title: Continuous Multidimensional Scaling
- Authors: Michael W. Trosset, Carey E. Priebe
- Abstract summary: Multidimensional scaling (MDS) is the act of embedding proximity information about a set of $n$ objects in $d$-dimensional Euclidean space. Here we are concerned with embedding dissimilarities by minimizing Kruskal's (1964) raw stress criterion. Standard results from the theory of point-to-set maps can be used to establish that, if $n$ is fixed and a sequence of dissimilarity matrices converges, then the limit of their embedded structures is the embedded structure of the limiting dissimilarity matrix. We present such a reformulation, *continuous MDS*.
- Score: 14.537835814280221
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multidimensional scaling (MDS) is the act of embedding proximity information about a set of $n$ objects in $d$-dimensional Euclidean space. As originally conceived by the psychometric community, MDS was concerned with embedding a fixed set of proximities associated with a fixed set of objects. Modern concerns, e.g., that arise in developing asymptotic theories for statistical inference on random graphs, more typically involve studying the limiting behavior of a sequence of proximities associated with an increasing set of objects. Here we are concerned with embedding dissimilarities by minimizing Kruskal's (1964) raw stress criterion. Standard results from the theory of point-to-set maps can be used to establish that, if $n$ is fixed and a sequence of dissimilarity matrices converges, then the limit of their embedded structures is the embedded structure of the limiting dissimilarity matrix. But what if $n$ increases? It then becomes necessary to reformulate MDS so that the entire sequence of embedding problems can be viewed as a sequence of optimization problems in a fixed space. We present such a reformulation, *continuous MDS*. Within the continuous MDS framework, we derive two $L^p$ consistency results, one for embedding without constraints on the configuration, the other for embedding subject to *approximate Lipschitz constraints* that encourage smoothness of the embedding function. The latter approach, *Approximate Lipschitz Embedding* (ALE), is new. Finally, we demonstrate that embedded structures produced by ALE can be interpolated in a way that results in uniform convergence.
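To make the embedding criterion concrete, here is a minimal sketch (my own illustration, not the authors' implementation; the solver choice and function names are assumptions) of MDS by minimizing Kruskal's raw stress $\sigma(X) = \sum_{i<j} (\|x_i - x_j\| - \delta_{ij})^2$ over configurations $X \in \mathbb{R}^{n \times d}$:

```python
# Minimal raw-stress MDS sketch (illustrative only, not the authors' code).
# Embeds an n x n dissimilarity matrix into R^d by minimizing Kruskal's
# (1964) raw stress: sigma(X) = sum_{i<j} (||x_i - x_j|| - delta_ij)^2.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

def raw_stress(x_flat, delta, n, d):
    X = x_flat.reshape(n, d)
    D = squareform(pdist(X))            # pairwise Euclidean distances
    iu = np.triu_indices(n, k=1)        # count each pair once
    return np.sum((D[iu] - delta[iu]) ** 2)

def embed(delta, d=2, seed=0):
    n = delta.shape[0]
    rng = np.random.default_rng(seed)
    x0 = rng.normal(scale=0.1, size=n * d)  # random initial configuration
    res = minimize(raw_stress, x0, args=(delta, n, d), method="L-BFGS-B")
    return res.x.reshape(n, d)

# Toy usage: dissimilarities computed from 8 points on a circle are
# recovered up to rotation, translation, and reflection.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
P = np.c_[np.cos(theta), np.sin(theta)]
X_hat = embed(squareform(pdist(P)), d=2)
```

The ALE variant described in the abstract additionally constrains the embedding function to be approximately Lipschitz; one way to prototype that (an assumption on my part, not the paper's formulation) would be to add a penalty for pairs with $\|x_i - x_j\| > L\,\delta_{ij} + \epsilon$.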
Related papers
- $\mathbb{R}^{2k}$ is Theoretically Large Enough for Embedding-based Top-$k$ Retrieval [99.8640829555514]
This paper studies the minimal dimension required to embed subset memberships ($m$ elements and $\binom{m}{k}$ subsets of at most $k$ elements) into vector spaces, denoted as the Minimal Embeddable Dimension (MED). Tight bounds on the MED are derived theoretically and supported empirically for various notions of "distance" or "similarity," including the $\ell$ metric, inner product, and cosine similarity.
arXiv Detail & Related papers (2026-01-28T18:45:43Z) - A Discrepancy-Based Perspective on Dataset Condensation [12.013494108473422]
We present a unified framework that encompasses existing dataset condensation (DC) methods and extends the task-specific notion of DC to a more general and formal definition. Our framework broadens the objective of DC beyond generalization, accommodating additional objectives such as robustness, privacy, and other desirable properties.
arXiv Detail & Related papers (2025-09-12T16:00:49Z) - Demystifying Linear MDPs and Novel Dynamics Aggregation Framework [8.087699764574788]
In linear MDPs, the feature dimension $d$ is lower bounded by $S/U$ in order to aptly represent transition probabilities.
We propose a novel structural aggregation framework based on dynamics, named "dynamics aggregation".
Our proposed algorithm exhibits statistical efficiency, achieving a regret of $\tilde{O}(d_\psi^{3/2} H^{3/2} \sqrt{T})$, where $d_\psi$ represents the feature dimension of aggregated subMDPs.
arXiv Detail & Related papers (2024-10-31T16:21:41Z) - Joint Learning of Linear Dynamical Systems under Smoothness Constraints [5.2395896768723045]
We consider the problem of joint learning of multiple linear dynamical systems.
In particular, we show conditions under which the mean-squared error (MSE) converges to zero as $m$ increases.
arXiv Detail & Related papers (2024-06-03T08:29:42Z) - Convergence and Complexity Guarantee for Inexact First-order Riemannian Optimization Algorithms [18.425648833592312]
We show that tBMM converges to an $\epsilon$-stationary point within $O(\epsilon^{-2})$ iterations.
Under a mild assumption, the results still hold when the subproblem is solved inexactly in each iteration, provided the total optimality gap is bounded.
arXiv Detail & Related papers (2024-05-05T22:53:14Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - Improved Algorithm for Adversarial Linear Mixture MDPs with Bandit
Feedback and Unknown Transition [71.33787410075577]
We study reinforcement learning with linear function approximation, unknown transition, and adversarial losses.
We propose a new algorithm that attains an $\widetilde{O}(d\sqrt{HS^3K} + \sqrt{HSAK})$ regret with high probability.
arXiv Detail & Related papers (2024-03-07T15:03:50Z) - Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the cost function or an upper bound on the expected length of the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - Efficient Frameworks for Generalized Low-Rank Matrix Bandit Problems [61.85150061213987]
We study the generalized low-rank matrix bandit problem, proposed by Lu et al. (2021) under the Generalized Linear Model (GLM) framework.
To overcome the computational infeasibility and theoretical restrictions of existing algorithms, we first propose the G-ESTT framework.
We show that G-ESTT can achieve a regret bound of $\tilde{O}(\sqrt{(d_1+d_2)^{3/2}Mr^{3/2}T})$, while G-ESTS achieves a $\tilde{O}(\cdot)$ regret bound as well.
arXiv Detail & Related papers (2024-01-14T14:14:19Z) - A Block Coordinate Descent Method for Nonsmooth Composite Optimization under Orthogonality Constraints [7.9047096855236125]
We show that OBCD offers stronger optimality guarantees than standard critical points. We also demonstrate the non-ergodic convergence rate of OBCD.
arXiv Detail & Related papers (2023-04-07T13:44:59Z) - Pseudonorm Approachability and Applications to Regret Minimization [73.54127663296906]
We convert high-dimensional $\ell_\infty$-approachability problems to low-dimensional pseudonorm approachability problems.
We develop an algorithmic theory of pseudonorm approachability, analogous to previous work on approachability for $\ell_2$ and other norms.
arXiv Detail & Related papers (2023-02-03T03:19:14Z) - Rank-1 Matrix Completion with Gradient Descent and Small Random
Initialization [15.127728811011245]
We show that the implicit regularization of GD plays a critical role in the analysis.
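As a toy illustration of this setting (my own sketch, assuming a symmetric rank-1 target $M = zz^\top$ observed on a random entry set; not the paper's exact model), gradient descent on the observed squared loss from a small random initialization recovers the factor:

```python
# Toy sketch of rank-1 matrix completion with gradient descent and small
# random initialization (illustrative only; not the paper's exact setup).
# Target: symmetric rank-1 matrix M = z z^T, observed on a random mask Omega.
import numpy as np

rng = np.random.default_rng(0)
n = 50
z = rng.normal(size=n)
M = np.outer(z, z)
Omega = rng.random((n, n)) < 0.3          # each entry observed w.p. 0.3

x = 1e-3 * rng.normal(size=n)             # small random initialization
lr = 0.01
for _ in range(2000):
    R = Omega * (np.outer(x, x) - M)      # residual on observed entries
    grad = 2 * (R + R.T) @ x              # gradient of sum_Omega (x_i x_j - M_ij)^2
    x -= lr * grad

rel_err = np.linalg.norm(np.outer(x, x) - M) / np.linalg.norm(M)
```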
arXiv Detail & Related papers (2022-12-19T12:05:37Z) - Discrete Bulk Reconstruction [4.467248776406005]
We show that the AdS/CFT correspondence can be exponentially complex if one wants to reconstruct regions such as the interiors of black holes.
We also make initial progress on the case of multiple 1D boundaries, where the boundaries could be connected via wormholes.
arXiv Detail & Related papers (2022-10-27T16:39:29Z) - Provably Efficient Model-Free Constrained RL with Linear Function
Approximation [4.060731229044571]
We develop the first model-free, simulator-free algorithm that achieves a sublinear regret and a sublinear constraint violation even in large-scale systems.
Our results are achieved via novel adaptations of the standard LSVI-UCB algorithms.
arXiv Detail & Related papers (2022-06-23T17:54:31Z) - Minimax Optimal Quantization of Linear Models: Information-Theoretic
Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper bounds can be extended to two-layer ReLU neural networks.
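For intuition about the setup (a hedged toy example of my own, not the paper's minimax-optimal scheme): fit a linear model from noisy measurements, then uniformly quantize its weights to $B$ bits and observe how the empirical risk degrades:

```python
# Toy illustration of quantizing a linear model learned from measurements
# (my own example; the paper's minimax-optimal scheme is more refined).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # learned linear model

def quantize(w, bits):
    """Uniform scalar quantization of the weights to 2**bits levels."""
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (2 ** bits - 1)
    return lo + step * np.round((w - lo) / step)

for b in (2, 4, 8):
    w_q = quantize(w_hat, b)
    risk = np.mean((X @ w_q - y) ** 2)          # empirical risk after quantization
    print(b, risk)
```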
arXiv Detail & Related papers (2022-02-23T02:39:04Z) - Minimax Optimization: The Case of Convex-Submodular [50.03984152441271]
Minimax problems extend beyond the continuous domain to mixed continuous-discrete domains or even fully discrete domains.
We introduce the class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable.
Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization.
arXiv Detail & Related papers (2021-11-01T21:06:35Z) - Recurrently Predicting Hypergraphs [30.092688729343678]
A problem arises from the number of possible multi-way relationships, or hyperedges, scaling in $\mathcal{O}(2^n)$ for a set of $n$ elements.
We propose a recurrent hypergraph neural network that predicts the incidence matrix by iteratively refining an initial guess of the solution.
arXiv Detail & Related papers (2021-06-26T01:12:41Z) - Problem Dependent View on Structured Thresholding Bandit Problems [73.70176003598449]
We investigate the problem-dependent regime of the Thresholding Bandit Problem (TBP).
The objective of the learner is to output, at the end of a sequential game, the set of arms whose means are above a given threshold.
We provide upper and lower bounds for the probability of error in both the concave and monotone settings.
arXiv Detail & Related papers (2021-06-18T15:01:01Z) - Tight High Probability Bounds for Linear Stochastic Approximation with
Fixed Stepsize [41.38162218102825]
This paper provides a non-asymptotic analysis of linear stochastic approximation (LSA) algorithms with fixed stepsize.
We derive high probability bounds on the performance of LSA under weaker conditions on the sequence $\{(\mathbf{A}_n, \mathbf{b}_n) : n \in \mathbb{N}^*\}$.
We show that our conclusions cannot be improved without additional assumptions on the sequence $\{\mathbf{A}_n : n \in \mathbb{N}^*\}$.
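The recursion under study is $\theta_{n+1} = \theta_n + \alpha(\mathbf{b}_n - \mathbf{A}_n \theta_n)$ with a fixed stepsize $\alpha$. A minimal simulation (my own toy example, with i.i.d. Gaussian perturbations as an assumed noise model) shows the iterates settling into a neighborhood of $\bar\theta = \bar{\mathbf{A}}^{-1}\bar{\mathbf{b}}$:

```python
# Minimal simulation of linear stochastic approximation (LSA) with a fixed
# stepsize: theta_{n+1} = theta_n + alpha * (b_n - A_n @ theta_n),
# where (A_n, b_n) are i.i.d. noisy versions of (A_bar, b_bar).
# Toy illustration only, not the paper's analysis.
import numpy as np

rng = np.random.default_rng(0)
d = 3
A_bar = np.eye(d) + 0.1 * rng.normal(size=(d, d))
A_bar = A_bar @ A_bar.T + 0.5 * np.eye(d)        # make A_bar well conditioned
b_bar = rng.normal(size=d)
theta_star = np.linalg.solve(A_bar, b_bar)        # target fixed point

alpha = 0.05                                      # fixed stepsize
theta = np.zeros(d)
for _ in range(10_000):
    A_n = A_bar + 0.1 * rng.normal(size=(d, d))   # noisy observations
    b_n = b_bar + 0.1 * rng.normal(size=d)
    theta = theta + alpha * (b_n - A_n @ theta)

err = np.linalg.norm(theta - theta_star)          # O(alpha) bias plus noise
```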
arXiv Detail & Related papers (2021-06-02T16:10:37Z) - Proximal Gradient Descent-Ascent: Variable Convergence under KŁ
Geometry [49.65455534654459]
The gradient descent-ascent (GDA) algorithm has been widely applied to solve minimax optimization problems.
This paper fills such a gap by studying the convergence of proximal GDA under the Kurdyka-Łojasiewicz (KŁ) geometry.
arXiv Detail & Related papers (2021-02-09T05:35:53Z) - A Simple Convergence Proof of Adam and Adagrad [74.24716715922759]
We give a simple proof of convergence for both Adam and Adagrad, with a rate of $O(d\ln(N)/\sqrt{N})$.
Adam converges at the same $O(d\ln(N)/\sqrt{N})$ rate when used with the default parameters.
arXiv Detail & Related papers (2020-03-05T01:56:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.