Bridging Arbitrary and Tree Metrics via Differentiable Gromov Hyperbolicity
- URL: http://arxiv.org/abs/2505.21073v2
- Date: Wed, 28 May 2025 09:07:43 GMT
- Title: Bridging Arbitrary and Tree Metrics via Differentiable Gromov Hyperbolicity
- Authors: Pierre Houedry, Nicolas Courty, Florestan Martin-Baillon, Laetitia Chapel, Titouan Vayer
- Abstract summary: Given an arbitrary metric space, its deviation from a tree metric can be quantified by Gromov's $\delta$-hyperbolicity. We introduce a novel differentiable optimization framework, coined DeltaZero, that solves this problem. Our method consistently achieves state-of-the-art distortion.
- Score: 13.751664797063208
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trees and the associated shortest-path tree metrics provide a powerful framework for representing hierarchical and combinatorial structures in data. Given an arbitrary metric space, its deviation from a tree metric can be quantified by Gromov's $\delta$-hyperbolicity. Nonetheless, designing algorithms that bridge an arbitrary metric to its closest tree metric remains an active subject of interest, as most common approaches are either heuristic and lack guarantees, or perform only moderately well. In this work, we introduce a novel differentiable optimization framework, coined DeltaZero, that solves this problem. Our method leverages a smooth surrogate for Gromov's $\delta$-hyperbolicity which enables gradient-based optimization with tractable complexity. The corresponding optimization procedure is derived from a problem with better worst-case guarantees than existing bounds, and is justified statistically. Experiments on synthetic and real-world datasets demonstrate that our method consistently achieves state-of-the-art distortion.
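The abstract does not spell out the surrogate, but the exact quantity it relaxes is standard: Gromov's four-point condition. Below is a minimal sketch, assuming the four-point definition of $\delta$ and a log-sum-exp relaxation of the max/min operators; `soft_delta`, the temperature `tau`, and the second-largest-of-three identity are illustrative assumptions, not DeltaZero's actual construction.

```python
import numpy as np
from itertools import combinations
from scipy.special import logsumexp

def pair_sums(D, q):
    # the three pairwise distance sums of the four-point condition
    x, y, z, w = q
    return np.array([D[x, y] + D[z, w],
                     D[x, z] + D[y, w],
                     D[x, w] + D[y, z]])

def delta_hyperbolicity(D):
    """Exact Gromov delta via the four-point condition (O(n^4); small n only):
    the two largest of the three pair sums differ by at most 2*delta."""
    sums = [np.sort(pair_sums(D, q)) for q in combinations(range(len(D)), 4)]
    return max((s[2] - s[1]) / 2.0 for s in sums)

def soft_delta(D, tau=0.05):
    """Hypothetical smooth surrogate: replace every max/min by log-sum-exp at
    temperature tau. For three values, second_largest = sum - max - min, so
    gap = largest - second_largest = 2*max + min - sum."""
    soft_gaps = []
    for q in combinations(range(len(D)), 4):
        s = pair_sums(D, q)
        smax = tau * logsumexp(s / tau)        # smooth max of the three sums
        smin = -tau * logsumexp(-s / tau)      # smooth min of the three sums
        soft_gaps.append((2.0 * smax + smin - s.sum()) / 2.0)
    return tau * logsumexp(np.array(soft_gaps) / tau)  # smooth max over quadruples
```

As `tau` tends to zero, `soft_delta` approaches the exact `delta_hyperbolicity`; since it is differentiable in the entries of `D`, it can serve as a loss for gradient-based adjustment of the metric.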
Related papers
- Fast unsupervised ground metric learning with tree-Wasserstein distance [14.235762519615175]
Unsupervised ground metric learning approaches have been introduced. One promising option employs Wasserstein singular vectors (WSVs), which emerge when computing optimal transport distances between features and samples simultaneously. We propose to augment the WSV method by embedding samples and features on trees, on which we compute the tree-Wasserstein distance (TWD).
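The appeal of TWD is its closed form: no linear program is needed once the supports live on a tree. Below is a minimal sketch of that closed form, assuming a parent-array tree encoding with children numbered after their parents; this representation is an illustrative choice, not the paper's WSV learning loop.

```python
import numpy as np

def tree_wasserstein(parent, weight, mu, nu):
    """Closed-form 1-Wasserstein distance between distributions mu, nu on the
    nodes of a tree: sum over edges of edge_length * |mu-mass below the edge
    minus nu-mass below it|. parent[i] is the parent of node i (root: -1),
    weight[i] the length of edge (i, parent[i]); requires parent[i] < i."""
    sub_mu = np.array(mu, dtype=float)
    sub_nu = np.array(nu, dtype=float)
    for i in range(len(parent) - 1, 0, -1):
        # children have larger indices, so sub_*[i] already holds the full
        # subtree mass when it is pushed onto the parent
        sub_mu[parent[i]] += sub_mu[i]
        sub_nu[parent[i]] += sub_nu[i]
    return sum(weight[i] * abs(sub_mu[i] - sub_nu[i])
               for i in range(1, len(parent)))

# path 0 - 1 - 2 with unit edges: moving mass from node 0 to node 2 costs 2
assert tree_wasserstein([-1, 0, 1], [0.0, 1.0, 1.0],
                        [1, 0, 0], [0, 0, 1]) == 2.0
```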
arXiv Detail & Related papers (2024-11-11T23:21:01Z)
- Free Lunch in the Forest: Functionally-Identical Pruning of Boosted Tree Ensembles [45.962492329047215]
We introduce a method to prune a tree ensemble into a reduced version that is "functionally identical" to the original model. We formalize the problem of functionally identical pruning on ensembles, introduce an exact optimization model, and provide a fast yet highly effective method to prune large ensembles.
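"Functionally identical" means the pruned ensemble agrees with the original on every possible input, not merely on a test set. A toy illustration follows for additive ensembles of axis-aligned decision stumps, where the property can be verified exactly because predictions are piecewise constant on the cells induced by the split thresholds; the stump encoding and the `functionally_identical` helper are hypothetical, not the paper's optimization model.

```python
import itertools
import numpy as np

def ensemble_predict(stumps, x):
    """Additive ensemble of axis-aligned stumps (feature, threshold, left, right)."""
    return sum(l if x[f] <= t else r for f, t, l, r in stumps)

def functionally_identical(e1, e2, thresholds_per_feature):
    """Exact agreement check: predictions are constant on each axis-aligned
    cell cut out by the thresholds, so testing one point per cell (plus the
    threshold values themselves, since splits use <=) covers every input."""
    grids = []
    for ts in thresholds_per_feature:
        ts = sorted(set(ts))
        interiors = ([ts[0] - 1.0]
                     + [(a + b) / 2.0 for a, b in zip(ts, ts[1:])]
                     + [ts[-1] + 1.0])
        grids.append(interiors + ts)
    return all(np.isclose(ensemble_predict(e1, x), ensemble_predict(e2, x))
               for x in itertools.product(*grids))

# two stumps sharing a feature and threshold collapse into one stump
e1 = [(0, 0.5, 1.0, -1.0), (0, 0.5, 2.0, 0.5)]
e2 = [(0, 0.5, 3.0, -0.5)]
assert functionally_identical(e1, e2, [[0.5]])
```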
arXiv Detail & Related papers (2024-08-28T23:15:46Z)
- Optimal Transport for Measures with Noisy Tree Metric [29.950797721275574]
We study the optimal transport problem for probability measures supported on a tree metric space.
In general, the robust variant of this problem is hard to compute, even for measures supported in one-dimensional space.
We show that the robust OT satisfies the metric property and is negative definite.
arXiv Detail & Related papers (2023-10-20T16:56:08Z)
- Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions, respectively.
arXiv Detail & Related papers (2023-07-17T09:55:35Z)
- Data-driven abstractions via adaptive refinements and a Kantorovich metric [extended version] [56.94699829208978]
We introduce an adaptive refinement procedure for smart and scalable abstraction of dynamical systems.
In order to learn the optimal structure, we define a Kantorovich-inspired metric between Markov chains.
We show that our method yields a much better computational complexity than using classical linear programming techniques.
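The Kantorovich (1-Wasserstein) metric underlying such comparisons has a standard linear-programming form for discrete distributions, for instance the transition rows of two Markov chains under a ground cost between states. A minimal sketch follows; `kantorovich` and its encoding of the marginal constraints are generic illustrations, not the paper's Kantorovich-inspired metric or its refinement loop.

```python
import numpy as np
from scipy.optimize import linprog

def kantorovich(p, q, M):
    """1-Wasserstein distance between discrete distributions p (n,) and q (m,)
    under ground cost M (n, m): minimize <M, P> over couplings P >= 0 whose
    rows sum to p and whose columns sum to q."""
    n, m = M.shape
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0   # row-marginal constraints
    for j in range(m):
        A_eq[n + j, j::m] = 1.0            # column-marginal constraints
    res = linprog(M.ravel(), A_eq=A_eq, b_eq=np.concatenate([p, q]),
                  bounds=(0, None), method="highs")
    return res.fun

# transition rows of two 3-state chains, unit cost between distinct states
M = 1.0 - np.eye(3)
print(kantorovich(np.array([1.0, 0.0, 0.0]),
                  np.array([0.5, 0.5, 0.0]), M))  # 0.5: half the mass moves
```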
arXiv Detail & Related papers (2023-03-30T11:26:40Z)
- Pseudonorm Approachability and Applications to Regret Minimization [73.54127663296906]
We convert high-dimensional $\ell_\infty$-approachability problems to low-dimensional pseudonorm approachability problems.
We develop an algorithmic theory of pseudonorm approachability, analogous to previous work on approachability for $\ell_2$ and other norms.
arXiv Detail & Related papers (2023-02-03T03:19:14Z)
- Learning Ultrametric Trees for Optimal Transport Regression [10.524752369156337]
We find an optimal tree structure for a given discrete metric space.
One of our key ideas is to cast the problem in ultrametric spaces.
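Ultrametrics strengthen the triangle inequality to $d(x,z) \le \max(d(x,y), d(y,z))$ and are exactly the tree metrics realizable with all points equidistant from a root, which is what makes them a convenient search space. A minimal checker of that defining property is sketched below; it only illustrates the definition, not the paper's optimization over tree structures.

```python
import numpy as np

def is_ultrametric(D, tol=1e-9):
    """Check the strong (ultrametric) triangle inequality
    d(x, z) <= max(d(x, y), d(y, z)) over all triples, in O(n^3)."""
    n = D.shape[0]
    for x in range(n):
        for y in range(n):
            for z in range(n):
                if D[x, z] > max(D[x, y], D[y, z]) + tol:
                    return False
    return True

# a three-point ultrametric: two nearby leaves and one distant one
D = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 4.0],
              [4.0, 4.0, 0.0]])
assert is_ultrametric(D)
```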
arXiv Detail & Related papers (2022-10-21T22:54:42Z)
- Asymmetric Scalable Cross-modal Hashing [51.309905690367835]
Cross-modal hashing is a successful method for large-scale multimedia retrieval.
We propose a novel Asymmetric Scalable Cross-Modal Hashing (ASCMH) to address these issues.
Our ASCMH outperforms the state-of-the-art cross-modal hashing methods in terms of accuracy and efficiency.
arXiv Detail & Related papers (2022-07-26T04:38:47Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piecewise-constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- HyperAid: Denoising in hyperbolic spaces for tree-fitting and hierarchical clustering [36.738414547278154]
We propose a new approach to tree-metric denoising (HyperAid) in hyperbolic spaces.
It transforms original data into data that is more "tree-like", when evaluated in terms of Gromov's $\delta$-hyperbolicity.
We integrate HyperAid with schemes for enforcing nonnegative edge-weights.
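Nonnegative edge weights matter because a tree with a negative edge no longer induces a metric. Two common ways to enforce the constraint during gradient-based fitting are sketched below, projection (clipping after each step) and a softplus reparametrization; both are generic illustrations assumed here, not necessarily HyperAid's scheme.

```python
import numpy as np

def project_nonnegative(w):
    """Projected-gradient style: after each update, clip edge weights at zero."""
    return np.maximum(w, 0.0)

def softplus(theta):
    """Reparametrization style: optimize unconstrained theta and use
    w = log(1 + exp(theta)) > 0, which keeps gradients smooth everywhere."""
    return np.logaddexp(0.0, theta)
```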
arXiv Detail & Related papers (2022-05-19T17:33:16Z)
- Rethinking Learnable Tree Filter for Generic Feature Transform [71.77463476808585]
Learnable Tree Filter presents a remarkable approach to model structure-preserving relations for semantic segmentation.
To relax the geometric constraint, we give the analysis by reformulating it as a Markov Random Field and introduce a learnable unary term.
For semantic segmentation, we achieve leading performance (82.1% mIoU) on the Cityscapes benchmark without bells-and-whistles.
arXiv Detail & Related papers (2020-12-07T07:16:47Z)