Minimizing Hyperbolic Embedding Distortion with LLM-Guided Hierarchy Restructuring
- URL: http://arxiv.org/abs/2511.20679v1
- Date: Sun, 16 Nov 2025 18:10:20 GMT
- Title: Minimizing Hyperbolic Embedding Distortion with LLM-Guided Hierarchy Restructuring
- Authors: Melika Ayoughi, Pascal Mettes, Paul Groth
- Abstract summary: The quality of hyperbolic embeddings is tightly coupled to the structure of the input hierarchy. This paper investigates whether Large Language Models (LLMs) have the ability to automatically restructure hierarchies to meet these criteria. Experiments on 16 diverse hierarchies show that LLM-restructured hierarchies consistently yield higher-quality hyperbolic embeddings.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Hyperbolic geometry is an effective geometry for embedding hierarchical data structures. Hyperbolic learning has therefore become increasingly prominent in machine learning applications where data is hierarchically organized or governed by hierarchical semantics, ranging from recommendation systems to computer vision. The quality of hyperbolic embeddings is tightly coupled to the structure of the input hierarchy, which is often derived from knowledge graphs or ontologies. Recent work has uncovered that for an optimal hyperbolic embedding, a high branching factor and single inheritance are key, while embedding algorithms are robust to imbalance and hierarchy size. To assist knowledge engineers in reorganizing hierarchical knowledge, this paper investigates whether Large Language Models (LLMs) have the ability to automatically restructure hierarchies to meet these criteria. We propose a prompt-based approach to transform existing hierarchies using LLMs, guided by known desiderata for hyperbolic embeddings. Experiments on 16 diverse hierarchies show that LLM-restructured hierarchies consistently yield higher-quality hyperbolic embeddings across several standard embedding quality metrics. Moreover, we show how LLM-guided hierarchy restructuring enables explainable reorganizations, providing justifications to knowledge engineers.
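The abstract's notion of embedding quality can be made concrete with a small sketch (not the paper's code; the function names and the average-distortion formula are illustrative assumptions): geodesic distance in the Poincaré disk, plus a simple distortion score comparing graph distances in a hierarchy against distances between its embedded points.

```python
# Minimal sketch: Poincaré-disk distance and an average-distortion score.
# Lower distortion means the embedding better preserves the hierarchy's
# graph distances -- one way embedding quality is commonly measured.
import math

def poincare_distance(u, v):
    """Geodesic distance between two points strictly inside the unit disk."""
    su = sum(x * x for x in u)            # squared norm of u
    sv = sum(x * x for x in v)            # squared norm of v
    diff = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.acosh(1.0 + 2.0 * diff / ((1.0 - su) * (1.0 - sv)))

def average_distortion(pairs):
    """Mean relative error between graph and embedded distances.

    `pairs` is a list of (graph_distance, point_u, point_v) triples,
    with graph_distance > 0.
    """
    total = 0.0
    for d_graph, u, v in pairs:
        d_emb = poincare_distance(u, v)
        total += abs(d_emb - d_graph) / d_graph
    return total / len(pairs)
```

Restructuring a hierarchy toward higher branching and single inheritance, as the paper proposes, is aimed at lowering exactly this kind of distortion score.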
Related papers
- Provable Learning of Random Hierarchy Models and Hierarchical Shallow-to-Deep Chaining [58.69016084278948]
We consider a hierarchical context-free grammar introduced by arXiv:2307.02129 and conjectured to separate deep and shallow networks. We prove that, under mild conditions, a deep convolutional network can be efficiently trained to learn this function class.
arXiv Detail & Related papers (2026-01-27T16:19:54Z) - A Semantics-Aware Hierarchical Self-Supervised Approach to Classification of Remote Sensing Images [12.282079123411947]
We present a novel Semantics-Aware Hierarchical Consensus (SAHC) method for learning hierarchical features and relationships. The SAHC method is evaluated on three benchmark datasets with different degrees of hierarchical complexity. Experimental results show both the effectiveness of the proposed approach in guiding network learning and the robustness of the hierarchical consensus for remote sensing image classification tasks.
arXiv Detail & Related papers (2025-10-06T15:30:39Z) - Hyperbolic Residual Quantization: Discrete Representations for Data with Latent Hierarchies [48.72319569157807]
Residual Quantization (RQ) is widely used to generate discrete, multitoken representations for hierarchical data. We propose Hyperbolic Residual Quantization (HRQ), which embeds data in a hyperbolic manifold. HRQ imparts an inductive bias that aligns naturally with hierarchical branching.
arXiv Detail & Related papers (2025-05-18T13:14:07Z) - Learning and Evaluating Hierarchical Feature Representations [3.770103075126785]
We propose a novel framework, Hierarchical Composition of Orthogonal Subspaces (Hier-COS). Hier-COS learns to map deep feature embeddings into a vector space that is, by design, consistent with the structure of a given taxonomy tree. We demonstrate that Hier-COS achieves state-of-the-art hierarchical performance across all the datasets while simultaneously beating top-1 accuracy in all but one case.
arXiv Detail & Related papers (2025-03-10T20:59:41Z) - TAG: A Decentralized Framework for Multi-Agent Hierarchical Reinforcement Learning [4.591755344464076]
We introduce the TAME Agent Framework (TAG), a framework for constructing fully decentralized hierarchical multi-agent systems. TAG standardizes information flow between levels while preserving loose coupling, allowing for seamless integration of diverse agent types. Our results show that decentralized hierarchical organization enhances both learning speed and final performance, positioning TAG as a promising direction for scalable multi-agent systems.
arXiv Detail & Related papers (2025-02-21T12:52:16Z) - HDT: Hierarchical Document Transformer [70.2271469410557]
HDT exploits document structure by introducing auxiliary anchor tokens and redesigning the attention mechanism into a sparse multi-level hierarchy.
We develop a novel sparse attention kernel that considers the hierarchical structure of documents.
arXiv Detail & Related papers (2024-07-11T09:28:04Z) - Reinforcement Learning with Options and State Representation [105.82346211739433]
This thesis aims to explore the reinforcement learning field and build on existing methods to produce improved ones.
It addresses such goals by decomposing learning tasks in a hierarchical fashion known as Hierarchical Reinforcement Learning.
arXiv Detail & Related papers (2024-03-16T08:30:55Z) - Online Continual Learning on Hierarchical Label Expansion [28.171890301966616]
We propose a novel multi-level hierarchical class incremental task configuration with an online learning constraint, called hierarchical label expansion (HLE).
Our configuration allows a network to first learn coarse-grained classes, with data labels continually expanding to more fine-grained classes in various hierarchy depths.
Our experiments demonstrate that our proposed method can effectively use hierarchy on our HLE setup to improve classification accuracy across all levels of hierarchies.
arXiv Detail & Related papers (2023-08-28T07:42:26Z) - Feudal Graph Reinforcement Learning [18.069747511100132]
Graph-based representations and message-passing modular policies constitute prominent approaches to tackling composable control problems in reinforcement learning (RL). We propose a novel methodology, named Feudal Graph Reinforcement Learning (FGRL), that addresses such challenges by relying on hierarchical RL and a pyramidal message-passing architecture. In particular, FGRL defines a hierarchy of policies where high-level commands are propagated from the top of the hierarchy down through a layered graph structure.
arXiv Detail & Related papers (2023-04-11T09:51:13Z) - Provable Hierarchy-Based Meta-Reinforcement Learning [50.17896588738377]
We analyze HRL in the meta-RL setting, where learner learns latent hierarchical structure during meta-training for use in a downstream task.
We provide "diversity conditions" which, together with a tractable optimism-based algorithm, guarantee sample-efficient recovery of this natural hierarchy.
Our bounds incorporate common notions in HRL literature such as temporal and state/action abstractions, suggesting that our setting and analysis capture important features of HRL in practice.
arXiv Detail & Related papers (2021-10-18T17:56:02Z) - Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched-prior-based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net. To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical-structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.