Intrinsic-Energy Joint Embedding Predictive Architectures Induce Quasimetric Spaces
- URL: http://arxiv.org/abs/2602.12245v1
- Date: Thu, 12 Feb 2026 18:30:27 GMT
- Title: Intrinsic-Energy Joint Embedding Predictive Architectures Induce Quasimetric Spaces
- Authors: Anthony Kobanda, Waris Radji
- Abstract summary: Joint-Embedding Predictive Architectures (JEPAs) aim to learn representations by predicting target embeddings from context embeddings. Quasimetric Reinforcement Learning (QRL) studies goal-conditioned control through directed distance values (cost-to-go) that support reaching goals under asymmetric dynamics.
- Score: 0.764671395172401
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Joint-Embedding Predictive Architectures (JEPAs) aim to learn representations by predicting target embeddings from context embeddings, inducing a scalar compatibility energy in a latent space. In contrast, Quasimetric Reinforcement Learning (QRL) studies goal-conditioned control through directed distance values (cost-to-go) that support reaching goals under asymmetric dynamics. In this short article, we connect these viewpoints by restricting attention to a principled class of JEPA energy functions: intrinsic (least-action) energies, defined as infima of accumulated local effort over admissible trajectories between two states. Under mild closure and additivity assumptions, any intrinsic energy is a quasimetric. In goal-reaching control, optimal cost-to-go functions admit exactly this intrinsic form; conversely, JEPAs trained to model intrinsic energies lie in the quasimetric value class targeted by QRL. Moreover, we observe why symmetric finite energies are structurally mismatched with one-way reachability, motivating asymmetric (quasimetric) energies when directionality matters.
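As a minimal illustration of the abstract's central claim (a toy sketch, not from the paper): on a directed weighted graph, the intrinsic energy E(x, y) is the infimum of accumulated edge cost over paths from x to y, and any such least-action quantity satisfies the quasimetric axioms (E(x, x) = 0 and the triangle inequality) without requiring symmetry. The graph and cost values below are invented for illustration.

```python
import heapq

def intrinsic_energy(edges, source):
    """Least-action energy E(source, .): infimum of accumulated edge
    cost over directed paths, computed with Dijkstra's algorithm."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in edges.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist  # unreachable targets are absent, i.e. E = +inf

# Toy asymmetric dynamics: a -> b is cheap, b -> a is expensive,
# and c is reachable only one way (E(c, .) is infinite elsewhere).
edges = {
    "a": [("b", 1.0)],
    "b": [("a", 5.0), ("c", 2.0)],
    "c": [],
}
E = {x: intrinsic_energy(edges, x) for x in edges}

# Quasimetric axioms hold: E(x, x) = 0 and
# E(x, z) <= E(x, y) + E(y, z); symmetry is NOT required.
for x in E:
    assert E[x][x] == 0.0
    for y in E[x]:
        for z in E[y]:
            assert E[x][z] <= E[x][y] + E[y][z] + 1e-12
assert E["a"]["b"] != E["b"]["a"]  # asymmetric: 1.0 vs 5.0
print(E["a"])  # {'a': 0.0, 'b': 1.0, 'c': 3.0}
```

The one-way-reachable node c also shows why a symmetric finite energy cannot model this situation: E(b, c) is finite while E(c, b) is infinite, exactly the mismatch the abstract points to.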
Related papers
- Thermodynamic Limits of Physical Intelligence [0.3580891736370874]
Modern AI systems achieve remarkable capabilities at the cost of substantial energy consumption.
We propose two bits-per-joule metrics under explicit accounting conventions to connect intelligence to physical efficiency.
We show how a Landauer-scale closed-cycle benchmark for epiplexity acquisition follows as a corollary of a thermodynamic-learning inequality.
arXiv Detail & Related papers (2026-02-05T09:12:43Z)
- Scalable Repeater Architecture for Long-Range Quantum Energy Teleportation in Gapped Systems [0.0]
We propose and analyze a hierarchical quantum repeater architecture adapted for energy teleportation.
By orchestrating heralded entanglement generation, iterative entanglement purification, and nested entanglement swapping, our protocol effectively counteracts the fidelity degradation inherent in noisy quantum channels.
This proves, for the first time, the physical permissibility and computational tractability of activating vacuum energy at arbitrary distances.
arXiv Detail & Related papers (2026-01-26T10:10:25Z)
- Goal Reaching with Eikonal-Constrained Hierarchical Quasimetric Reinforcement Learning [16.84451472788859]
Eikonal-Constrained Quasimetric RL (Eik-QRL) is a continuous-time reformulation of Quasimetric RL based on the Eikonal Partial Differential Equation (PDE).
Eik-HiQRL achieves state-of-the-art performance in offline goal-conditioned navigation and yields consistent gains over QRL in manipulation tasks, matching temporal-difference methods.
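For context, the Eikonal PDE referenced here, in its standard textbook form, constrains the gradient norm of a distance-like cost-to-go (a general gloss, not this paper's exact formulation):

```latex
\lVert \nabla_{x}\, d(x, g) \rVert = \frac{1}{c(x)},
```

where $d(x, g)$ is the cost-to-go from state $x$ to goal $g$ and $c(x)$ is a local speed; its viscosity solutions are geodesic distances, which is what links it to quasimetric value functions.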
arXiv Detail & Related papers (2025-12-12T21:37:11Z)
- Hierarchy of Qubit Dynamical Maps in the Presence of Symmetry and Coherence [0.0]
We prove that U(1) conservation constrains quantum thermodynamic operations through charge conservation of Pauli strings.
Our no-go theorem shows that U(1) dynamics cannot generate local coherence from diagonal thermal states.
We demonstrate measurable thermodynamic advantages in work extraction and state distinguishability.
arXiv Detail & Related papers (2025-09-05T04:04:34Z)
- ERIS: An Energy-Guided Feature Disentanglement Framework for Out-of-Distribution Time Series Classification [51.07970070817353]
An ideal time series classification (TSC) should be able to capture invariant representations.
Current methods are largely unguided, lacking the semantic direction required to isolate truly universal features.
We propose an end-to-end Energy-Regularized Information for Shift-Robustness framework to enable guided and reliable feature disentanglement.
arXiv Detail & Related papers (2025-08-19T12:13:41Z)
- Recurrent Self-Attention Dynamics: An Energy-Agnostic Perspective from Jacobians [13.435505794863518]
This work aims to relax energy constraints and provide an energy-agnostic characterization of inference dynamics.
It reveals that the normalization layer plays an essential role in suppressing the Lipschitzness of SA and the Jacobian's complex eigenvalues.
The Jacobian perspective also enables us to develop regularization methods for training and a pseudo-energy for monitoring inference dynamics.
arXiv Detail & Related papers (2025-05-26T03:24:59Z)
- Vision-Language Navigation with Energy-Based Policy [66.04379819772764]
Vision-language navigation (VLN) requires an agent to execute actions following human instructions.
We propose an Energy-based Navigation Policy (ENP) to model the joint state-action distribution.
ENP achieves promising performances on R2R, REVERIE, RxR, and R2R-CE.
arXiv Detail & Related papers (2024-10-18T08:01:36Z)
- Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning [73.80728148866906]
Quasimetric Reinforcement Learning (QRL) is a new RL method that utilizes quasimetric models to learn optimal value functions.
On offline and online goal-reaching benchmarks, QRL also demonstrates improved sample efficiency and performance.
arXiv Detail & Related papers (2023-04-03T17:59:58Z)
- Energy Transformer [64.22957136952725]
Our work combines aspects of three promising paradigms in machine learning, namely, attention mechanism, energy-based models, and associative memory.
We propose a novel architecture, called the Energy Transformer (or ET for short), that uses a sequence of attention layers that are purposely designed to minimize a specifically engineered energy function.
arXiv Detail & Related papers (2023-02-14T18:51:22Z)
- Goal-Conditioned Q-Learning as Knowledge Distillation [136.79415677706612]
We explore a connection between off-policy reinforcement learning in goal-conditioned settings and knowledge distillation.
We empirically show that this can improve the performance of goal-conditioned off-policy reinforcement learning when the space of goals is high-dimensional.
We also show that this technique can be adapted to allow for efficient learning in the case of multiple simultaneous sparse goals.
arXiv Detail & Related papers (2022-08-28T22:01:10Z)
- Targeted free energy estimation via learned mappings [66.20146549150475]
Free energy perturbation (FEP) was proposed by Zwanzig more than six decades ago as a method to estimate free energy differences.
FEP suffers from a severe limitation: the requirement of sufficient overlap between distributions.
One strategy to mitigate this problem, called Targeted Free Energy Perturbation, uses a high-dimensional mapping in configuration space to increase overlap.
arXiv Detail & Related papers (2020-02-12T11:10:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.