Hereditary Geometric Meta-RL: Nonlocal Generalization via Task Symmetries
- URL: http://arxiv.org/abs/2603.00396v1
- Date: Sat, 28 Feb 2026 00:57:39 GMT
- Title: Hereditary Geometric Meta-RL: Nonlocal Generalization via Task Symmetries
- Authors: Paul Nitschke, Shahriar Talebi,
- Abstract summary: We develop a geometric perspective that endows the task space with a "hereditary geometry" induced by the inherent symmetries of the underlying system. We show that when the task space is inherited from the symmetries of the underlying system, it embeds into a subgroup of those symmetries whose actions are linearizable, connected, and compact, properties that enable efficient learning and inference at test time.
- Score: 3.2228025627337864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-Reinforcement Learning (Meta-RL) commonly generalizes via smoothness in the task encoding. While this enables local generalization around each training task, it requires dense coverage of the task space and leaves richer task-space structure untapped. In response, we develop a geometric perspective that endows the task space with a "hereditary geometry" induced by the inherent symmetries of the underlying system. Concretely, the agent reuses a policy learned at train time by transforming states and actions through actions of a Lie group. This converts Meta-RL into symmetry discovery rather than smooth extrapolation, enabling the agent to generalize to wider regions of the task space. We show that when the task space is inherited from the symmetries of the underlying system, it embeds into a subgroup of those symmetries whose actions are linearizable, connected, and compact, properties that enable efficient learning and inference at test time. To learn these structures, we develop a differential symmetry discovery method. This collapses functional invariance constraints and thereby improves numerical stability and sample efficiency over functional approaches. Empirically, on a two-dimensional navigation task, our method efficiently recovers the ground-truth symmetry and generalizes across the entire task space, while a common baseline generalizes only near training tasks.
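The policy-reuse idea described in the abstract, transforming states and actions through a Lie group action so that one trained policy solves a whole orbit of tasks, can be sketched in a toy form. This is an illustrative example only, not the paper's implementation: the goal position, `base_policy`, and the SO(2) parameterization are assumptions chosen to match the two-dimensional navigation setting the abstract mentions.

```python
import numpy as np

GOAL = np.array([1.0, 0.0])  # goal of the single training task (assumed)

def rotation(theta):
    """Lie group element g_theta in SO(2), as a 2x2 rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def base_policy(state):
    """Policy 'learned' at train time: a unit step toward GOAL."""
    direction = GOAL - state
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else np.zeros(2)

def transported_policy(state, theta):
    """Reuse base_policy for the task whose goal is g_theta @ GOAL:
    map the state back to the training frame, act, then map the
    action forward, i.e. a = g_theta @ pi(g_theta^{-1} @ state)."""
    g = rotation(theta)
    return g @ base_policy(g.T @ state)  # g.T is g^{-1} for rotations
```

For example, with `theta = pi/2` the rotated task's goal sits at `(0, 1)`, and `transported_policy` at the origin steps straight toward it without any retraining; discovering the right group from data, rather than assuming SO(2), is the symmetry-discovery problem the paper addresses.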
Related papers
- Intrinsic Task Symmetry Drives Generalization in Algorithmic Tasks [4.075225553131796]
We identify a consistent three-stage training dynamic underlying grokking. We show that generalization emerges during the symmetry acquisition phase. We introduce a symmetry-based diagnostic that anticipates the onset of generalization and propose strategies to accelerate it.
arXiv Detail & Related papers (2026-03-02T15:19:24Z)
- Neural Isometries: Taming Transformations for Equivariant ML [8.203292895010748]
We introduce Neural Isometries, an autoencoder framework which learns to map the observation space to a general-purpose latent space.
We show that a simple off-the-shelf equivariant network operating in the pre-trained latent space can achieve results on par with meticulously-engineered, handcrafted networks.
arXiv Detail & Related papers (2024-05-29T17:24:25Z)
- Symmetry Considerations for Learning Task Symmetric Robot Policies [12.856889419651521]
Symmetry is a fundamental aspect of many real-world robotic tasks.
Current deep reinforcement learning (DRL) approaches can seldom harness and exploit symmetry effectively.
arXiv Detail & Related papers (2024-03-07T09:41:11Z)
- Human-Timescale Adaptation in an Open-Ended Task Space [56.55530165036327]
We show that training an RL agent at scale leads to a general in-context learning algorithm that can adapt to open-ended novel embodied 3D problems as quickly as humans.
Our results lay the foundation for increasingly general and adaptive RL agents that perform well across ever-larger open-ended domains.
arXiv Detail & Related papers (2023-01-18T15:39:21Z)
- Continual Vision-based Reinforcement Learning with Group Symmetries [18.7526848176769]
We introduce COVERS, a Continual Vision-based Reinforcement Learning method that recognizes Group Symmetries.
Our results show that COVERS accurately assigns tasks to their respective groups and significantly outperforms existing methods in terms of generalization capability.
arXiv Detail & Related papers (2022-10-21T23:41:02Z)
- Inferring Versatile Behavior from Demonstrations by Matching Geometric Descriptors [72.62423312645953]
Humans intuitively solve tasks in versatile ways, varying their behavior in terms of trajectory-based planning and for individual steps.
Current Imitation Learning algorithms often only consider unimodal expert demonstrations and act in a state-action-based setting.
Instead, we combine a mixture of movement primitives with a distribution matching objective to learn versatile behaviors that match the expert's behavior and versatility.
arXiv Detail & Related papers (2022-10-17T16:42:59Z)
- Multi-Agent MDP Homomorphic Networks [100.74260120972863]
In cooperative multi-agent systems, complex symmetries arise between different configurations of the agents and their local observations.
Existing work on symmetries in single agent reinforcement learning can only be generalized to the fully centralized setting.
This paper introduces Multi-Agent MDP Homomorphic Networks, a class of networks that allows distributed execution using only local information.
arXiv Detail & Related papers (2021-10-09T07:46:25Z)
- Introducing Symmetries to Black Box Meta Reinforcement Learning [26.338797667571693]
In so-called black-box approaches, the policy and the learning algorithm are jointly represented by a single neural network.
We show that a successful meta RL approach that meta-learns an objective for backpropagation-based learning exhibits certain symmetries.
These symmetries can play an important role in meta-generalisation.
arXiv Detail & Related papers (2021-09-22T15:09:58Z)
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
- The Advantage of Conditional Meta-Learning for Biased Regularization and Fine-Tuning [50.21341246243422]
Biased regularization and fine-tuning are two recent meta-learning approaches.
We propose conditional meta-learning, inferring a conditioning function that maps a task's side information to a meta-parameter vector.
We then propose a convex meta-algorithm providing a comparable advantage also in practice.
arXiv Detail & Related papers (2020-08-25T07:32:16Z)
- Improving Generalization in Meta-learning via Task Augmentation [69.83677015207527]
We propose two task augmentation methods, including MetaMix and Channel Shuffle.
Both MetaMix and Channel Shuffle outperform state-of-the-art results by a large margin across many datasets.
arXiv Detail & Related papers (2020-07-26T01:50:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.