Can Euclidean Symmetry be Leveraged in Reinforcement Learning and
Planning?
- URL: http://arxiv.org/abs/2307.08226v1
- Date: Mon, 17 Jul 2023 04:01:48 GMT
- Title: Can Euclidean Symmetry be Leveraged in Reinforcement Learning and
Planning?
- Authors: Linfeng Zhao, Owen Howell, Jung Yeon Park, Xupeng Zhu, Robin Walters,
and Lawson L.S. Wong
- Abstract summary: In robotic tasks, changes in reference frames typically do not influence the underlying physical properties of the system.
We put forth a theory that unifies prior work on discrete and continuous symmetry in reinforcement learning, planning, and optimal control.
- Score: 5.943193860994729
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In robotic tasks, changes in reference frames typically do not influence the
underlying physical properties of the system, a property known as the
invariance of physical laws. These changes, which preserve distance, encompass
isometric transformations such as translations, rotations, and reflections,
collectively known as the Euclidean group. In this work, we delve into the
design of improved learning algorithms for reinforcement learning and planning
tasks that possess Euclidean group symmetry. We put forth a theory that
unifies prior work on discrete and continuous symmetry in reinforcement learning,
planning, and optimal control. On the algorithmic side, we further extend 2D
value-based path planning to continuous MDPs and propose a pipeline
for constructing equivariant sampling-based planning algorithms. Our work is
substantiated with empirical evidence and illustrated through examples that
explain the benefits of equivariance to Euclidean symmetry in tackling natural
control problems.
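As an illustrative sketch (not code from the paper), the core equivariance idea can be demonstrated by group-averaging an arbitrary policy over a discrete rotation subgroup of the Euclidean group; the policy names and the C4 subgroup choice here are assumptions for the example:

```python
import numpy as np

# Hypothetical example: symmetrizing an arbitrary 2D policy over the C4
# rotation subgroup of the Euclidean group yields an equivariant policy,
# i.e. rotating the state rotates the action: pi(R s) = R pi(s).

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

C4 = [rot(k * np.pi / 2) for k in range(4)]

def base_policy(s):
    # Arbitrary (non-equivariant) mapping from 2D state to 2D action.
    return np.array([s[0] ** 3 + s[1], s[0] * s[1]])

def symmetrized_policy(s):
    # Group averaging: pi_bar(s) = (1/|G|) sum_g g^{-1} pi_base(g s).
    return np.mean([g.T @ base_policy(g @ s) for g in C4], axis=0)

s = np.array([0.3, -0.7])
g = C4[1]  # 90-degree rotation
assert np.allclose(symmetrized_policy(g @ s), g @ symmetrized_policy(s))
```

Group averaging is a generic construction; equivariant network architectures achieve the same property without the |G|-fold evaluation cost.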
Related papers
- Approximate Equivariance in Reinforcement Learning [35.04248486334824]
Equivariant neural networks have shown great success in reinforcement learning.
In many problems, only approximate symmetry is present, which makes imposing exact symmetry inappropriate.
We develop approximately equivariant algorithms in reinforcement learning.
arXiv Detail & Related papers (2024-11-06T19:44:46Z)
- Learning Infinitesimal Generators of Continuous Symmetries from Data [15.42275880523356]
We propose a novel symmetry learning algorithm based on transformations defined with one-parameter groups.
Our method is built upon minimal inductive biases, encompassing not only commonly utilized symmetries rooted in Lie groups but also extending to symmetries derived from nonlinear generators.
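A minimal sketch of the one-parameter-group idea (the generator matrix and truncation depth are illustrative assumptions, not the paper's method): the infinitesimal generator of 2D rotations exponentiates to the familiar rotation matrix.

```python
import numpy as np

# The generator G of SO(2): exp(tG) traces out the one-parameter group
# of rotations, with R(t) = rotation by angle t.
G = np.array([[0.0, -1.0], [1.0, 0.0]])

def mat_exp(A, terms=30):
    # Truncated Taylor series for the matrix exponential.
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

t = 0.8
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
assert np.allclose(mat_exp(t * G), R)  # exp(tG) is rotation by angle t
```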
arXiv Detail & Related papers (2024-10-29T08:28:23Z)
- Current Symmetry Group Equivariant Convolution Frameworks for Representation Learning [5.802794302956837]
Euclidean deep learning is often inadequate for addressing real-world signals where the representation space is irregular and curved with complex topologies.
We focus on the importance of symmetry group equivariant deep learning models and their realization of convolution-like operations on graphs, 3D shapes, and non-Euclidean spaces.
arXiv Detail & Related papers (2024-09-11T15:07:18Z)
- Equivariant Ensembles and Regularization for Reinforcement Learning in Map-based Path Planning [5.69473229553916]
This paper proposes a method to construct equivariant policies and invariant value functions without specialized neural network components.
We show how equivariant ensembles and regularization benefit sample efficiency and performance.
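A hedged sketch of the ensemble idea (the value function and C4 group here are illustrative assumptions): averaging an arbitrary value estimate over a symmetry group yields an invariant value function without any specialized network components.

```python
import numpy as np

# Group-averaged ensemble: V_bar(s) = (1/|G|) sum_g V(g s) satisfies
# V_bar(g s) = V_bar(s) for every g in the group, since the average runs
# over the same set of transformed states.

def rot(k):
    theta = k * np.pi / 2
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

C4 = [rot(k) for k in range(4)]

def value(s):
    # Arbitrary, non-invariant value estimate of a 2D state.
    return s[0] + 2.0 * s[1] ** 2

def ensemble_value(s):
    return np.mean([value(g @ s) for g in C4])

s = np.array([0.5, 1.5])
assert np.isclose(ensemble_value(C4[1] @ s), ensemble_value(s))  # invariance
```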
arXiv Detail & Related papers (2024-03-19T16:01:25Z)
- Learning Layer-wise Equivariances Automatically using Gradients [66.81218780702125]
Convolutions encode equivariance symmetries into neural networks, leading to better generalisation performance.
However, such symmetries provide fixed hard constraints on the functions a network can represent: they need to be specified in advance and cannot be adapted.
Our goal is to allow flexible symmetry constraints that can automatically be learned from data using gradients.
arXiv Detail & Related papers (2023-10-09T20:22:43Z)
- Oracle-Preserving Latent Flows [58.720142291102135]
We develop a methodology for the simultaneous discovery of multiple nontrivial continuous symmetries across an entire labelled dataset.
The symmetry transformations and the corresponding generators are modeled with fully connected neural networks trained with a specially constructed loss function.
The two new elements in this work are the use of a reduced-dimensionality latent space and the generalization to transformations invariant with respect to high-dimensional oracles.
arXiv Detail & Related papers (2023-02-02T00:13:32Z)
- Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z)
- Neural Bregman Divergences for Distance Learning [60.375385370556145]
We propose a new approach to learning arbitrary Bregman divergences in a differentiable manner via input convex neural networks.
We show that our method more faithfully learns divergences over a set of both new and previously studied tasks.
Our tests further extend to known asymmetric, but non-Bregman tasks, where our method still performs competitively despite misspecification.
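For context, a Bregman divergence is D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y> for a convex function phi; the paper parameterizes phi with an input-convex neural network. A minimal sketch under the assumption phi(x) = ||x||^2, which recovers the squared Euclidean distance:

```python
import numpy as np

# Bregman divergence from a convex generator phi and its gradient.
def bregman(phi, grad_phi, x, y):
    return phi(x) - phi(y) - grad_phi(y) @ (x - y)

phi = lambda v: v @ v            # phi(x) = ||x||^2
grad_phi = lambda v: 2.0 * v     # its gradient

x = np.array([1.0, 2.0])
y = np.array([0.0, -1.0])
d = bregman(phi, grad_phi, x, y)
assert np.isclose(d, np.sum((x - y) ** 2))  # recovers ||x - y||^2
```

Swapping the closed-form phi for a learned input-convex network keeps the same divergence formula while letting the geometry be fit to data.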
arXiv Detail & Related papers (2022-06-09T20:53:15Z)
- Integrating Symmetry into Differentiable Planning with Steerable Convolutions [5.916280909373456]
Motivated by equivariant convolution networks, we treat the path planning problem as signals over grids.
We show that value iteration in this case is a linear equivariant operator, which is a (steerable) convolution.
Our implementation is based on VINs and uses steerable convolution networks to incorporate symmetry.
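An illustrative sketch of the equivariance claim (a toy grid world with a 4-action move set, not the VIN codebase): because the value-iteration update is built from rotation-symmetric local operations, planning commutes with rotating the reward map.

```python
import numpy as np

# Value iteration on a toroidal 2D grid: the max over the 4 neighbor
# shifts is a rotation-symmetric local operator, so rotating the reward
# map by 90 degrees rotates the resulting value map (C4 equivariance).

def value_iteration(reward, gamma=0.9, iters=50):
    V = np.zeros_like(reward)
    for _ in range(iters):
        shifted = [np.roll(V, 1, 0), np.roll(V, -1, 0),
                   np.roll(V, 1, 1), np.roll(V, -1, 1)]
        V = reward + gamma * np.max(shifted, axis=0)
    return V

rng = np.random.default_rng(0)
reward = rng.standard_normal((8, 8))

V = value_iteration(reward)
V_rot = value_iteration(np.rot90(reward))
assert np.allclose(V_rot, np.rot90(V))  # planning commutes with rotation
```

Steerable convolutions exploit exactly this commutation property inside the network, so the symmetry holds by construction rather than by averaging.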
arXiv Detail & Related papers (2022-06-08T04:58:48Z)
- Geometric Methods for Sampling, Optimisation, Inference and Adaptive Agents [102.42623636238399]
We identify fundamental geometric structures that underlie the problems of sampling, optimisation, inference and adaptive decision-making.
We derive algorithms that exploit these geometric structures to solve these problems efficiently.
arXiv Detail & Related papers (2022-03-20T16:23:17Z)
- Meta-Learning Symmetries by Reparameterization [63.85144439337671]
We present a method for learning and encoding equivariances into networks by learning corresponding parameter sharing patterns from data.
Our experiments suggest that it can automatically learn to encode equivariances to common transformations used in image processing tasks.
arXiv Detail & Related papers (2020-07-06T17:59:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.