Symmetry Detection in Trajectory Data for More Meaningful Reinforcement Learning Representations
- URL: http://arxiv.org/abs/2211.16381v1
- Date: Tue, 29 Nov 2022 17:00:26 GMT
- Title: Symmetry Detection in Trajectory Data for More Meaningful Reinforcement Learning Representations
- Authors: Marissa D'Alonzo and Rebecca Russell
- Abstract summary: We present a method of automatically detecting RL symmetries directly from raw trajectory data without requiring active control of the system.
We show in experiments on two simulated RL use cases that our method can determine the symmetries underlying both the environment physics and the trained RL policy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge of the symmetries of reinforcement learning (RL) systems can be
used to create compressed and semantically meaningful representations of a
low-level state space. We present a method of automatically detecting RL
symmetries directly from raw trajectory data without requiring active control
of the system. Our method generates candidate symmetries and trains a recurrent
neural network (RNN) to discriminate between the original trajectories and the
transformed trajectories for each candidate symmetry. The RNN discriminator's
accuracy for each candidate reveals how symmetric the system is under that
transformation. This information can be used to create high-level
representations that are invariant to all symmetries on a dataset level and to
communicate properties of the RL behavior to users. We show in experiments on
two simulated RL use cases (a pusher robot and a UAV flying in wind) that our
method can determine the symmetries underlying both the environment physics and
the trained RL policy.
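
Below is a minimal sketch of the discriminator idea described in the abstract, assuming logged trajectories are stored as fixed-length tensors of states. The candidate transforms (sign flips of position/velocity components), the GRU architecture, and the training loop are illustrative assumptions, not the authors' implementation; accuracy near 0.5 for a candidate suggests the trajectories are approximately invariant under that transformation, while accuracy near 1.0 suggests they are not.

```python
# Sketch only: hypothetical shapes and transforms, not the paper's exact setup.
import torch
import torch.nn as nn


class TrajectoryDiscriminator(nn.Module):
    """GRU classifier: original trajectory (label 0) vs. transformed (label 1)."""

    def __init__(self, state_dim, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, traj):                 # traj: (batch, T, state_dim)
        _, h = self.rnn(traj)                # h: (1, batch, hidden_dim)
        return self.head(h[-1]).squeeze(-1)  # logits: (batch,)


def symmetry_score(trajectories, transform, epochs=20, lr=1e-3):
    """Train a discriminator for one candidate symmetry; return held-out accuracy."""
    n, _, d = trajectories.shape
    split = n // 2
    real = trajectories[:split]               # untouched trajectories
    fake = transform(trajectories[split:])    # trajectories under the candidate transform
    x = torch.cat([real, fake])
    y = torch.cat([torch.zeros(len(real)), torch.ones(len(fake))])
    perm = torch.randperm(len(x))
    x, y = x[perm], y[perm]
    n_train = int(0.8 * len(x))

    model = TrajectoryDiscriminator(d)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x[:n_train]), y[:n_train])
        loss.backward()
        opt.step()
    with torch.no_grad():
        preds = (model(x[n_train:]) > 0).float()
        return (preds == y[n_train:]).float().mean().item()


# Hypothetical candidate symmetries for an assumed (x, y, vx, vy) state layout.
candidates = {
    "reflect_x": lambda t: t * torch.tensor([-1.0, 1.0, -1.0, 1.0]),
    "reflect_y": lambda t: t * torch.tensor([1.0, -1.0, 1.0, -1.0]),
}

if __name__ == "__main__":
    trajs = torch.randn(512, 50, 4)  # stand-in for logged RL trajectories
    for name, f in candidates.items():
        print(f"{name}: discriminator accuracy = {symmetry_score(trajs, f):.2f}")
```

In practice one would run this once per candidate transformation over the recorded trajectory dataset and rank candidates by how close the discriminator accuracy stays to chance.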
Related papers
- Learning Infinitesimal Generators of Continuous Symmetries from Data [15.42275880523356]
We propose a novel symmetry learning algorithm based on transformations defined with one-parameter groups.
Our method is built upon minimal inductive biases, encompassing not only commonly utilized symmetries rooted in Lie groups but also extending to symmetries derived from nonlinear generators.
arXiv Detail & Related papers (2024-10-29T08:28:23Z)
- Symmetry Discovery for Different Data Types [52.2614860099811]
Equivariant neural networks incorporate symmetries into their architecture, achieving higher generalization performance.
We propose LieSD, a method for discovering symmetries via trained neural networks which approximate the input-output mappings of the tasks.
We validate the performance of LieSD on tasks with symmetries such as the two-body problem, the moment of inertia matrix prediction, and top quark tagging.
arXiv Detail & Related papers (2024-10-13T13:39:39Z)
- The Empirical Impact of Neural Parameter Symmetries, or Lack Thereof [50.49582712378289]
We investigate the impact of neural parameter symmetries by introducing new neural network architectures.
We develop two methods, with some provable guarantees, of modifying standard neural networks to reduce parameter space symmetries.
Our experiments reveal several interesting observations on the empirical impact of parameter symmetries.
arXiv Detail & Related papers (2024-05-30T16:32:31Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach builds on a recently introduced framework for learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Learning Radio Environments by Differentiable Ray Tracing [56.40113938833999]
We introduce a novel gradient-based calibration method, complemented by differentiable parametrizations of material properties, scattering and antenna patterns.
We have validated our method using both synthetic data and real-world indoor channel measurements, employing a distributed multiple-input multiple-output (MIMO) channel sounder.
arXiv Detail & Related papers (2023-11-30T13:50:21Z)
- Oracle-Preserving Latent Flows [58.720142291102135]
We develop a methodology for the simultaneous discovery of multiple nontrivial continuous symmetries across an entire labelled dataset.
The symmetry transformations and the corresponding generators are modeled with fully connected neural networks trained with a specially constructed loss function.
The two new elements in this work are the use of a reduced-dimensionality latent space and the generalization to transformations invariant with respect to high-dimensional oracles.
arXiv Detail & Related papers (2023-02-02T00:13:32Z)
- Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories [37.14064734165109]
Natural agents can learn from multiple data sources that differ in size, quality, and types of measurements.
We study this in the context of offline reinforcement learning (RL) by introducing a new, practically motivated semi-supervised setting.
arXiv Detail & Related papers (2022-10-12T18:22:23Z)
- LieGG: Studying Learned Lie Group Generators [1.5293427903448025]
Symmetries built into a neural network have proven very beneficial for a wide range of tasks, since they reduce the amount of data needed to learn them.
We present a method to extract symmetries learned by a neural network and to evaluate the degree to which a network is invariant to them.
arXiv Detail & Related papers (2022-10-09T20:42:37Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Koopman Q-learning: Offline Reinforcement Learning via Symmetries of Dynamics [29.219095364935885]
Offline reinforcement learning leverages large datasets to train policies without interactions with the environment.
Current algorithms over-fit to the training dataset and perform poorly when deployed to out-of-distribution generalizations of the environment.
We learn a Koopman latent representation which allows us to infer symmetries of the system's underlying dynamics.
We empirically evaluate our method on several benchmark offline reinforcement learning tasks and datasets including D4RL, Metaworld and Robosuite.
arXiv Detail & Related papers (2021-11-02T04:32:18Z)
- Detecting Symmetries with Neural Networks [0.0]
We make extensive use of the structure in the embedding layer of the neural network.
We identify whether a symmetry is present and identify the orbits of the symmetry in the input.
For this example we present a novel data representation in terms of graphs.
arXiv Detail & Related papers (2020-03-30T17:58:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.