Addressing Imperfect Symmetry: a Novel Symmetry-Learning Actor-Critic Extension
- URL: http://arxiv.org/abs/2309.02711v1
- Date: Wed, 6 Sep 2023 04:47:46 GMT
- Title: Addressing Imperfect Symmetry: a Novel Symmetry-Learning Actor-Critic Extension
- Authors: Miguel Abreu, Luis Paulo Reis, Nuno Lau
- Abstract summary: We introduce Adaptive Symmetry Learning (ASL) – a model-minimization actor-critic extension that addresses incomplete symmetry.
ASL consists of a symmetry fitting component and a modular loss function that enforces a common symmetric relation across all states while adapting to the learned policy.
The results demonstrate that ASL is capable of recovering from large perturbations and generalizing to hidden symmetric states.
- Score: 0.46040036610482665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Symmetry, a fundamental concept for understanding our environment, often
oversimplifies reality from a mathematical perspective. Humans are a prime
example, deviating from perfect symmetry in terms of appearance and cognitive
biases (e.g. having a dominant hand). Nevertheless, our brain can easily
overcome these imperfections and efficiently adapt to symmetrical tasks. The
driving motivation behind this work lies in capturing this ability through
reinforcement learning. To this end, we introduce Adaptive Symmetry Learning
(ASL) – a model-minimization actor-critic extension that
addresses incomplete or inexact symmetry descriptions by adapting itself during
the learning process. ASL consists of a symmetry fitting component and a
modular loss function that enforces a common symmetric relation across all
states while adapting to the learned policy. The performance of ASL is compared
to existing symmetry-enhanced methods in a case study involving a four-legged
ant model for multidirectional locomotion tasks. The results demonstrate that
ASL is capable of recovering from large perturbations and generalizing
knowledge to hidden symmetric states. It achieves comparable or better
performance than alternative methods in most scenarios, making it a valuable
approach for leveraging model symmetry while compensating for inherent
perturbations.
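The abstract does not spell out the ASL loss itself, but the general idea of a symmetry-consistency term for an actor-critic policy can be sketched in a few lines. The following is a minimal PyTorch sketch under assumed conventions: the mirror transform as a fixed index permutation plus sign flips, the names mirror and symmetry_consistency_loss, and the squared-error form are all illustrative assumptions, not the paper's formulation. In particular, ASL additionally fits the symmetry transform during learning, which this fixed-transform sketch omits.

import torch
import torch.nn as nn

def mirror(x: torch.Tensor, perm: torch.Tensor, signs: torch.Tensor) -> torch.Tensor:
    # Apply a fixed mirror symmetry: permute components, then flip signs.
    # For a left/right-symmetric ant, perm would swap the indices of mirrored
    # joints and signs would negate lateral quantities (hypothetical layout).
    return x[..., perm] * signs

def symmetry_consistency_loss(policy: nn.Module, states: torch.Tensor,
                              s_perm: torch.Tensor, s_signs: torch.Tensor,
                              a_perm: torch.Tensor, a_signs: torch.Tensor) -> torch.Tensor:
    # Penalize disagreement between pi(mirror(s)) and mirror(pi(s)),
    # i.e. encourage the policy to act symmetrically on mirrored states.
    actions = policy(states)                                        # pi(s)
    mirrored_actions = mirror(actions, a_perm, a_signs)             # mirror(pi(s))
    actions_on_mirrored = policy(mirror(states, s_perm, s_signs))   # pi(mirror(s))
    return ((actions_on_mirrored - mirrored_actions) ** 2).mean()

# Hypothetical usage: add the penalty to the usual actor loss,
# total_loss = actor_critic_loss + lam * symmetry_consistency_loss(policy, batch_states, ...)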
Related papers
- Understanding the Role of Equivariance in Self-supervised Learning [51.56331245499712]
Equivariant self-supervised learning (E-SSL) learns features to be augmentation-aware.
We identify a critical explaining-away effect in E-SSL that creates a synergy between the equivariant and classification tasks.
We reveal several principles for practical designs of E-SSL.
arXiv Detail & Related papers (2024-11-10T16:09:47Z)
- Approximate Equivariance in Reinforcement Learning [35.04248486334824]
Equivariant neural networks have shown great success in reinforcement learning.
In many problems, only approximate symmetry is present, which makes imposing exact symmetry inappropriate.
We develop approximately equivariant algorithms in reinforcement learning.
arXiv Detail & Related papers (2024-11-06T19:44:46Z)
- SymmetryLens: A new candidate paradigm for unsupervised symmetry learning via locality and equivariance [0.0]
We develop a new, unsupervised symmetry learning method that starts with raw data.
We demonstrate that this coupling between symmetry and locality, together with a special optimization technique developed for entropy estimation, results in a highly stable system.
The symmetry actions we consider are group representations; however, we believe the approach could be generalized to nonlinear actions of non-commutative Lie groups.
arXiv Detail & Related papers (2024-10-07T17:40:51Z)
- Symmetry Considerations for Learning Task Symmetric Robot Policies [12.856889419651521]
Symmetry is a fundamental aspect of many real-world robotic tasks.
Current deep reinforcement learning (DRL) approaches can seldom harness and exploit symmetry effectively.
arXiv Detail & Related papers (2024-03-07T09:41:11Z)
- The Common Stability Mechanism behind most Self-Supervised Learning Approaches [64.40701218561921]
We provide a framework to explain the stability mechanism of different self-supervised learning techniques.
We discuss the working mechanism of contrastive techniques like SimCLR, non-contrastive techniques like BYOL, SWAV, SimSiam, Barlow Twins, and DINO.
We formulate different hypotheses and test them using the Imagenet100 dataset.
arXiv Detail & Related papers (2024-02-22T20:36:24Z)
- Learning Layer-wise Equivariances Automatically using Gradients [66.81218780702125]
Convolutions encode equivariance symmetries into neural networks, leading to better generalisation performance.
However, such symmetries provide fixed, hard constraints on the functions a network can represent; they must be specified in advance and cannot be adapted.
Our goal is to allow flexible symmetry constraints that can automatically be learned from data using gradients.
arXiv Detail & Related papers (2023-10-09T20:22:43Z)
- Symmetry Induces Structure and Constraint of Learning [0.0]
We unveil the importance of the loss function symmetries in affecting, if not deciding, the learning behavior of machine learning models.
Common instances of mirror symmetries in deep learning include rescaling, rotation, and permutation symmetry.
We show that the theoretical framework can explain intriguing phenomena, such as the loss of plasticity and various collapse phenomena in neural networks.
arXiv Detail & Related papers (2023-09-29T02:21:31Z)
- The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry [6.716931832076628]
We show that imposing symmetry constraints that do not exactly match the domain symmetry is very helpful in learning the true symmetry in the environment.
We demonstrate that an equivariant model can significantly outperform non-equivariant methods on domains with latent symmetries both in supervised learning and in reinforcement learning for robotic manipulation and control problems.
arXiv Detail & Related papers (2022-11-16T21:51:55Z)
- On the Importance of Asymmetry for Siamese Representation Learning [53.86929387179092]
Siamese networks are conceptually symmetric with two parallel encoders.
We study the importance of asymmetry by explicitly distinguishing the two encoders within the network.
We find the improvements from asymmetric designs generalize well to longer training schedules, multiple other frameworks and newer backbones.
arXiv Detail & Related papers (2022-04-01T17:57:24Z)
- A Symmetric Loss Perspective of Reliable Machine Learning [87.68601212686086]
We review how a symmetric loss can yield robust classification from corrupted labels in balanced error rate (BER) minimization.
We demonstrate how the robust AUC method can benefit natural language processing in the problem where we want to learn only from relevant keywords.
arXiv Detail & Related papers (2021-01-05T06:25:47Z)
- Meta-Learning Symmetries by Reparameterization [63.85144439337671]
We present a method for learning and encoding equivariances into networks by learning corresponding parameter sharing patterns from data.
Our experiments suggest that it can automatically learn to encode equivariances to common transformations used in image processing tasks.
arXiv Detail & Related papers (2020-07-06T17:59:54Z)