Addressing Imperfect Symmetry: a Novel Symmetry-Learning Actor-Critic Extension
- URL: http://arxiv.org/abs/2309.02711v1
- Date: Wed, 6 Sep 2023 04:47:46 GMT
- Title: Addressing Imperfect Symmetry: a Novel Symmetry-Learning Actor-Critic Extension
- Authors: Miguel Abreu, Luis Paulo Reis, Nuno Lau
- Abstract summary: We introduce Adaptive Symmetry Learning (ASL) – a model-minimization actor-critic extension that addresses incomplete symmetry.
ASL consists of a symmetry fitting component and a modular loss function that enforces a common symmetric relation across all states while adapting to the learned policy.
The results demonstrate that ASL is capable of recovering from large perturbations and generalizing to hidden symmetric states.
- Score: 0.46040036610482665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Symmetry, a fundamental concept to understand our environment, often
oversimplifies reality from a mathematical perspective. Humans are a prime
example, deviating from perfect symmetry in terms of appearance and cognitive
biases (e.g. having a dominant hand). Nevertheless, our brain can easily
overcome these imperfections and efficiently adapt to symmetrical tasks. The
driving motivation behind this work lies in capturing this ability through
reinforcement learning. To this end, we introduce Adaptive Symmetry Learning
(ASL) – a model-minimization actor-critic extension that
addresses incomplete or inexact symmetry descriptions by adapting itself during
the learning process. ASL consists of a symmetry fitting component and a
modular loss function that enforces a common symmetric relation across all
states while adapting to the learned policy. The performance of ASL is compared
to existing symmetry-enhanced methods in a case study involving a four-legged
ant model for multidirectional locomotion tasks. The results demonstrate that
ASL is capable of recovering from large perturbations and generalizing
knowledge to hidden symmetric states. It achieves comparable or better
performance than alternative methods in most scenarios, making it a valuable
approach for leveraging model symmetry while compensating for inherent
perturbations.
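The abstract's core idea – a loss term that enforces a common symmetric relation across states while adapting to the learned policy – can be illustrated with a minimal sketch. The function below is a generic symmetry-consistency penalty, not the paper's exact ASL formulation; the policy, mirroring maps, and shapes are all assumptions for illustration.

```python
import numpy as np

def symmetry_consistency_loss(policy, states, mirror_state, mirror_action):
    """Mean squared gap between the policy's response to a mirrored state
    and the mirrored response to the original state. It is zero on a batch
    exactly when the policy is equivariant under the given mirroring there.
    Illustrative sketch only, not the ASL loss from the paper."""
    actions = policy(states)                  # pi(s)
    mirrored = policy(mirror_state(states))   # pi(M_s(s))
    # Compare pi(M_s(s)) against M_a(pi(s)).
    return float(np.mean((mirrored - mirror_action(actions)) ** 2))
```

For a policy that is already odd-symmetric (e.g. `policy(s) = 2*s` with sign-flip mirroring), this penalty is exactly zero; any asymmetric bias yields a positive penalty that a gradient step can reduce.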
Related papers
- Symmetry Considerations for Learning Task Symmetric Robot Policies [12.856889419651521]
Symmetry is a fundamental aspect of many real-world robotic tasks.
Current deep reinforcement learning (DRL) approaches can seldom harness and exploit symmetry effectively.
arXiv Detail & Related papers (2024-03-07T09:41:11Z) - The Common Stability Mechanism behind most Self-Supervised Learning Approaches [64.40701218561921]
We provide a framework to explain the stability mechanism of different self-supervised learning techniques.
We discuss the working mechanism of contrastive techniques like SimCLR, non-contrastive techniques like BYOL, SWAV, SimSiam, Barlow Twins, and DINO.
We formulate different hypotheses and test them using the Imagenet100 dataset.
arXiv Detail & Related papers (2024-02-22T20:36:24Z) - Symmetry Breaking and Equivariant Neural Networks [17.740760773905986]
We introduce a novel notion of 'relaxed equivariance'.
We show how to incorporate this relaxation into equivariant multilayer perceptrons (E-MLPs).
The relevance of symmetry breaking is then discussed in various application domains.
arXiv Detail & Related papers (2023-12-14T15:06:48Z) - Learning Layer-wise Equivariances Automatically using Gradients [66.81218780702125]
Convolutions encode equivariance symmetries into neural networks leading to better generalisation performance.
However, such symmetries provide fixed hard constraints on the functions a network can represent; they must be specified in advance and cannot be adapted.
Our goal is to allow flexible symmetry constraints that can automatically be learned from data using gradients.
arXiv Detail & Related papers (2023-10-09T20:22:43Z) - Symmetry Induces Structure and Constraint of Learning [0.0]
We unveil the importance of the loss function symmetries in affecting, if not deciding, the learning behavior of machine learning models.
Common instances of mirror symmetries in deep learning include rescaling, rotation, and permutation symmetry.
We show that the theoretical framework can explain intriguing phenomena, such as the loss of plasticity and various collapse phenomena in neural networks.
arXiv Detail & Related papers (2023-09-29T02:21:31Z) - Regularizing Towards Soft Equivariance Under Mixed Symmetries [23.603875905608565]
We present a regularizer-based method for building a model for a dataset with mixed approximate symmetries.
We show that our method achieves better accuracy than prior approaches while discovering the approximate symmetry levels correctly.
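A regularizer-based approach to approximate symmetry can be sketched as a soft penalty rather than a hard architectural constraint. The snippet below is a generic illustration under assumed names (`f`, `g_in`, `g_out`, `strength`); it is not the reviewed paper's implementation.

```python
import numpy as np

def soft_equivariance_penalty(f, x, g_in, g_out, strength=1.0):
    """Penalize ||f(g_in(x)) - g_out(f(x))||^2, scaled by `strength`.
    A small `strength` tolerates mixed or only-approximate symmetries
    instead of enforcing exact equivariance as a hard constraint.
    Illustrative sketch, not the paper's method."""
    gap = f(g_in(x)) - g_out(f(x))
    return strength * float(np.mean(gap ** 2))
```

Tuning `strength` per symmetry (or learning it) is one way such a method could discover how closely each candidate symmetry actually holds in the data.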
arXiv Detail & Related papers (2023-06-01T05:33:41Z) - Improved Representation of Asymmetrical Distances with Interval Quasimetric Embeddings [45.69333765438636]
Asymmetrical distance structures (quasimetrics) are ubiquitous in our lives and are gaining more attention in machine learning applications.
We present four desirable properties in such quasimetric models, and show how prior works fail at them.
We propose Interval Quasimetric Embedding (IQE), which is designed to satisfy all four criteria.
arXiv Detail & Related papers (2022-11-28T08:22:26Z) - The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry [6.716931832076628]
We show that imposing symmetry constraints that do not exactly match the domain symmetry is very helpful in learning the true symmetry in the environment.
We demonstrate that an equivariant model can significantly outperform non-equivariant methods on domains with latent symmetries both in supervised learning and in reinforcement learning for robotic manipulation and control problems.
arXiv Detail & Related papers (2022-11-16T21:51:55Z) - On the Importance of Asymmetry for Siamese Representation Learning [53.86929387179092]
Siamese networks are conceptually symmetric with two parallel encoders.
We study the importance of asymmetry by explicitly distinguishing the two encoders within the network.
We find the improvements from asymmetric designs generalize well to longer training schedules, multiple other frameworks and newer backbones.
arXiv Detail & Related papers (2022-04-01T17:57:24Z) - A Symmetric Loss Perspective of Reliable Machine Learning [87.68601212686086]
We review how a symmetric loss can yield robust classification from corrupted labels in balanced error rate (BER) minimization.
We demonstrate how the robust AUC method can benefit natural language processing in the problem where we want to learn only from relevant keywords.
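A symmetric loss in this sense is one whose values at opposite margins sum to a constant, which is what yields robustness to corrupted labels in BER minimization. The sigmoid loss is a textbook instance of this property; the snippet below is a standalone illustration, not code from the reviewed paper.

```python
import math

def sigmoid_loss(margin):
    """Sigmoid loss l(z) = 1 / (1 + exp(z)).
    It satisfies the symmetry condition l(z) + l(-z) = 1 for every
    margin z, which is the property behind BER-minimization robustness
    to symmetric label noise."""
    return 1.0 / (1.0 + math.exp(margin))
```

Because the sum is constant, flipping a label changes the empirical risk only by a constant offset, leaving the minimizer unchanged under symmetric noise.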
arXiv Detail & Related papers (2021-01-05T06:25:47Z) - Meta-Learning Symmetries by Reparameterization [63.85144439337671]
We present a method for learning and encoding equivariances into networks by learning corresponding parameter sharing patterns from data.
Our experiments suggest that it can automatically learn to encode equivariances to common transformations used in image processing tasks.
arXiv Detail & Related papers (2020-07-06T17:59:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.