Improved Canonicalization for Model Agnostic Equivariance
- URL: http://arxiv.org/abs/2405.14089v2
- Date: Fri, 15 Nov 2024 16:08:55 GMT
- Title: Improved Canonicalization for Model Agnostic Equivariance
- Authors: Siba Smarak Panigrahi, Arnab Kumar Mondal
- Abstract summary: Building equivariant models using traditional methods requires designing equivariant versions of existing models and training them from scratch.
We propose a new optimization-based method that employs any non-equivariant network for canonicalization.
Our method uses contrastive learning to efficiently learn a canonical orientation and offers more flexibility for the choice of canonicalization network.
- Score: 6.783232060611113
- Abstract: This work introduces a novel approach to achieving architecture-agnostic equivariance in deep learning, particularly addressing the limitations of traditional layerwise equivariant architectures and the inefficiencies of the existing architecture-agnostic methods. Building equivariant models using traditional methods requires designing equivariant versions of existing models and training them from scratch, a process that is both impractical and resource-intensive. Canonicalization has emerged as a promising alternative for inducing equivariance without altering model architecture, but it suffers from the need for highly expressive and expensive equivariant networks to learn canonical orientations accurately. We propose a new optimization-based method that employs any non-equivariant network for canonicalization. Our method uses contrastive learning to efficiently learn a canonical orientation and offers more flexibility for the choice of canonicalization network. We empirically demonstrate that this approach outperforms existing methods in achieving equivariance for large pretrained models and significantly speeds up the canonicalization process, making it up to 2 times faster.
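As a concrete illustration of the mechanism described in the abstract, the sketch below (not the authors' code; module names are illustrative) wraps a frozen pretrained model for the group of 90-degree image rotations. Any plain, non-equivariant network can serve as the canonicalizer: it scores the rotated copies of each input, and the highest-scoring copy is passed to the predictor. The contrastive objective used to train the scorer is omitted.
```python
# Minimal sketch (not the authors' code) of canonicalization for the
# group of 90-degree image rotations. `canon_net` is any plain,
# non-equivariant network that assigns one score per image.
import torch
import torch.nn as nn

class CanonicalizedModel(nn.Module):
    def __init__(self, canon_net: nn.Module, pretrained: nn.Module):
        super().__init__()
        self.canon_net = canon_net    # non-equivariant canonicalizer
        self.pretrained = pretrained  # large frozen predictor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Score all four rotated copies of each input: shape (4, batch).
        rotations = [torch.rot90(x, k, dims=(-2, -1)) for k in range(4)]
        scores = torch.stack([self.canon_net(r).squeeze(-1) for r in rotations])
        # Map each sample to its highest-scoring ("canonical") orientation.
        k_star = scores.argmax(dim=0)
        canon_x = torch.stack([
            torch.rot90(x[i], int(k_star[i]), dims=(-2, -1))
            for i in range(x.shape[0])
        ])
        # Rotating x merely permutes the scored copies, so the canonical
        # input (and hence the output) is unchanged: exact invariance up
        # to ties, regardless of how expressive canon_net is.
        return self.pretrained(canon_x)
```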
Related papers
- Improving Equivariant Model Training via Constraint Relaxation [31.507956579770088]
Equivariant neural networks have been widely used in a variety of applications due to their ability to generalize well in tasks where the underlying data symmetries are known.
We propose a novel framework for improving the optimization of such models by relaxing the hard equivariance constraint during training.
We provide experimental results on different state-of-the-art network architectures, demonstrating how this training framework can result in equivariant models with improved generalization performance.
arXiv Detail & Related papers (2024-08-23T17:35:08Z)
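One common way to realize the relaxed constraint described above is to replace a hard architectural constraint with a loss penalty on equivariance violations. The sketch below assumes 90-degree image rotations and an image-to-image model; it illustrates the general idea, not necessarily the paper's exact relaxation scheme.
```python
# Illustrative relaxed-equivariance penalty for the C4 rotation group.
import torch

def equivariance_penalty(model, x: torch.Tensor) -> torch.Tensor:
    """Squared gap between model(g.x) and g.model(x) for a random g in C4."""
    k = int(torch.randint(1, 4, (1,)))
    gx = torch.rot90(x, k, dims=(-2, -1))
    # Equivariance means the output rotates along with the input;
    # the penalty measures the violation instead of forbidding it.
    return ((model(gx) - torch.rot90(model(x), k, dims=(-2, -1))) ** 2).mean()

# total_loss = task_loss + lam * equivariance_penalty(model, x),
# with the weight `lam` annealed over training to tighten the constraint.
```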
- Approximately Equivariant Neural Processes [47.14384085714576]
When modelling real-world data, learning problems are often not exactly equivariant, but only approximately.
Current approaches to achieving this cannot usually be applied out-of-the-box to any architecture and symmetry group.
We develop a general approach to achieving this using existing equivariant architectures.
arXiv Detail & Related papers (2024-06-19T12:17:14Z)
- A Canonicalization Perspective on Invariant and Equivariant Learning [54.44572887716977]
We introduce a canonicalization perspective that provides an essential and complete view of the design of frames.
We show that there exists an inherent connection between frames and canonical forms.
We design novel frames for eigenvectors that are strictly superior to existing methods.
arXiv Detail & Related papers (2024-05-28T17:22:15Z)
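For reference, the frame-canonicalization connection mentioned above can be stated via the standard frame-averaging construction: a frame assigns each input a subset of the group, invariance follows by averaging over it, and a canonicalization corresponds to the special case of a frame of size one.
```latex
% Standard frame averaging: a frame \mathcal{F} maps each input to a
% subset of the group G with \mathcal{F}(g \cdot x) = g\,\mathcal{F}(x);
% averaging makes any f invariant, and a canonicalization is the
% special case |\mathcal{F}(x)| = 1.
\[
  \hat{f}(x) \;=\; \frac{1}{|\mathcal{F}(x)|}
  \sum_{g \in \mathcal{F}(x)} f\!\left(g^{-1} \cdot x\right)
\]
```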
- Equivariant Adaptation of Large Pretrained Models [20.687626756753563]
We show that a canonicalization network can effectively be used to make a large pretrained network equivariant.
Using dataset-dependent priors to inform the canonicalization function, we are able to make large pretrained models equivariant while maintaining their performance.
arXiv Detail & Related papers (2023-10-02T21:21:28Z)
- Multiplicative update rules for accelerating deep learning training and increasing robustness [69.90473612073767]
We propose an optimization framework that fits a wide range of machine learning algorithms and enables one to apply alternative update rules.
We claim that the proposed framework accelerates training while leading to more robust models, in contrast to the traditionally used additive update rule.
arXiv Detail & Related papers (2023-07-14T06:44:43Z)
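As a generic illustration of an alternative update rule (not necessarily the paper's exact rule), the sketch below contrasts the usual additive step with an exponentiated-gradient-style multiplicative step.
```python
# Additive (SGD-style) update vs. a multiplicative update; illustrative.
import torch

def additive_step(w: torch.Tensor, grad: torch.Tensor, lr: float) -> torch.Tensor:
    return w - lr * grad              # w <- w - eta * g

def multiplicative_step(w: torch.Tensor, grad: torch.Tensor, lr: float) -> torch.Tensor:
    # Scales each weight by a positive factor, so the signs (and zeros)
    # of w are preserved, unlike under the additive rule.
    return w * torch.exp(-lr * grad)  # w <- w * exp(-eta * g)
```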
- Federated Variational Inference Methods for Structured Latent Variable Models [1.0312968200748118]
Federated learning methods enable model training across distributed data sources without data leaving their original locations.
We present a general and elegant solution based on structured variational inference, widely used in Bayesian machine learning.
We also provide a communication-efficient variant analogous to the canonical FedAvg algorithm.
arXiv Detail & Related papers (2023-02-07T08:35:04Z)
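For context, the canonical FedAvg aggregation step referenced above looks as follows: each client trains locally, then the server averages the client parameters, weighted by local dataset size. This is a minimal, illustrative sketch.
```python
# Minimal FedAvg server-side aggregation over client state dicts.
def fedavg(client_states: list, client_sizes: list) -> dict:
    """Weighted average of per-parameter tensors across clients."""
    total = sum(client_sizes)
    return {
        name: sum((n / total) * state[name]
                  for state, n in zip(client_states, client_sizes))
        for name in client_states[0]
    }
```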
- Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
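The construction underlying learned canonicalization can be written compactly: if the learned canonicalization function c is itself equivariant, c(g·x) = g·c(x) (up to symmetries of x), then wrapping any predictor f as below yields an exactly G-equivariant map, regardless of f's architecture.
```latex
% The canonicalization wrapping: c picks a group element per input,
% the input is mapped to its canonical form, and the group action is
% reapplied to the output.
\[
  \Phi(x) \;=\; \rho\big(c(x)\big)\, f\!\Big(\rho\big(c(x)\big)^{-1} \cdot x\Big),
  \qquad
  \Phi(g \cdot x) \;=\; \rho(g)\,\Phi(x).
\]
```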
- Slimmable Domain Adaptation [112.19652651687402]
We introduce a simple framework, Slimmable Domain Adaptation, to improve cross-domain generalization with a weight-sharing model bank.
Our framework surpasses other competing approaches by a very large margin on multiple benchmarks.
arXiv Detail & Related papers (2022-06-14T06:28:04Z)
- Stabilizing Equilibrium Models by Jacobian Regularization [151.78151873928027]
Deep equilibrium networks (DEQs) are a new class of models that eschews traditional depth in favor of finding the fixed point of a single nonlinear layer.
We propose a regularization scheme for DEQ models that explicitly regularizes the Jacobian of the fixed-point update equations to stabilize the learning of equilibrium models.
We show that this regularization adds only minimal computational cost, significantly stabilizes the fixed-point convergence in both forward and backward passes, and scales well to high-dimensional, realistic domains.
arXiv Detail & Related papers (2021-06-28T00:14:11Z)
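A minimal sketch of the pieces described in the entry above, under illustrative assumptions: a naive fixed-point forward pass (real DEQs use faster root solvers) and a Hutchinson-style estimate of the squared Frobenius norm of the Jacobian of the update map, which serves as the regularizer.
```python
# Illustrative DEQ forward pass and Jacobian regularizer.
import torch

def deq_forward(f, x: torch.Tensor, z0: torch.Tensor, n_iter: int = 30) -> torch.Tensor:
    """Naive fixed-point iteration z <- f(z, x)."""
    z = z0
    for _ in range(n_iter):
        z = f(z, x)
    return z

def jacobian_penalty(f, z_star: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """E[||eps^T df/dz||^2] = ||df/dz||_F^2 for standard normal eps."""
    z = z_star.detach().requires_grad_(True)
    eps = torch.randn_like(z)
    # Vector-Jacobian product eps^T (df/dz) via reverse-mode autograd.
    vjp = torch.autograd.grad(f(z, x), z, grad_outputs=eps, create_graph=True)[0]
    return (vjp ** 2).sum() / z.numel()  # normalized penalty term
```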
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.