Optimizing Likelihood-free Inference using Self-supervised Neural
Symmetry Embeddings
- URL: http://arxiv.org/abs/2312.07615v1
- Date: Mon, 11 Dec 2023 21:06:07 GMT
- Title: Optimizing Likelihood-free Inference using Self-supervised Neural
Symmetry Embeddings
- Authors: Deep Chatterjee, Philip C. Harris, Maanas Goel, Malina Desai, Michael
W. Coughlin and Erik Katsavounidis
- Abstract summary: We present a technique for optimizing likelihood-free inference to make it even faster by marginalizing symmetries in a physical problem.
We demonstrate this approach on two simple physical problems and show faster convergence with fewer parameters than a normalizing flow that does not use a symmetry-informed representation.
- Score: 0.24084786718197512
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Likelihood-free inference is quickly emerging as a powerful tool to perform
fast/effective parameter estimation. We demonstrate a technique of optimizing
likelihood-free inference to make it even faster by marginalizing symmetries in
a physical problem. In this approach, physical symmetries, for example,
time-translation, are learned using joint-embedding via self-supervised learning
with symmetry data augmentations. Subsequently, parameter inference is
performed using a normalizing flow where the embedding network is used to
summarize the data before conditioning the parameters. We present this approach
on two simple physical problems and we show faster convergence with a smaller
number of parameters compared to a normalizing flow that does not use a
pre-trained symmetry-informed representation.
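
Below is a minimal, self-contained sketch (PyTorch) of the two-stage idea described in the abstract, on a toy problem: an embedding network is first trained with a self-supervised joint-embedding objective over time-translated views of the same signal, and a small conditional affine-coupling normalizing flow is then trained on the frozen embedding as the data summary. The damped-sinusoid simulator, the InfoNCE contrastive loss, the prior, and all network sizes are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LEN = 128                                   # samples per time series
t = torch.linspace(0.0, 1.0, LEN)

def sample_theta(n):
    # Hypothetical uniform prior over the two physical parameters (frequency, damping).
    return torch.rand(n, 2) * torch.tensor([9.0, 4.0]) + torch.tensor([1.0, 0.5])

def simulate(theta, shift):
    # Toy simulator: noisy damped sinusoid; `shift` is the time-translation nuisance.
    freq, damp = theta[:, :1], theta[:, 1:]
    clean = torch.exp(-damp * (t - shift)) * torch.sin(2 * torch.pi * freq * (t - shift))
    return clean + 0.05 * torch.randn(theta.shape[0], LEN)

# Stage 1: self-supervised joint embedding over time-translated views.
embed = nn.Sequential(nn.Linear(LEN, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 16))

def info_nce(z1, z2, temperature=0.1):
    # Contrastive objective: shifted views of the same signal map to nearby embeddings.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature
    return F.cross_entropy(logits, torch.arange(z1.shape[0]))

opt = torch.optim.Adam(embed.parameters(), lr=1e-3)
for _ in range(500):
    theta = sample_theta(256)
    x1 = simulate(theta, torch.rand(256, 1) * 0.3)   # view 1: random time shift
    x2 = simulate(theta, torch.rand(256, 1) * 0.3)   # view 2: another random shift
    loss = info_nce(embed(x1), embed(x2))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: conditional normalizing flow over theta, conditioned on the frozen embedding.
class Coupling(nn.Module):
    # One affine coupling layer; scale/shift depend on the untouched dimension and context.
    def __init__(self, context_dim, flip):
        super().__init__()
        self.flip = flip
        self.net = nn.Sequential(nn.Linear(1 + context_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, theta, context):
        keep, move = (theta[:, 1:], theta[:, :1]) if self.flip else (theta[:, :1], theta[:, 1:])
        s, m = self.net(torch.cat([keep, context], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                            # bounded log-scale for stability
        moved = move * torch.exp(s) + m
        out = torch.cat([moved, keep], 1) if self.flip else torch.cat([keep, moved], 1)
        return out, s.squeeze(1)                     # per-sample log|det J| of this layer

layers = nn.ModuleList([Coupling(16, False), Coupling(16, True), Coupling(16, False)])

def flow_log_prob(theta, context):
    z, logdet = theta, torch.zeros(theta.shape[0])
    for layer in layers:
        z, ld = layer(z, context)
        logdet = logdet + ld
    return torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(1) + logdet

flow_opt = torch.optim.Adam(layers.parameters(), lr=1e-3)
for _ in range(500):
    theta = sample_theta(256)
    x = simulate(theta, torch.rand(256, 1) * 0.3)
    with torch.no_grad():                            # embedding is pre-trained and frozen
        context = embed(x)
    nll = -flow_log_prob(theta, context).mean()
    flow_opt.zero_grad()
    nll.backward()
    flow_opt.step()
```

Because the embedding is trained to be invariant to the time-translation augmentation, the flow never has to spend capacity modelling that nuisance direction, which is the intuition behind the faster convergence reported in the abstract.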
Related papers
- Learning (Approximately) Equivariant Networks via Constrained Optimization [25.51476313302483]
Equivariant neural networks are designed to respect symmetries through their architecture.
Real-world data often departs from perfect symmetry because of noise, structural variation, measurement bias, or other symmetry-breaking effects.
We introduce Adaptive Constrained Equivariance (ACE), a constrained optimization approach that starts with a flexible, non-equivariant model.
arXiv Detail & Related papers (2025-05-19T18:08:09Z) - Learning Broken Symmetries with Approximate Invariance [1.0485739694839669]
In many cases, the exact underlying symmetry is present only in an idealized dataset, and is broken in actual data.
Standard approaches, such as data augmentation or equivariant networks, fail to represent the nature of the full, broken symmetry.
We propose a learning model which balances the generality and performance of unconstrained networks with the rapid learning of constrained networks.
arXiv Detail & Related papers (2024-12-25T04:29:04Z) - Learning Infinitesimal Generators of Continuous Symmetries from Data [15.42275880523356]
We propose a novel symmetry learning algorithm based on transformations defined with one-parameter groups.
Our method is built upon minimal inductive biases, encompassing not only commonly utilized symmetries rooted in Lie groups but also symmetries derived from nonlinear generators.
arXiv Detail & Related papers (2024-10-29T08:28:23Z) - SymmetryLens: Unsupervised Symmetry Learning via Locality and Density Preservation [0.0]
We develop a new unsupervised symmetry learning method that starts with raw data and provides the minimal generator of an underlying Lie group of symmetries.
The method is able to learn the pixel translation operator from a dataset with only an approximate translation symmetry.
We demonstrate that this coupling between symmetry and locality, together with an optimization technique developed for entropy estimation, results in a stable system.
arXiv Detail & Related papers (2024-10-07T17:40:51Z) - The Empirical Impact of Neural Parameter Symmetries, or Lack Thereof [50.49582712378289]
We investigate the impact of neural parameter symmetries by introducing new neural network architectures.
We develop two methods, with some provable guarantees, of modifying standard neural networks to reduce parameter space symmetries.
Our experiments reveal several interesting observations on the empirical impact of parameter symmetries.
arXiv Detail & Related papers (2024-05-30T16:32:31Z) - First-principles construction of symmetry-informed quantum metrologies [0.0]
We develop a class of measurement strategies for quantities isomorphic to location parameters.
The resulting framework admits any parameter range, prior information, or state.
It reduces the search for good strategies to identifying which symmetry leaves a state of maximum ignorance invariant.
arXiv Detail & Related papers (2024-02-26T09:06:37Z) - Learning Layer-wise Equivariances Automatically using Gradients [66.81218780702125]
Convolutions encode equivariance symmetries into neural networks, leading to better generalisation performance.
However, such symmetries provide fixed hard constraints on the functions a network can represent, need to be specified in advance, and cannot be adapted.
Our goal is to allow flexible symmetry constraints that can automatically be learned from data using gradients.
arXiv Detail & Related papers (2023-10-09T20:22:43Z) - The Surprising Effectiveness of Equivariant Models in Domains with
Latent Symmetry [6.716931832076628]
We show that imposing symmetry constraints that do not exactly match the domain symmetry is very helpful in learning the true symmetry in the environment.
We demonstrate that an equivariant model can significantly outperform non-equivariant methods on domains with latent symmetries both in supervised learning and in reinforcement learning for robotic manipulation and control problems.
arXiv Detail & Related papers (2022-11-16T21:51:55Z) - MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood
Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data (see the classifier-based sketch after this list).
arXiv Detail & Related papers (2021-06-03T12:59:16Z) - Sampling asymmetric open quantum systems for artificial neural networks [77.34726150561087]
We present a hybrid sampling strategy which takes asymmetric properties explicitly into account, achieving fast convergence times and high scalability for asymmetric open systems.
We highlight the universal applicability of artificial neural networks.
arXiv Detail & Related papers (2020-12-20T18:25:29Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Support recovery and sup-norm convergence rates for sparse pivotal
estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root Lasso-type estimators.
arXiv Detail & Related papers (2020-01-15T16:11:04Z)
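
The MINIMALIST entry above frames simulation-based inference as amortized estimation of the likelihood-to-evidence ratio, which is tied to the mutual information between model parameters and simulated data. The sketch below illustrates the standard classifier-based way such a ratio estimator is commonly realized, reusing the toy damped-sinusoid setup from the earlier sketch; the simulator, prior, and network sizes are illustrative assumptions, not that paper's exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LEN = 128
t = torch.linspace(0.0, 1.0, LEN)

def sample_theta(n):
    # Hypothetical uniform prior over (frequency, damping).
    return torch.rand(n, 2) * torch.tensor([9.0, 4.0]) + torch.tensor([1.0, 0.5])

def simulate(theta):
    # Toy damped-sinusoid simulator (an assumption, mirroring the earlier sketch).
    freq, damp = theta[:, :1], theta[:, 1:]
    return torch.exp(-damp * t) * torch.sin(2 * torch.pi * freq * t) + 0.05 * torch.randn(theta.shape[0], LEN)

# Classifier over (data, parameters) pairs; its logit estimates log p(x|theta) - log p(x).
classifier = nn.Sequential(nn.Linear(LEN + 2, 128), nn.ReLU(),
                           nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

for _ in range(500):
    theta = sample_theta(256)
    x = simulate(theta)
    joint = classifier(torch.cat([x, theta], dim=1))                          # dependent pairs
    marginal = classifier(torch.cat([x, theta[torch.randperm(256)]], dim=1))  # shuffled pairs
    loss = (F.binary_cross_entropy_with_logits(joint, torch.ones_like(joint))
            + F.binary_cross_entropy_with_logits(marginal, torch.zeros_like(marginal)))
    opt.zero_grad()
    loss.backward()
    opt.step()

# At convergence, exp(classifier(cat([x, theta]))) approximates the likelihood-to-evidence
# ratio, so prior(theta) * exp(logit) gives an unnormalized posterior for an observed x.
```

This ratio-estimation route is an alternative to the conditional-flow posterior sketched earlier: instead of learning a density over parameters directly, it learns a discriminator whose logit recovers the ratio needed for posterior inference.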
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.