Adding machine learning within Hamiltonians: Renormalization group
transformations, symmetry breaking and restoration
- URL: http://arxiv.org/abs/2010.00054v2
- Date: Wed, 10 Feb 2021 16:52:48 GMT
- Title: Adding machine learning within Hamiltonians: Renormalization group
transformations, symmetry breaking and restoration
- Authors: Dimitrios Bachtis, Gert Aarts, Biagio Lucini
- Abstract summary: We include the predictive function of a neural network, designed for phase classification, as a conjugate variable coupled to an external field within the Hamiltonian of a system.
Results show that the field can induce an order-disorder phase transition by breaking or restoring the symmetry.
We conclude by discussing how the method provides an essential step toward bridging machine learning and physics.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a physical interpretation of machine learning functions, opening
up the possibility to control properties of statistical systems via the
inclusion of these functions in Hamiltonians. In particular, we include the
predictive function of a neural network, designed for phase classification, as
a conjugate variable coupled to an external field within the Hamiltonian of a
system. Results in the two-dimensional Ising model show that the field can
induce an order-disorder phase transition by breaking or restoring the
symmetry, in contrast with the field of the conventional order parameter which
causes explicit symmetry breaking. The critical behavior is then studied by
proposing a Hamiltonian-agnostic reweighting approach and forming a
renormalization group mapping on quantities derived from the neural network.
Accurate estimates of the critical point and of the critical exponents related
to the operators that govern the divergence of the correlation length are
provided. We conclude by discussing how the method provides an essential step
toward bridging machine learning and physics.
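As a rough illustration of the construction in the abstract, here is a minimal sketch (not the authors' code) of Metropolis sampling for a two-dimensional Ising Hamiltonian augmented with a field coupled to a classifier's predictive function, H(s) = -J sum_<ij> s_i s_j - h f(s). The function `f` below is a hypothetical stand-in (a smooth, Z2-even function of the magnetization) for a trained neural network, and `reweight` only sketches the idea of extrapolating observables in the conjugate coupling h from stored values of f.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(spins):
    # Hypothetical stand-in for a trained phase classifier's output.
    # It is even in the magnetization, so the coupled field can break or
    # restore the Z2 symmetry rather than breaking it explicitly.
    m = spins.mean()
    return np.tanh(4.0 * m) ** 2

def neighbour_sum(spins, i, j):
    L = spins.shape[0]
    return (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
            + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])

def metropolis_sweep(spins, beta, h, J=1.0):
    # One sweep of single-spin Metropolis updates for
    # H(s) = -J * sum_<ij> s_i s_j - h * f(s).
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        dE = 2.0 * J * spins[i, j] * neighbour_sum(spins, i, j)
        f_old = f(spins)
        spins[i, j] *= -1                    # trial flip
        dE -= h * (f(spins) - f_old)         # change from the -h*f(s) term
        if dE > 0.0 and rng.random() >= np.exp(-beta * dE):
            spins[i, j] *= -1                # reject: undo the flip
    return spins

def reweight(obs, f_vals, beta, h_from, h_to):
    # Reweighting sketch: only -h*f(s) enters the Boltzmann weight, so
    # samples drawn at h_from can be reweighted to h_to from stored f
    # values alone, without further knowledge of the Hamiltonian.
    w = np.exp(beta * (h_to - h_from) * np.asarray(f_vals))
    return np.sum(w * np.asarray(obs)) / np.sum(w)

L = 16
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(100):
    metropolis_sweep(spins, beta=0.44, h=0.5)
print("magnetization:", spins.mean(), "f(s):", f(spins))
```

Because the stand-in f is even under a global spin flip, sweeping h changes the relative weight of ordered and disordered configurations without explicitly breaking the symmetry, which is the qualitative behaviour the abstract contrasts with a field coupled to the order parameter.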
Related papers
- Learning Generalized Hamiltonians using fully Symplectic Mappings
Hamiltonian systems have the important property of being conservative, that is, energy is conserved throughout the evolution.
In particular, Hamiltonian Neural Networks have emerged as a mechanism for incorporating structural inductive bias into the NN model.
We show that symplectic schemes are robust to noise and provide a good approximation of the system Hamiltonian when the state variables are sampled from noisy observations (see the leapfrog sketch after this list).
arXiv Detail & Related papers (2024-09-17T12:45:49Z)
- Relative Representations: Topological and Geometric Perspectives
Relative representations are an established approach to zero-shot model stitching.
We introduce a normalization procedure in the relative transformation, resulting in invariance to non-isotropic rescalings and permutations.
We also propose to deploy topological densification when fine-tuning relative representations, a topological regularization loss encouraging clustering within classes.
arXiv Detail & Related papers (2024-09-17T08:09:22Z)
- Sparse identification of quasipotentials via a combined data-driven method
We leverage machine learning via the combination of two data-driven techniques, namely a neural network and a sparse regression algorithm, to obtain symbolic expressions of quasipotential functions.
We show that our approach discovers a parsimonious quasipotential equation for an archetypal model with a known exact quasipotential and for the dynamics of a nanomechanical resonator.
arXiv Detail & Related papers (2024-07-06T11:27:52Z)
- Coarse-Graining Hamiltonian Systems Using WSINDy
We show that WSINDy can successfully identify a reduced Hamiltonian system in the presence of large intrinsic oscillations.
WSINDy naturally preserves the Hamiltonian structure by restricting to a trial basis of Hamiltonian vector fields.
We also provide a contribution to averaging theory by proving that first-order averaging at the level of vector fields preserves Hamiltonian structure in nearly-periodic Hamiltonian systems.
arXiv Detail & Related papers (2023-10-09T17:20:04Z)
- Capturing dynamical correlations using implicit neural representations
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Phenomenology of Double Descent in Finite-Width Neural Networks
Double descent describes how model behaviour differs between the under- and over-parameterized regimes.
We use influence functions to derive suitable expressions of the population loss and its lower bound.
Building on our analysis, we investigate how the loss function affects double descent.
arXiv Detail & Related papers (2022-03-14T17:39:49Z)
- Decimation technique for open quantum systems: a case study with driven-dissipative bosonic chains
Unavoidable coupling of quantum systems to external degrees of freedom leads to dissipative (non-unitary) dynamics.
We introduce a method to deal with these systems based on the calculation of the (dissipative) lattice Green's function.
We illustrate the power of this method with several examples of driven-dissipative bosonic chains of increasing complexity.
arXiv Detail & Related papers (2022-02-15T19:00:09Z)
- Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA
Independent component analysis (ICA) refers to an ensemble of methods which formalize the goal of recovering latent variables and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Emergence of a finite-size-scaling function in the supervised learning of the Ising phase transition
We investigate the connection between the supervised learning of the binary phase classification in the ferromagnetic Ising model and the standard finite-size-scaling theory of the second-order phase transition.
We show that a single free parameter suffices to describe the data-driven emergence of the universal finite-size-scaling function in the network output.
arXiv Detail & Related papers (2020-10-01T12:34:12Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
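The "Learning Generalized Hamiltonians using fully Symplectic Mappings" entry above refers to the following sketch: a hypothetical illustration (not the paper's code) of why symplectic schemes suit learned Hamiltonians. A leapfrog step needs only gradients of H, so an analytic H stands in here for a trained Hamiltonian Neural Network, whose autodiff gradients would replace the finite differences.

```python
import numpy as np

def H(q, p):
    # Analytic stand-in for a learned Hamiltonian (harmonic oscillator).
    return 0.5 * p ** 2 + 0.5 * q ** 2

def grad_H(q, p, eps=1e-5):
    # Central finite differences; an HNN would supply exact gradients.
    dHdq = (H(q + eps, p) - H(q - eps, p)) / (2 * eps)
    dHdp = (H(q, p + eps) - H(q, p - eps)) / (2 * eps)
    return dHdq, dHdp

def leapfrog(q, p, dt=0.1, steps=1000):
    # Kick-drift-kick leapfrog: a symplectic map, so the energy of the
    # (learned) Hamiltonian stays bounded rather than drifting as it
    # would under a generic integrator such as forward Euler.
    traj = [(q, p)]
    for _ in range(steps):
        p -= 0.5 * dt * grad_H(q, p)[0]   # half kick
        q += dt * grad_H(q, p)[1]         # drift
        p -= 0.5 * dt * grad_H(q, p)[0]   # half kick
        traj.append((q, p))
    return traj

traj = leapfrog(1.0, 0.0)
print("energy drift:", H(*traj[-1]) - H(*traj[0]))  # remains small/bounded
```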
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.