Intrinsic Dimensionality of Fermi-Pasta-Ulam-Tsingou High-Dimensional Trajectories Through Manifold Learning
- URL: http://arxiv.org/abs/2411.02058v1
- Date: Mon, 04 Nov 2024 13:01:13 GMT
- Title: Intrinsic Dimensionality of Fermi-Pasta-Ulam-Tsingou High-Dimensional Trajectories Through Manifold Learning
- Authors: Gionni Marchetti
- Abstract summary: A data-driven approach is proposed to infer the intrinsic dimensions $m^{\ast}$ of the high-dimensional trajectories of the Fermi-Pasta-Ulam-Tsingou model.
We find strong evidence suggesting that the datapoints lie near or on a curved low-dimensional manifold for weak nonlinearities.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A data-driven approach based on unsupervised machine learning is proposed to infer the intrinsic dimensions $m^{\ast}$ of the high-dimensional trajectories of the Fermi-Pasta-Ulam-Tsingou (FPUT) model. Principal component analysis (PCA) is applied to trajectory data, consisting of $n_s = 4,000,000$ datapoints, of the FPUT $\beta$ model with $N = 32$ coupled oscillators, revealing a critical relationship between $m^{\ast}$ and the model's nonlinear strength. For weak nonlinearities, $m^{\ast} \ll n$, where $n = 2N$. In contrast, for strong nonlinearities, $m^{\ast} \rightarrow n - 1$, consistent with the ergodic hypothesis. Furthermore, one of the potential limitations of PCA is addressed through an analysis with t-distributed stochastic neighbor embedding ($t$-SNE). Accordingly, we find strong evidence suggesting that the datapoints lie near or on a curved low-dimensional manifold for weak nonlinearities.
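The pipeline described in the abstract (integrate the FPUT $\beta$ chain, collect phase-space datapoints, read $m^{\ast}$ off the PCA spectrum) can be sketched in a few lines. The sketch below is not the authors' code: the velocity-Verlet integrator, the lowest-mode initial condition, the shortened trajectory length, and the 99% explained-variance cutoff used to pick $m^{\ast}$ are all illustrative assumptions, since the abstract does not state the paper's exact criterion.

```python
# Minimal sketch (not the paper's code) of PCA-based intrinsic-dimension
# estimation for FPUT-beta trajectories. Assumptions: velocity-Verlet
# integration, lowest-normal-mode initial excitation, and a 99%
# explained-variance cutoff for m*.
import numpy as np
from sklearn.decomposition import PCA

N, beta = 32, 1.0            # oscillators and nonlinear strength
dt, n_steps = 0.05, 100_000  # far fewer datapoints than the paper's n_s = 4,000,000

def accel(q):
    # fixed boundaries: pad so that q_0 = q_{N+1} = 0
    d = np.diff(np.concatenate(([0.0], q, [0.0])))  # N+1 bond stretches
    f = d + beta * d**3                             # linear + quartic bond tension
    return f[1:] - f[:-1]                           # net force on each oscillator

k = np.arange(1, N + 1)
q = np.sin(np.pi * k / (N + 1))                     # excite the lowest normal mode
p = np.zeros(N)

traj = np.empty((n_steps, 2 * N))                   # datapoints live in n = 2N dimensions
a = accel(q)
for t in range(n_steps):                            # velocity-Verlet integrator
    p_half = p + 0.5 * dt * a
    q = q + dt * p_half
    a = accel(q)
    p = p_half + 0.5 * dt * a
    traj[t, :N], traj[t, N:] = q, p

evr = PCA().fit(traj).explained_variance_ratio_
m_star = int(np.searchsorted(np.cumsum(evr), 0.99)) + 1
print(f"estimated m* = {m_star} (ambient n = {2 * N})")
```

Sweeping `beta` from weak to strong values should reproduce the qualitative trend reported above, with $m^{\ast} \ll n$ for weak nonlinearity and $m^{\ast} \to n - 1$ for strong nonlinearity; replacing the PCA step with `sklearn.manifold.TSNE` gives the complementary nonlinear-embedding view used to probe manifold curvature.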
Related papers
- Field digitization scaling in a $\mathbb{Z}_N \subset U(1)$ symmetric model [0.0]
We propose to analyze field digitization by interpreting the parameter $N$ as a coupling in the renormalization group sense. Using effective field theory, we derive generalized scaling hypotheses involving the field-digitization (FD) parameter $N$. We analytically prove that our calculations for the 2D classical-statistical $\mathbb{Z}_N$ clock model are directly related to the quantum physics in the ground state of a (2+1)D $\mathbb{Z}_N$ lattice gauge theory.
arXiv Detail & Related papers (2025-07-30T18:00:02Z) - Random Matrix Theory for Deep Learning: Beyond Eigenvalues of Linear Models [51.85815025140659]
Modern Machine Learning (ML) and Deep Neural Networks (DNNs) often operate on high-dimensional data. In particular, the proportional regime where the data dimension, sample size, and number of model parameters are all large gives rise to novel and sometimes counterintuitive behaviors. This paper extends traditional Random Matrix Theory (RMT) beyond eigenvalue-based analysis of linear models to address the challenges posed by nonlinear ML models.
arXiv Detail & Related papers (2025-06-16T06:54:08Z) - Robustness of Nonlinear Representation Learning [60.15898117103069]
We study the problem of unsupervised representation learning in slightly misspecified settings.
We show that the mixing can be identified up to linear transformations and small errors.
Those results are a step towards identifiability results for unsupervised representation learning for real-world data.
arXiv Detail & Related papers (2025-03-19T15:57:03Z) - Large N vector models in the Hamiltonian framework [0.0]
We first present the method in the simpler setting of a quantum mechanical system with quartic interactions. We then apply these techniques to the $O(N)$ model in $2+1$ and $3+1$ dimensions. We recover various known results, such as the gap equation determining the ground state of the system.
arXiv Detail & Related papers (2025-02-12T00:18:02Z) - Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) framework, necessarily require $\Omega(d^{k^{\star}/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - High-probability Convergence Bounds for Nonlinear Stochastic Gradient Descent Under Heavy-tailed Noise [59.25598762373543]
We establish high-probability convergence guarantees for nonlinear stochastic gradient descent on streaming data in the presence of heavy-tailed noise.
We demonstrate analytically and numerically how these guarantees inform the preferred choice of setting for a given problem.
arXiv Detail & Related papers (2023-10-28T18:53:41Z) - Effective Minkowski Dimension of Deep Nonparametric Regression: Function Approximation and Statistical Theories [70.90012822736988]
Existing theories on deep nonparametric regression have shown that when the input data lie on a low-dimensional manifold, deep neural networks can adapt to intrinsic data structures.
This paper introduces a relaxed assumption that input data are concentrated around a subset of $\mathbb{R}^d$ denoted by $\mathcal{S}$, and that the intrinsic dimension of $\mathcal{S}$ can be characterized by a new complexity notion -- the effective Minkowski dimension.
arXiv Detail & Related papers (2023-06-26T17:13:31Z) - High-Dimensional Smoothed Entropy Estimation via Dimensionality Reduction [14.53979700025531]
We consider the estimation of the differential entropy $h(X+Z)$ via $n$ independently and identically distributed samples of $X$.
Under the absolute-error loss, the above problem has a parametric estimation rate of $\frac{c^D}{\sqrt{n}}$, which is exponential in the data dimension $D$.
We overcome this exponential sample complexity by projecting $X$ to a low-dimensional space via principal component analysis (PCA) before the entropy estimation.
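A hypothetical rendering of that recipe is sketched below: reduce the smoothed samples with PCA, then estimate differential entropy in the low-dimensional space. The Kozachenko-Leonenko $k$-NN estimator, the synthetic near-low-dimensional data, and all parameter choices are stand-ins for the example, not necessarily what the paper uses.

```python
# Hypothetical sketch: PCA projection before differential-entropy estimation.
# The Kozachenko-Leonenko k-NN estimator is a standard stand-in here.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln
from sklearn.decomposition import PCA

def kl_entropy(x, k=3):
    """Kozachenko-Leonenko differential-entropy estimate in nats."""
    n, d = x.shape
    r = cKDTree(x).query(x, k + 1)[0][:, -1]  # distance to k-th nearest neighbor
    log_unit_ball = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(r))

rng = np.random.default_rng(0)
n, D, d_low = 5000, 50, 3
X = rng.standard_normal((n, d_low)) @ rng.standard_normal((d_low, D))  # X near a 3-dim subspace
Z = 0.1 * rng.standard_normal((n, D))                                  # Gaussian smoothing noise

Y = PCA(n_components=d_low).fit_transform(X + Z)  # project, then estimate
print(f"entropy estimate after PCA to {d_low} dims: {kl_entropy(Y):.3f} nats")
```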
arXiv Detail & Related papers (2023-05-08T13:51:48Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - FeDXL: Provable Federated Learning for Deep X-Risk Optimization [105.17383135458897]
We tackle a novel federated learning (FL) problem for optimizing a family of X-risks, to which no existing algorithms are applicable.
The challenges of designing an FL algorithm for X-risks lie in the non-decomposability of the objective over multiple machines and the interdependency between different machines.
arXiv Detail & Related papers (2022-10-26T00:23:36Z) - Non-Iterative Recovery from Nonlinear Observations using Generative Models [14.772379476524407]
We assume that the signal lies in the range of an $L$-Lipschitz continuous generative model with bounded $k$-dimensional inputs.
Our reconstruction method is non-iterative (though approximating the projection step may use an iterative procedure) and highly efficient.
arXiv Detail & Related papers (2022-05-31T12:34:40Z) - Generative Principal Component Analysis [47.03792476688768]
We study the problem of principal component analysis with generative modeling assumptions.
The key assumption is that the underlying signal lies near the range of an $L$-Lipschitz continuous generative model with bounded $k$-dimensional inputs.
We propose a quadratic estimator, and show that it enjoys a statistical rate of order $\sqrt{\frac{k \log L}{m}}$, where $m$ is the number of samples.
arXiv Detail & Related papers (2022-03-18T01:48:16Z) - Single Trajectory Nonparametric Learning of Nonlinear Dynamics [8.438421942654292]
Given a single trajectory of a dynamical system, we analyze the performance of the nonparametric least squares estimator (LSE).
We leverage recently developed information-theoretic methods to establish the optimality of the LSE for nonparametric hypothesis classes.
We specialize our results to a number of scenarios of practical interest, such as Lipschitz dynamics, generalized linear models, and dynamics described by functions in certain classes of Reproducing Kernel Hilbert Spaces (RKHS).
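As a toy illustration of such a single-trajectory nonparametric LSE, the sketch below fits one observed trajectory with random Fourier features, one concrete RKHS-style hypothesis class; the dynamics, noise level, feature count, and ridge term are all invented for the example.

```python
# Illustrative single-trajectory nonparametric least squares (not the paper's
# setup): random Fourier features approximate a Gaussian-kernel RKHS.
import numpy as np

rng = np.random.default_rng(1)

def step(x):                       # "unknown" nonlinear dynamics to recover
    return 0.9 * np.sin(x) + 0.1 * x

# one observed trajectory: x_{t+1} = step(x_t) + noise
T = 2000
x = np.empty(T)
x[0] = 1.0
for t in range(T - 1):
    x[t + 1] = step(x[t]) + 0.05 * rng.standard_normal()

# random Fourier features phi(x) = sqrt(2/m) cos(w x + b)
m = 200
w = rng.standard_normal(m)
b = rng.uniform(0, 2 * np.pi, m)
phi = lambda z: np.sqrt(2 / m) * np.cos(np.outer(z, w) + b)

# least squares estimator (small ridge term for numerical stability)
A, y = phi(x[:-1]), x[1:]
theta = np.linalg.solve(A.T @ A + 1e-6 * np.eye(m), A.T @ y)

x_test = np.linspace(-2, 2, 5)
print(np.c_[step(x_test), phi(x_test) @ theta])  # true vs. fitted next state
```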
arXiv Detail & Related papers (2022-02-16T19:38:54Z) - Fast quantum state discrimination with nonlinear PTP channels [0.0]
We investigate models of nonlinear quantum computation based on deterministic positive trace-preserving (PTP) channels and evolution equations.
PTP channels support Bloch ball torsion and other distortions studied previously, where it has been shown that such nonlinearity can be used to increase the separation between a pair of close qubit states.
We argue that this operation can be made robust to noise by using dissipation to induce a bifurcation to a novel phase where a pair of attracting fixed points create an intrinsically fault-tolerant nonlinear state discriminator.
arXiv Detail & Related papers (2021-11-10T22:42:37Z) - Tensor network simulation of the (1+1)-dimensional $O(3)$ nonlinear $\sigma$-model with $\theta=\pi$ term [17.494746371461694]
We perform a tensor network simulation of the (1+1)-dimensional $O(3)$ nonlinear $\sigma$-model with $\theta=\pi$ term.
Within the Hamiltonian formulation, this field theory emerges as the finite-temperature partition function of a modified quantum rotor model decorated with magnetic monopoles.
arXiv Detail & Related papers (2021-09-23T12:17:31Z) - Robust Online Control with Model Misspecification [96.23493624553998]
We study online control of an unknown nonlinear dynamical system with model misspecification.
Our study focuses on robustness, which measures how much deviation from the assumed linear approximation can be tolerated.
arXiv Detail & Related papers (2021-07-16T07:04:35Z) - Fundamental tradeoffs between memorization and robustness in random
features and neural tangent regimes [15.76663241036412]
We prove, for a large class of activation functions, that if the model memorizes even a fraction of the training data, then its Sobolev seminorm is lower-bounded.
Experiments reveal, for the first time, a multiple-descent phenomenon in the robustness of the min-norm interpolator.
arXiv Detail & Related papers (2021-06-04T17:52:50Z) - Existence of the first magic angle for the chiral model of bilayer
graphene [77.34726150561087]
Tarnopolsky-Kruchkov-Vishwanath (TKV) have proved that for inverse twist angles $\alpha$ the effective Fermi velocity at the moiré $K$ point vanishes.
We give a proof that the Fermi velocity vanishes for at least one $\alpha$ near $\alpha \approx 0.586$.
arXiv Detail & Related papers (2021-04-13T20:37:00Z) - Quantum metrology with precision reaching beyond-$1/N$ scaling through
$N$-probe entanglement generating interactions [12.257762263903317]
We propose a quantum measurement scenario based on nonlinear interactions of an $N$-probe entanglement-generating form.
This scenario provides an enhanced precision scaling of $D^{-N}/(N-1)!$ with $D > 2$ a tunable parameter.
arXiv Detail & Related papers (2021-02-14T05:50:05Z) - Stochastic Approximation for Online Tensorial Independent Component
Analysis [98.34292831923335]
Independent component analysis (ICA) has been a popular dimension reduction tool in statistical machine learning and signal processing.
In this paper, we present a by-product online tensorial algorithm that estimates each independent component.
arXiv Detail & Related papers (2020-12-28T18:52:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.