Dimensionally Consistent Learning with Buckingham Pi
- URL: http://arxiv.org/abs/2202.04643v1
- Date: Wed, 9 Feb 2022 17:58:00 GMT
- Title: Dimensionally Consistent Learning with Buckingham Pi
- Authors: Joseph Bakarji, Jared Callaham, Steven L. Brunton, J. Nathan Kutz
- Abstract summary: In the absence of governing equations, dimensional analysis is a robust technique for extracting insights and finding symmetries in physical systems.
We propose an automated approach using the symmetric and self-similar structure of available measurement data to discover dimensionless groups.
We develop three data-driven techniques that use the Buckingham Pi theorem as a constraint.
- Score: 4.446017969073817
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the absence of governing equations, dimensional analysis is a robust
technique for extracting insights and finding symmetries in physical systems.
Given measurement variables and parameters, the Buckingham Pi theorem provides
a procedure for finding a set of dimensionless groups that spans the solution
space, although this set is not unique. We propose an automated approach using
the symmetric and self-similar structure of available measurement data to
discover the dimensionless groups that best collapse this data to a lower
dimensional space according to an optimal fit. We develop three data-driven
techniques that use the Buckingham Pi theorem as a constraint: (i) a
constrained optimization problem with a non-parametric input-output fitting
function, (ii) a deep learning algorithm (BuckiNet) that projects the input
parameter space to a lower dimension in the first layer, and (iii) a technique
based on sparse identification of nonlinear dynamics (SINDy) to discover
dimensionless equations whose coefficients parameterize the dynamics. We
explore the accuracy, robustness and computational complexity of these methods
as applied to three example problems: a bead on a rotating hoop, a laminar
boundary layer, and Rayleigh-Bénard convection.
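The core constraint shared by all three methods can be stated in a few lines of linear algebra: every null-space vector of the dimension matrix yields the exponents of one dimensionless group. A minimal sketch for the bead-on-a-rotating-hoop example (the variable choice ω, R, g is our illustration, not code from the paper):

```python
import sympy as sp

# Dimension matrix: rows are base dimensions (M, L, T),
# columns are the variables omega [1/s], R [m], g [m/s^2].
D = sp.Matrix([
    [ 0, 0,  0],   # mass
    [ 0, 1,  1],   # length
    [-1, 0, -2],   # time
])

# Buckingham Pi: each null-space vector holds the exponents of one
# dimensionless group, Pi = omega**x0 * R**x1 * g**x2.
for v in D.nullspace():
    print(v.T)  # [-2, -1, 1]  ->  Pi = g / (omega**2 * R)
```

Any nonzero multiple of the recovered vector, e.g. the inverse group ω²R/g, is equally valid; this non-uniqueness is exactly what the data-driven fitting in (i)-(iii) resolves by selecting the groups that best collapse the measurements.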
Related papers
- Shape-informed surrogate models based on signed distance function domain encoding [8.052704959617207]
We propose a non-intrusive method to build surrogate models that approximate the solution of parameterized partial differential equations (PDEs).
Our approach is based on the combination of two neural networks (NNs).
arXiv Detail & Related papers (2024-09-19T01:47:04Z)
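The surrogate entry above encodes the geometry through its signed distance function (SDF); a minimal, hypothetical sketch of such an encoding for a circular domain (the grid and shape are our own; the two-network surrogate itself is not reproduced):

```python
import numpy as np

def sdf_circle(points, center, radius):
    """Signed distance to a circle: negative inside, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Sample the SDF on a grid to get a fixed-size encoding of the domain shape.
xs = np.linspace(-1.0, 1.0, 32)
grid = np.stack(np.meshgrid(xs, xs, indexing="ij"), axis=-1).reshape(-1, 2)
encoding = sdf_circle(grid, center=np.array([0.0, 0.0]), radius=0.5)
print(encoding.shape)  # (1024,) feature vector describing the geometry
```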
- On Probabilistic Embeddings in Optimal Dimension Reduction [1.2085509610251701]
Dimension reduction algorithms are a crucial part of many data science pipelines.
Despite their wide utilization, many non-linear dimension reduction algorithms are poorly understood from a theoretical perspective.
arXiv Detail & Related papers (2024-08-05T12:46:21Z)
- Data-free Weight Compress and Denoise for Large Language Models [101.53420111286952]
We propose a novel approach termed Data-free Joint Rank-k Approximation for compressing the parameter matrices.
We achieve a model pruning of 80% of the parameters while retaining 93.43% of the original performance without any calibration data.
arXiv Detail & Related papers (2024-02-26T05:51:47Z)
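The exact joint rank-k procedure of the compression entry above is not given here; as a hedged stand-in for the underlying idea, a plain truncated SVD yields the best rank-k approximation of a weight matrix (Eckart-Young):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))   # stand-in for a parameter matrix

U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 64
W_k = (U[:, :k] * s[:k]) @ Vt[:k]     # best rank-k approximation of W

# Storing U[:, :k]*s[:k] and Vt[:k] needs k*(m+n) numbers instead of m*n.
m, n = W.shape
print(f"compression: {k * (m + n) / (m * n):.0%} of original parameters")
print(f"relative error: {np.linalg.norm(W - W_k) / np.linalg.norm(W):.3f}")
```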
- An evaluation framework for dimensionality reduction through sectional curvature [59.40521061783166]
In this work, we aim to introduce the first fully unsupervised dimensionality reduction performance metric.
To test its feasibility, this metric has been used to evaluate the performance of the most commonly used dimension reduction algorithms.
A new parameterized problem instance generator has been constructed in the form of a function generator.
arXiv Detail & Related papers (2023-03-17T11:59:33Z)
- Linearized Wasserstein dimensionality reduction with approximation guarantees [65.16758672591365]
LOT Wassmap is a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
We show that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size.
We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.
arXiv Detail & Related papers (2023-02-14T22:12:16Z)
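LOT Wassmap itself is not reproduced, but the linearization trick is easiest to see in one dimension, where the optimal transport map is sorted-sample matching: each distribution becomes a plain vector, and Euclidean distances between vectors recover 2-Wasserstein distances with no pairwise OT solves (a toy sketch under these 1-D assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Three empirical distributions: shifted Gaussians.
samples = [rng.normal(loc=m, scale=1.0, size=n) for m in (0.0, 1.0, 3.0)]

def lot_embed(x):
    # In 1-D the Monge map to a fixed reference matches sorted samples,
    # so sorting gives the linearized embedding (up to a shared reference term).
    return np.sort(x)

emb = np.stack([lot_embed(x) for x in samples])

# Euclidean distance between embeddings = empirical 2-Wasserstein distance.
w2_01 = np.linalg.norm(emb[0] - emb[1]) / np.sqrt(n)
print(f"W2 between N(0,1) and N(1,1) ~= {w2_01:.2f}  (exact value: 1.0)")
```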
- A survey of unsupervised learning methods for high-dimensional uncertainty quantification in black-box-type problems [0.0]
We construct surrogate models for uncertainty quantification (UQ) on complex partial differential equations (PDEs).
The curse of dimensionality can be addressed by first encoding the high-dimensional inputs into a lower-dimensional subspace with suitable unsupervised learning techniques.
We demonstrate both the advantages and limitations of the manifold PCE (m-PCE) model and conclude that it provides a cost-effective alternative to deep learning-based surrogates.
arXiv Detail & Related papers (2022-02-09T16:33:40Z)
- Manifold learning-based polynomial chaos expansions for high-dimensional surrogate models [0.0]
We introduce a manifold learning-based method for uncertainty quantification (UQ) in systems describing complex spatiotemporal processes.
The proposed method is able to achieve highly accurate approximations which ultimately lead to the significant acceleration of UQ tasks.
arXiv Detail & Related papers (2021-07-21T00:24:15Z)
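As a hedged sketch of the PCE idea behind the two UQ entries above (plain PCA stands in for the manifold-learning encoder; the toy data and dimensions are our own):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(2)

# Inputs with a hidden two-dimensional structure embedded in 20 dimensions.
latent = rng.standard_normal((500, 2))
X = latent @ rng.standard_normal((2, 20))
y = np.sin(latent[:, 0]) + 0.5 * latent[:, 1] ** 2   # black-box response

# Encode the inputs into a low-dimensional subspace (PCA stands in for the
# manifold-learning step of m-PCE).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                                    # two latent coordinates

# Polynomial chaos surrogate: degree-3 probabilists' Hermite basis per
# latent coordinate, coefficients fit by ordinary least squares.
Phi = np.hstack([hermevander(Z[:, j], 3) for j in range(Z.shape[1])])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("training RMSE:", np.sqrt(np.mean((Phi @ coef - y) ** 2)))
```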
- Nonlinear Level Set Learning for Function Approximation on Sparse Data with Applications to Parametric Differential Equations [6.184270985214254]
"Nonlinear Level set Learning" (NLL) approach is presented for the pointwise prediction of functions which have been sparsely sampled.
The proposed algorithm effectively reduces the input dimension to the theoretical lower bound with minor accuracy loss.
Experiments and applications are presented which compare this modified NLL with the original NLL and the Active Subspaces (AS) method.
arXiv Detail & Related papers (2021-04-29T01:54:05Z)
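The Active Subspaces (AS) baseline mentioned in the NLL entry has a compact classical form: eigendecompose the average outer product of gradients and keep the dominant eigenvectors (a minimal sketch on a toy function, not the modified NLL):

```python
import numpy as np

rng = np.random.default_rng(3)
w_true = np.array([0.7, 0.3, 0.0, 0.0, 0.0])   # f varies along one direction

def grad_f(X):
    # f(x) = exp(w . x)  =>  grad f(x) = f(x) * w
    return np.exp(X @ w_true)[:, None] * w_true

X = rng.uniform(-1.0, 1.0, size=(1000, 5))
G = grad_f(X)
C = G.T @ G / len(X)                 # average gradient outer product
eigvals, eigvecs = np.linalg.eigh(C) # ascending eigenvalues

# The dominant eigenvector spans the (here one-dimensional) active subspace.
print("active direction:", np.round(eigvecs[:, -1], 2))
```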
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
- Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)