Hierarchical Dimensionless Learning (Hi-π): A physics-data hybrid-driven approach for discovering dimensionless parameter combinations
- URL: http://arxiv.org/abs/2507.18332v1
- Date: Thu, 24 Jul 2025 11:59:10 GMT
- Title: Hierarchical Dimensionless Learning (Hi-π): A physics-data hybrid-driven approach for discovering dimensionless parameter combinations
- Authors: Mingkun Xia, Haitao Lin, Weiwei Zhang
- Abstract summary: We introduce Hierarchical Dimensionless Learning (Hi-π), a physics-data hybrid-driven method that combines dimensional analysis and symbolic regression. For the Rayleigh-Bénard convection, this method accurately extracted two intrinsic dimensionless parameters. For the compressibility correction in subsonic flow, the method effectively extracts the classic compressibility correction formulation.
- Score: 10.007376792007518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dimensional analysis provides a universal framework for reducing physical complexity and revealing inherent laws. However, its application to high-dimensional systems still generates redundant dimensionless parameters, making it challenging to establish physically meaningful descriptions. Here, we introduce Hierarchical Dimensionless Learning (Hi-π), a physics-data hybrid-driven method that combines dimensional analysis and symbolic regression to automatically discover key dimensionless parameter combination(s). We applied this method to classic examples in various research fields of fluid mechanics. For the Rayleigh-Bénard convection, this method accurately extracted two intrinsic dimensionless parameters: the Rayleigh number and the Prandtl number, validating its unified representation advantage across multiscale data. For the viscous flows in a circular pipe, the method automatically discovered two optimal dimensionless parameters: the Reynolds number and the relative roughness, achieving a balance between accuracy and complexity. For the compressibility correction in subsonic flow, the method effectively extracted the classic compressibility correction formulation, while demonstrating its capability to discover hierarchical structural expressions through optimal parameter transformations.
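The dimensional-analysis step the abstract describes reduces to linear algebra: dimensionless groups correspond to null-space vectors of the variables' dimension matrix. As a minimal sketch (not the authors' implementation), the snippet below uses a hypothetical pipe-flow variable set (density rho, velocity U, diameter D, viscosity mu) and recovers the exponents of the Reynolds number:

```python
import numpy as np

# Dimension matrix: rows = base dimensions (M, L, T),
# columns = variables (rho, U, D, mu) with their dimensional exponents.
dim = np.array([
    [ 1,  0, 0,  1],   # mass:   rho ~ M,    mu ~ M
    [-3,  1, 1, -1],   # length: rho ~ L^-3, U ~ L, D ~ L, mu ~ L^-1
    [ 0, -1, 0, -1],   # time:   U ~ T^-1,   mu ~ T^-1
], dtype=float)

# Dimensionless groups are vectors q with dim @ q = 0 (Buckingham Pi).
# A null-space basis comes from the SVD's trailing right singular vectors.
_, s, Vt = np.linalg.svd(dim)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]                  # one basis vector per pi-group

# Scale the single group so its first exponent is +1 for readability.
pi_group = null_basis[0] / null_basis[0][0]
print(np.round(pi_group, 6))            # exponents of Re = rho*U*D/mu
```

With four variables and three independent base dimensions there is exactly one group; richer variable sets yield a higher-dimensional null space, which is where a data-driven selection such as Hi-π's symbolic regression comes in.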
Related papers
- Generalized Tensor-based Parameter-Efficient Fine-Tuning via Lie Group Transformations [50.010924231754856]
Adapting pre-trained foundation models for diverse downstream tasks is a core practice in artificial intelligence. To reduce the cost of full fine-tuning, parameter-efficient fine-tuning (PEFT) methods like LoRA have emerged and are becoming a growing research focus. We propose a generalization that extends matrix-based PEFT methods to higher-dimensional parameter spaces without compromising their structural properties.
arXiv Detail & Related papers (2025-04-01T14:36:45Z) - Shape-informed surrogate models based on signed distance function domain encoding [8.052704959617207]
We propose a non-intrusive method to build surrogate models that approximate the solution of parameterized partial differential equations (PDEs)
Our approach is based on the combination of two neural networks (NNs)
arXiv Detail & Related papers (2024-09-19T01:47:04Z) - Data-free Weight Compress and Denoise for Large Language Models [96.68582094536032]
We propose a novel approach termed Data-free Joint Rank-k Approximation for compressing the parameter matrices. We achieve a model pruning of 80% parameters while retaining 93.43% of the original performance without any calibration data.
arXiv Detail & Related papers (2024-02-26T05:51:47Z) - A conservative hybrid physics-informed neural network method for Maxwell-Ampère-Nernst-Planck equations [22.81295238376119]
The proposed hybrid algorithm provides an automated means to determine a proper approximation for dummy variables.
The original method is validated for 2-dimensional problems.
The proposed method can be readily generalised to cases with one spatial dimension.
arXiv Detail & Related papers (2023-12-10T13:58:41Z) - Automatic Parameterization for Aerodynamic Shape Optimization via Deep Geometric Learning [60.69217130006758]
We propose two deep learning models that fully automate shape parameterization for aerodynamic shape optimization.
Both models are optimized to parameterize via deep geometric learning to embed human prior knowledge into learned geometric patterns.
We perform shape optimization experiments on 2D airfoils and discuss the applicable scenarios for the two models.
arXiv Detail & Related papers (2023-05-03T13:45:40Z) - Deep learning extraction of band structure parameters from density of states: a case study on trilayer graphene [45.61296767255256]
A key requirement for a comprehensive quantitative theory is the accurate determination of materials' band structure parameters.
We introduce a general framework to derive band structure parameters from experimental data using deep neural networks.
arXiv Detail & Related papers (2022-10-12T15:24:42Z) - Dimensionally Consistent Learning with Buckingham Pi [4.446017969073817]
In the absence of governing equations, dimensional analysis is a robust technique for extracting insights and finding symmetries in physical systems.
We propose an automated approach using the symmetric and self-similar structure of available measurement data to discover dimensionless groups.
We develop three data-driven techniques that use the Buckingham Pi theorem as a constraint.
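One way to use the Buckingham Pi theorem as a constraint, sketched here under the same illustrative pipe-flow variables (rho, U, D, mu; not taken from the paper), is to project candidate exponent vectors onto the null space of the dimension matrix, so every iterate of a data-driven search stays exactly dimensionless:

```python
import numpy as np

# Dimension matrix over (M, L, T) for illustrative variables (rho, U, D, mu).
dim = np.array([[ 1,  0, 0,  1],
                [-3,  1, 1, -1],
                [ 0, -1, 0, -1]], dtype=float)

# Orthonormal null-space basis from the SVD:
# the columns of N span {q : dim @ q = 0}.
_, s, Vt = np.linalg.svd(dim)
N = Vt[int(np.sum(s > 1e-10)):].T

# Projecting arbitrary candidate exponents onto that subspace enforces
# the Buckingham Pi constraint exactly, whatever an optimizer proposes.
q_raw = np.array([0.9, 1.2, 1.1, -0.8])
q = N @ (N.T @ q_raw)
print(np.allclose(dim @ q, 0))   # True: the projected group is dimensionless
```

The projection is cheap (one matrix-vector product per candidate) and composes with any regression loop, which is presumably why null-space structure is a natural place to impose dimensional consistency.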
arXiv Detail & Related papers (2022-02-09T17:58:00Z) - Manifold learning-based polynomial chaos expansions for high-dimensional surrogate models [0.0]
We introduce a manifold learning-based method for uncertainty quantification (UQ) in complex systems.
The proposed method is able to achieve highly accurate approximations which ultimately lead to the significant acceleration of UQ tasks.
arXiv Detail & Related papers (2021-07-21T00:24:15Z) - Bayesian multiscale deep generative model for the solution of high-dimensional inverse problems [0.0]
A novel multiscale Bayesian inference approach is introduced based on deep probabilistic generative models.
The method allows high-dimensional parameter estimation while exhibiting stability, efficiency and accuracy.
arXiv Detail & Related papers (2021-02-04T11:47:21Z) - Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Deep Dimension Reduction for Supervised Representation Learning [51.10448064423656]
We propose a deep dimension reduction approach to learning representations with essential characteristics.
The proposed approach is a nonparametric generalization of the sufficient dimension reduction method.
We show that the estimated deep nonparametric representation is consistent in the sense that its excess risk converges to zero.
arXiv Detail & Related papers (2020-06-10T14:47:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.