HomoGenius: a Foundation Model of Homogenization for Rapid Prediction of Effective Mechanical Properties using Neural Operators
- URL: http://arxiv.org/abs/2404.07943v1
- Date: Mon, 18 Mar 2024 06:47:35 GMT
- Title: HomoGenius: a Foundation Model of Homogenization for Rapid Prediction of Effective Mechanical Properties using Neural Operators
- Authors: Yizheng Wang, Xiang Li, Ziming Yan, Yuqing Du, Jinshuai Bai, Bokai Liu, Timon Rabczuk, Yinghua Liu
- Abstract summary: Homogenization is an essential tool for studying multiscale physical phenomena.
We propose a numerical homogenization model based on operator learning: HomoGenius.
The proposed model can quickly provide homogenization results for arbitrary geometries, materials, and resolutions.
- Score: 12.845932824311182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Homogenization is an essential tool for studying multiscale physical phenomena. However, traditional numerical homogenization, which relies heavily on finite element analysis, incurs extensive computational costs, particularly for complex geometries, materials, and high-resolution problems. To address these limitations, we propose a numerical homogenization model based on operator learning: HomoGenius. The proposed model can quickly provide homogenization results for arbitrary geometries, materials, and resolutions, increasing efficiency by a factor of 80 over traditional numerical homogenization methods. We validate the effectiveness of our model in predicting the effective elastic modulus of periodic materials (TPMS: Triply Periodic Minimal Surfaces), including complex geometries, various Poisson's ratios and elastic moduli, and different resolutions for training and testing. The results show that our model achieves high precision, superior efficiency, and strong learning capability.
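The neural operator in the paper replaces conventional numerical homogenization, which averages microscale responses into an effective modulus. As a minimal, back-of-the-envelope illustration of that target quantity (not the HomoGenius method itself), the classical Voigt and Reuss bounds bracket the effective Young's modulus of a two-phase composite using only phase volume fractions:

```python
def voigt_reuss_bounds(fractions, moduli):
    """Upper (Voigt) and lower (Reuss) bounds on the effective
    Young's modulus of a composite, from phase volume fractions.
    Textbook illustration only: HomoGenius instead learns the full
    finite-element homogenization map with a neural operator."""
    assert abs(sum(fractions) - 1.0) < 1e-12, "fractions must sum to 1"
    e_voigt = sum(f * e for f, e in zip(fractions, moduli))        # arithmetic mean
    e_reuss = 1.0 / sum(f / e for f, e in zip(fractions, moduli))  # harmonic mean
    return e_voigt, e_reuss

# 50/50 mixture of phases with E = 1 GPa and E = 3 GPa
upper, lower = voigt_reuss_bounds([0.5, 0.5], [1.0, 3.0])  # (2.0, 1.5)
```

Any effective modulus predicted by a homogenization solver (or a surrogate such as HomoGenius) should fall between these two bounds, which makes them a cheap sanity check on model output.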
Related papers
- OmniFluids: Unified Physics Pre-trained Modeling of Fluid Dynamics [25.066485418709114]
We introduce OmniFluids, a unified physics pre-trained operator learning framework.
It integrates physics-only pre-training, coarse-grid operator distillation, and few-shot fine-tuning.
It significantly outperforms state-of-the-art AI-driven methods in flow field reconstruction and turbulence statistics accuracy.
arXiv Detail & Related papers (2025-06-12T16:23:02Z) - Implicit Neural Differential Model for Spatiotemporal Dynamics [5.1854032131971195]
We introduce Im-PiNDiff, a novel implicit physics-integrated neural differentiable solver for spatiotemporal dynamics.
Inspired by deep equilibrium models, Im-PiNDiff advances the state using implicit fixed-point layers, enabling robust long-term simulation.
Im-PiNDiff achieves superior predictive performance, enhanced numerical stability, and substantial reductions in memory and cost.
arXiv Detail & Related papers (2025-04-03T04:07:18Z) - Non-asymptotic Convergence of Training Transformers for Next-token Prediction [48.9399496805422]
Transformers have achieved extraordinary success in modern machine learning due to their excellent ability to handle sequential data.
This paper provides a fine-grained non-asymptotic analysis of the training dynamics of a one-layer transformer.
We show that the trained transformer presents near-optimal next-token prediction ability even under dataset shift.
arXiv Detail & Related papers (2024-09-25T20:22:06Z) - Coupling Machine Learning Local Predictions with a Computational Fluid Dynamics Solver to Accelerate Transient Buoyant Plume Simulations [0.0]
This study presents a versatile and scalable hybrid methodology, combining CFD and machine learning.
The objective was to leverage local features to predict the temporal changes in the pressure field in comparable scenarios.
Pressure estimates were employed as initial values to accelerate the pressure-velocity coupling procedure.
arXiv Detail & Related papers (2024-09-11T10:38:30Z) - FFT-based surrogate modeling of auxetic metamaterials with real-time prediction of effective elastic properties and swift inverse design [1.3980986259786223]
Auxetic structures exhibit effective elastic properties heavily influenced by their underlying structural geometry and base material properties.
Periodic homogenization of auxetic unit cells can be used to investigate these properties, but it is computationally expensive and limits design-space exploration.
This paper develops surrogate models for the real-time prediction of the effective elastic properties of auxetic unit cells.
arXiv Detail & Related papers (2024-08-24T09:20:33Z) - Symmetric Basis Convolutions for Learning Lagrangian Fluid Mechanics [21.05257407408671]
We propose a general formulation for continuous convolutions using separable basis functions as a superset of existing methods.
We demonstrate that even and odd symmetries included in the basis functions are key aspects of stability and accuracy.
arXiv Detail & Related papers (2024-03-25T12:15:47Z) - Discovering Interpretable Physical Models using Symbolic Regression and
Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We prove the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z) - A hybrid quantum-classical fusion neural network to improve protein-ligand binding affinity predictions for drug discovery [0.0]
This paper introduces a novel hybrid quantum-classical deep learning model tailored for binding affinity prediction in drug discovery.
Specifically, the proposed model synergistically integrates 3D and spatial graph convolutional neural networks within an optimized quantum architecture.
Simulation results demonstrate a 6% improvement in prediction accuracy relative to existing classical models, as well as a significantly more stable convergence performance compared to previous classical approaches.
arXiv Detail & Related papers (2023-09-06T11:56:33Z) - Federated Conformal Predictors for Distributed Uncertainty
Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework.
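For intuition, the single-site building block that the federated framework generalizes is split conformal prediction: compute residuals on a held-out calibration set, take a finite-sample quantile, and pad every new point prediction by that margin. A minimal sketch (illustrative only; the paper's contribution is the partial-exchangeability analysis for the FL setting, which this does not capture):

```python
import math

def conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    """Split conformal prediction for regression: the returned interval
    covers the true label with probability >= 1 - alpha, assuming the
    calibration and test points are exchangeable."""
    scores = sorted(abs(p - y) for p, y in zip(cal_preds, cal_labels))
    n = len(scores)
    # conservative finite-sample quantile index: ceil((n+1)(1-alpha))-th score
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    q = scores[k]
    return test_pred - q, test_pred + q

lo, hi = conformal_interval([1.0, 2.0, 3.0, 4.0], [1.1, 1.8, 3.3, 4.0], 5.0)
# -> (4.7, 5.3): the largest calibration residual (0.3) pads the prediction
```

The coverage guarantee is distribution-free but hinges on exchangeability, which is exactly what breaks across heterogeneous clients; hence the paper's weaker notion of partial exchangeability.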
arXiv Detail & Related papers (2023-05-27T19:57:27Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Generalized Neural Closure Models with Interpretability [28.269731698116257]
We develop a novel and versatile methodology of unified neural partial delay differential equations.
We augment existing/low-fidelity dynamical models directly in their partial differential equation (PDE) forms with both Markovian and non-Markovian neural network (NN) closure parameterizations.
We demonstrate the new generalized neural closure models (gnCMs) framework using four sets of experiments based on advecting nonlinear waves, shocks, and ocean acidification models.
arXiv Detail & Related papers (2023-01-15T21:57:43Z) - Pre-training via Denoising for Molecular Property Prediction [53.409242538744444]
We describe a pre-training technique that utilizes large datasets of 3D molecular structures at equilibrium.
Inspired by recent advances in noise regularization, our pre-training objective is based on denoising.
arXiv Detail & Related papers (2022-05-31T22:28:34Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z) - A deep learning driven pseudospectral PCE based FFT homogenization algorithm for complex microstructures [68.8204255655161]
It is shown that the proposed method is able to predict central moments of interest while being magnitudes faster to evaluate than traditional approaches.
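The core primitive behind FFT homogenization solvers of this kind is that derivatives of periodic fields become cheap pointwise multiplications in Fourier space. A minimal NumPy sketch of that building block for a 1D periodic field (an assumption for illustration; the paper's surrogate sits on top of full tensor-valued solvers of this type):

```python
import numpy as np

def spectral_derivative(f, length=2 * np.pi):
    """Differentiate a periodic field sampled on a uniform grid via FFT:
    transform, multiply by i*k, transform back. This turn-derivatives-
    into-multiplications trick is the workhorse of FFT homogenization."""
    n = len(f)
    # angular wavenumbers matching numpy's FFT ordering
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)
    return np.fft.ifft(1j * k * np.fft.fft(f)).real

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
df = spectral_derivative(np.sin(x))  # ~cos(x) to machine precision
```

For smooth periodic data, the spectral derivative converges far faster than finite differences at the same resolution, which is why FFT-based homogenization scales well to fine microstructures.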
arXiv Detail & Related papers (2021-10-26T07:02:14Z) - Hybridized Methods for Quantum Simulation in the Interaction Picture [69.02115180674885]
We provide a framework that allows different simulation methods to be hybridized and thereby improve performance for interaction picture simulations.
Physical applications of these hybridized methods yield a gate complexity scaling as $\log^2 \Lambda$ in the electric cutoff.
For the general problem of Hamiltonian simulation subject to dynamical constraints, these methods yield a query complexity independent of the penalty parameter $\lambda$ used to impose an energy cost.
arXiv Detail & Related papers (2021-09-07T20:01:22Z) - Thermodynamics-based Artificial Neural Networks (TANN) for multiscale modeling of materials with inelastic microstructure [0.0]
Multiscale homogenization approaches are often used for performing reliable, accurate predictions of the macroscopic mechanical behavior of inelastic materials.
Data-driven approaches based on deep learning have risen as a promising alternative to replace ad-hoc laws and speed-up numerical methods.
Here, we propose Thermodynamics-based Artificial Neural Networks (TANN) for the modeling of mechanical materials with inelastic and complex microstructure.
arXiv Detail & Related papers (2021-08-30T11:50:38Z) - A data-driven peridynamic continuum model for upscaling molecular
dynamics [3.1196544696082613]
We propose a learning framework to extract, from molecular dynamics data, an optimal Linear Peridynamic Solid (LPS) model.
We provide sufficient well-posedness conditions for discretized LPS models with sign-changing influence functions.
This framework guarantees that the resulting model is mathematically well-posed, physically consistent, and that it generalizes well to settings that are different from the ones used during training.
arXiv Detail & Related papers (2021-08-04T07:07:47Z) - Polyconvex anisotropic hyperelasticity with neural networks [1.7616042687330642]
Convex machine learning based models for finite deformations are proposed.
The models are calibrated with highly challenging simulation data of cubic lattice metamaterials.
The data for the data-driven approach is based on mechanical considerations and does not require additional experimental or simulation capabilities.
arXiv Detail & Related papers (2021-06-20T15:33:31Z) - EBM-Fold: Fully-Differentiable Protein Folding Powered by Energy-based
Models [53.17320541056843]
We propose a fully-differentiable approach for protein structure optimization, guided by a data-driven generative network.
Our EBM-Fold approach can efficiently produce high-quality decoys, compared against traditional Rosetta-based structure optimization routines.
arXiv Detail & Related papers (2021-05-11T03:40:29Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but its precise role in that success is still unclear.
We show that heavy-tailed behavior commonly arises in the parameters due to multiplicative noise.
A detailed analysis is conducted in which key factors, including step size and data, are shown to produce similar results on state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.