MMGP: a Mesh Morphing Gaussian Process-based machine learning method for
regression of physical problems under non-parameterized geometrical
variability
- URL: http://arxiv.org/abs/2305.12871v2
- Date: Sun, 22 Oct 2023 14:36:46 GMT
- Title: MMGP: a Mesh Morphing Gaussian Process-based machine learning method for
regression of physical problems under non-parameterized geometrical
variability
- Authors: Fabien Casenave, Brian Staber and Xavier Roynard
- Abstract summary: We propose a machine learning method that does not rely on graph neural networks.
The proposed methodology can easily deal with large meshes without the need for explicit shape parameterization.
- Score: 0.30693357740321775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When learning simulations for modeling physical phenomena in industrial
designs, geometrical variabilities are of prime interest. While classical
regression techniques prove effective for parameterized geometries, practical
scenarios often involve the absence of shape parametrization during the
inference stage, leaving us with only mesh discretizations as available data.
Learning simulations from such mesh-based representations poses significant
challenges, with recent advances relying heavily on deep graph neural networks
to overcome the limitations of conventional machine learning approaches.
Despite their promising results, graph neural networks exhibit certain
drawbacks, including their dependency on extensive datasets and limitations in
providing built-in predictive uncertainties or handling large meshes. In this
work, we propose a machine learning method that does not rely on graph neural
networks. Complex geometrical shapes and variations with fixed topology are
dealt with using well-known mesh morphing onto a common support, combined with
classical dimensionality reduction techniques and Gaussian processes. The
proposed methodology can easily deal with large meshes without the need for
explicit shape parameterization and provides crucial predictive uncertainties,
which are essential for informed decision-making. In the considered numerical
experiments, the proposed method is competitive with respect to existing graph
neural networks, regarding training efficiency and accuracy of the predictions.
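Read as a recipe, the abstract names three ingredients: morph every sample mesh onto a common support, compress the morphed fields with a classical dimensionality reduction, and regress the reduced coordinates with Gaussian processes. The Python sketch below illustrates that pipeline under simplifying assumptions and is not the authors' implementation: `morph_to_common_mesh` is a crude 1D resampling stand-in for true geometric mesh morphing, `inputs` stands for whatever low-dimensional features describe each sample, and scikit-learn components play the role of the "classical dimensionality reduction and Gaussian processes" mentioned in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def morph_to_common_mesh(field, n_common=200):
    # Crude 1D stand-in for mesh morphing: resample each sample's field onto a
    # fixed set of points. The actual method morphs meshes geometrically onto a
    # common reference mesh; this placeholder only equalizes dimensionality.
    s_src = np.linspace(0.0, 1.0, len(field))
    s_ref = np.linspace(0.0, 1.0, n_common)
    return np.interp(s_ref, s_src, field)


def fit_mmgp_like(fields, inputs, n_modes=8):
    # 1) Transport every output field onto the common support.
    X = np.stack([morph_to_common_mesh(f) for f in fields])       # (n_samples, n_common)
    # 2) Classical dimensionality reduction of the morphed fields.
    pca = PCA(n_components=n_modes).fit(X)
    Z = pca.transform(X)                                          # (n_samples, n_modes)
    # 3) One Gaussian process per retained mode, which yields predictive uncertainties.
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-6)
    gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(inputs, Z[:, k])
           for k in range(n_modes)]
    return pca, gps


def predict_mmgp_like(pca, gps, inputs_new):
    # Predict each mode with its GP, then lift the mean back to the common mesh.
    means, stds = zip(*(gp.predict(inputs_new, return_std=True) for gp in gps))
    field_mean = pca.inverse_transform(np.column_stack(means))
    return field_mean, np.column_stack(stds)    # per-mode std as an uncertainty proxy
```

Because each retained mode is predicted by its own Gaussian process, the per-mode standard deviations provide the kind of built-in predictive uncertainty the abstract highlights, at a cost that scales with the number of samples rather than the mesh size.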
Related papers
- Neural Incremental Data Assimilation [8.817223931520381]
We introduce a deep learning approach where the physical system is modeled as a sequence of coarse-to-fine Gaussian prior distributions parametrized by a neural network.
This allows us to define an assimilation operator, which is trained in an end-to-end fashion to minimize the reconstruction error.
We illustrate our approach on chaotic dynamical physical systems with sparse observations, and compare it to traditional variational data assimilation methods.
arXiv Detail & Related papers (2024-06-21T11:42:55Z) - Data-Driven Computing Methods for Nonlinear Physics Systems with Geometric Constraints [0.7252027234425334]
This paper introduces a novel, data-driven framework that synergizes physics-based priors with advanced machine learning techniques.
Our framework showcases four algorithms, each embedding a specific physics-based prior tailored to a particular class of nonlinear systems.
The integration of these priors also enhances the expressive power of neural networks, enabling them to capture complex patterns typical in physical phenomena.
arXiv Detail & Related papers (2024-06-20T23:10:41Z) - Iterative Sizing Field Prediction for Adaptive Mesh Generation From Expert Demonstrations [49.173541207550485]
Adaptive Meshing By Expert Reconstruction (AMBER) casts adaptive mesh generation as an imitation learning problem.
AMBER combines a graph neural network with an online data acquisition scheme to predict the projected sizing field of an expert mesh.
We experimentally validate AMBER on 2D meshes and 3D meshes provided by a human expert, closely matching the provided demonstrations and outperforming a single-step CNN baseline.
arXiv Detail & Related papers (2024-06-20T10:01:22Z) - A hybrid numerical methodology coupling Reduced Order Modeling and Graph Neural Networks for non-parametric geometries: applications to structural dynamics problems [0.0]
This work introduces a new approach for accelerating the numerical analysis of time-domain partial differential equations (PDEs) governing complex physical systems.
The methodology combines a classical reduced-order modeling (ROM) framework with recently introduced Graph Neural Networks (GNNs).
arXiv Detail & Related papers (2024-06-03T08:51:25Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Deep Learning-based surrogate models for parametrized PDEs: handling
geometric variability through graph neural networks [0.0]
This work explores the potential usage of graph neural networks (GNNs) for the simulation of time-dependent PDEs.
We propose a systematic strategy to build surrogate models based on a data-driven time-stepping scheme.
We show that GNNs can provide a valid alternative to traditional surrogate models in terms of computational efficiency and generalization to new scenarios.
arXiv Detail & Related papers (2023-08-03T08:14:28Z) - Deep learning applied to computational mechanics: A comprehensive
review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
History and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z) - Physics informed neural networks for continuum micromechanics [68.8204255655161]
Recently, physics informed neural networks have successfully been applied to a broad variety of problems in applied mathematics and engineering.
Due to the global approximation, physics-informed neural networks have difficulty resolving localized effects and strongly nonlinear solutions via optimization.
It is shown that the domain decomposition approach is able to accurately resolve nonlinear stress, displacement, and energy fields in heterogeneous microstructures obtained from real-world $\mu$CT scans.
arXiv Detail & Related papers (2021-10-14T14:05:19Z) - Conditionally Parameterized, Discretization-Aware Neural Networks for
Mesh-Based Modeling of Physical Systems [0.0]
We generalize the idea of conditional parametrization, using trainable functions of input parameters.
We show that conditionally parameterized networks provide superior performance compared to their traditional counterparts.
A network architecture named CP-GNet is also proposed as the first deep learning model capable of standalone prediction of reacting flows on meshes.
arXiv Detail & Related papers (2021-09-15T20:21:13Z) - Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z) - Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
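As a companion to the last entry above, here is a minimal, generic physics-informed neural network in Python/PyTorch for the 1D Poisson problem -u''(x) = f(x) with u(0) = u(1) = 0. It only illustrates the mesh-free residual loss idea; it is not the GatedPINN architecture discussed in that paper, and all names in it are illustrative.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def f(x):
    # Source term chosen so that u(x) = sin(pi x) is the exact solution.
    return (torch.pi ** 2) * torch.sin(torch.pi * x)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)          # random collocation points in (0, 1)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = -d2u - f(x)                               # PDE residual at collocation points
    xb = torch.tensor([[0.0], [1.0]])                    # boundary points
    loss = (residual ** 2).mean() + (net(xb) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```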