A Hybrid Virtual Element Method and Deep Learning Approach for Solving One-Dimensional Euler-Bernoulli Beams
- URL: http://arxiv.org/abs/2501.06925v1
- Date: Sun, 12 Jan 2025 20:34:26 GMT
- Title: A Hybrid Virtual Element Method and Deep Learning Approach for Solving One-Dimensional Euler-Bernoulli Beams
- Authors: Paulo Akira F. Enabe, Rodrigo Provasi
- Abstract summary: A hybrid framework integrating the Virtual Element Method (VEM) with deep learning is presented. The primary aim is to explore a data-driven surrogate model capable of predicting displacement fields across varying material and geometric parameters. A neural network architecture is introduced to separately process nodal and material-specific data, effectively capturing complex interactions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A hybrid framework integrating the Virtual Element Method (VEM) with deep learning is presented as an initial step toward developing efficient and flexible numerical models for one-dimensional Euler-Bernoulli beams. The primary aim is to explore a data-driven surrogate model capable of predicting displacement fields across varying material and geometric parameters while maintaining computational efficiency. Building upon VEM's ability to handle higher-order polynomials and non-conforming discretizations, the method offers a robust numerical foundation for structural mechanics. A neural network architecture is introduced to separately process nodal and material-specific data, effectively capturing complex interactions with minimal reliance on large datasets. To address challenges in training, the model incorporates Sobolev training and GradNorm techniques, ensuring balanced loss contributions and enhanced generalization. While this framework is in its early stages, it demonstrates the potential for further refinement and development into a scalable alternative to traditional methods. The proposed approach lays the groundwork for advancing numerical and data-driven techniques in beam modeling, offering a foundation for future research in structural mechanics.
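The Sobolev training and GradNorm techniques named in the abstract can be illustrated on a toy problem. The sketch below is a minimal, hypothetical stand-in, assuming a quadratic model fit to a synthetic displacement field; it is not the paper's actual network, beam discretization, or data.

```python
import numpy as np

# Toy target: displacement field u(x) = x^2 and its derivative u'(x) = 2x.
x = np.linspace(0.0, 1.0, 50)
u_true, du_true = x**2, 2.0 * x

# Design matrices for a hypothetical quadratic model u(x) = w0 + w1*x + w2*x^2.
Phi = np.stack([np.ones_like(x), x, x**2], axis=1)                  # values
dPhi = np.stack([np.zeros_like(x), np.ones_like(x), 2.0 * x], axis=1)  # derivatives

# GradNorm-style balancing (simplified): at an initial guess, weight each
# loss term inversely to its gradient norm so neither term dominates.
w_init = np.zeros(3)
r_val = Phi @ w_init - u_true
r_der = dPhi @ w_init - du_true
g_val = 2.0 * Phi.T @ r_val / len(x)
g_der = 2.0 * dPhi.T @ r_der / len(x)
norms = np.array([np.linalg.norm(g_val), np.linalg.norm(g_der)]) + 1e-12
weights = norms.mean() / norms   # larger weight for the weaker-gradient term

# Sobolev training: fit values AND derivatives jointly, here as one
# weighted stacked least-squares problem instead of gradient descent.
A = np.vstack([np.sqrt(weights[0]) * Phi, np.sqrt(weights[1]) * dPhi])
b = np.concatenate([np.sqrt(weights[0]) * u_true, np.sqrt(weights[1]) * du_true])
w, *_ = np.linalg.lstsq(A, b, rcond=None)
# w recovers [0, 0, 1], since both targets are exactly representable.
```

In the paper's setting the least-squares solve is replaced by stochastic training of a neural network, but the structure is the same: a derivative-matching term augments the value loss, and the per-term weights are rebalanced from gradient norms.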
Related papers
- Manifold meta-learning for reduced-complexity neural system identification [1.0276024900942875]
We propose a meta-learning framework that discovers a low-dimensional manifold.
This manifold is learned from a meta-dataset of input-output sequences generated by a class of related dynamical systems.
Unlike bilevel meta-learning approaches, our method employs an auxiliary neural network to map datasets directly onto the learned manifold.
arXiv Detail & Related papers (2025-04-16T06:49:56Z)
- Generalized Factor Neural Network Model for High-dimensional Regression [50.554377879576066]
We tackle the challenges of modeling high-dimensional data sets with latent low-dimensional structures hidden within complex, non-linear, and noisy relationships.
Our approach enables a seamless integration of concepts from non-parametric regression, factor models, and neural networks for high-dimensional regression.
arXiv Detail & Related papers (2025-02-16T23:13:55Z)
- Data-Driven Computing Methods for Nonlinear Physics Systems with Geometric Constraints [0.7252027234425334]
This paper introduces a novel, data-driven framework that synergizes physics-based priors with advanced machine learning techniques.
Our framework showcases four algorithms, each embedding a specific physics-based prior tailored to a particular class of nonlinear systems.
The integration of these priors also enhances the expressive power of neural networks, enabling them to capture complex patterns typical in physical phenomena.
arXiv Detail & Related papers (2024-06-20T23:10:41Z)
- Scaling up Probabilistic PDE Simulators with Structured Volumetric Information [23.654711580674885]
We propose a framework combining a discretization scheme based on the popular Finite Volume Method with complementary numerical linear algebra techniques.
Experiments, including a temporal tsunami simulation, demonstrate substantially improved scaling behavior of this approach over previous collocation-based techniques.
arXiv Detail & Related papers (2024-06-07T15:38:27Z)
- Building Flexible Machine Learning Models for Scientific Computing at Scale [35.41293100957156]
We present OmniArch, the first prototype aiming at solving multi-scale and multi-physics scientific computing problems with physical alignment.
To the best of our knowledge, we are the first to conduct unified 1D-2D-3D pre-training on PDEBench; this sets new performance benchmarks for 1D, 2D, and 3D PDEs and demonstrates exceptional adaptability to new physics via in-context and zero-shot learning approaches.
arXiv Detail & Related papers (2024-02-25T07:19:01Z)
- Generalizable data-driven turbulence closure modeling on unstructured grids with differentiable physics [1.8749305679160366]
We introduce a framework for embedding deep learning models within a generic finite element solver to solve the Navier-Stokes equations.
We validate our method for flow over a backwards-facing step and test its performance on novel geometries.
We show that our GNN-based closure model may be learned in a data-limited scenario by interpreting closure modeling as a solver-constrained optimization.
arXiv Detail & Related papers (2023-07-25T14:27:49Z)
- Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This paper proposes an open-source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z)
- Differentiable modeling to unify machine learning and physical models and advance Geosciences [38.92849886903847]
We outline the concepts, applicability, and significance of differentiable geoscientific modeling (DG).
"Differentiable" refers to accurately and efficiently calculating gradients with respect to model variables.
Preliminary evidence suggests DG offers better interpretability and causality than Machine Learning.
arXiv Detail & Related papers (2023-01-10T15:24:14Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- An Extensible Benchmark Suite for Learning to Simulate Physical Systems [60.249111272844374]
We introduce a set of benchmark problems to take a step towards unified benchmarks and evaluation protocols.
We propose four representative physical systems, as well as a collection of both widely used classical time-based and representative data-driven methods.
arXiv Detail & Related papers (2021-08-09T17:39:09Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Meshless physics-informed deep learning method for three-dimensional solid mechanics [0.0]
Deep learning and the collocation method are merged and used to solve partial differential equations describing structures' deformation.
We consider different types of materials: linear elasticity, hyperelasticity (neo-Hookean) with large deformation, and von Mises plasticity with isotropic and kinematic hardening.
arXiv Detail & Related papers (2020-12-02T21:40:37Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.