Computational modelling and data-driven homogenisation of knitted
membranes
- URL: http://arxiv.org/abs/2107.05707v1
- Date: Mon, 12 Jul 2021 19:51:02 GMT
- Title: Computational modelling and data-driven homogenisation of knitted
membranes
- Authors: Sumudu Herath, Xiao Xiao and Fehmi Cirak
- Abstract summary: Fully yarn-level modelling of large-scale knitted membranes is not feasible.
We consider a two-scale homogenisation approach and model the membrane as a Kirchhoff-Love shell on the macroscale and as Euler-Bernoulli rods on the microscale.
The solution of the nonlinear microscale problem requires a significant amount of time due to the large deformations and the enforcement of contact constraints.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knitting is an effective technique for producing complex three-dimensional
surfaces owing to the inherent flexibility of interlooped yarns and recent
advances in manufacturing providing better control of local stitch patterns.
Fully yarn-level modelling of large-scale knitted membranes is not feasible.
Therefore, we consider a two-scale homogenisation approach and model the
membrane as a Kirchhoff-Love shell on the macroscale and as Euler-Bernoulli
rods on the microscale. The governing equations for both the shell and the rod
are discretised with cubic B-spline basis functions. The solution of the
nonlinear microscale problem requires a significant amount of time due to the
large deformations and the enforcement of contact constraints, rendering
conventional online computational homogenisation approaches infeasible. To
sidestep this problem, we use a pre-trained statistical Gaussian Process
Regression (GPR) model to map the macroscale deformations to macroscale
stresses. During the offline learning phase, the GPR model is trained by
solving the microscale problem for a sufficiently rich set of deformation
states obtained by either uniform or Sobol sampling. The trained GPR model
encodes the nonlinearities and anisotropies present in the microscale and
serves as a material model for the macroscale Kirchhoff-Love shell. After
verifying and validating the different components of the proposed approach, we
introduce several examples involving membranes subjected to tension and shear
to demonstrate its versatility and good performance.
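The offline learning phase described above can be sketched in a few lines: Sobol-sample a set of macroscale deformation states, evaluate the microscale response for each, and fit a Gaussian Process Regression model that then serves as the macroscale material law. This is a minimal illustration, not the authors' implementation; in particular, `microscale_stress` is a hypothetical toy stand-in for the actual yarn-level Euler-Bernoulli rod solve with contact constraints, and the strain bounds and kernel settings are assumed values.

```python
# Minimal sketch of offline GPR training for a homogenised material model.
# The microscale solver is replaced by a toy nonlinear anisotropic response.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def microscale_stress(strain):
    """Hypothetical stand-in for the yarn-level rod/contact solve.

    Maps a membrane strain state (e11, e22, e12) to a stress resultant;
    the cubic terms mimic nonlinearity, the couplings mimic anisotropy.
    """
    e11, e22, e12 = strain
    s11 = 2.0 * e11 + 0.8 * e11**3 + 0.3 * e22
    s22 = 1.5 * e22 + 0.5 * e22**3 + 0.3 * e11
    s12 = 0.7 * e12 + 0.2 * e12**3
    return np.array([s11, s22, s12])

# Offline phase: Sobol sampling of deformation states in assumed bounds.
sampler = qmc.Sobol(d=3, scramble=True, seed=0)
strains = qmc.scale(sampler.random_base2(m=8),      # 2^8 = 256 states
                    l_bounds=[-0.1, -0.1, -0.1],
                    u_bounds=[0.1, 0.1, 0.1])
stresses = np.array([microscale_stress(e) for e in strains])

# Fit the GPR surrogate; scikit-learn fits each stress component
# independently for multi-output targets.
kernel = ConstantKernel(1.0) * RBF(length_scale=0.05)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(strains, stresses)

# Online phase: query the surrogate instead of re-solving the microscale.
pred = gpr.predict(np.array([[0.02, -0.01, 0.005]]))
```

In the actual two-scale setting, each `microscale_stress` evaluation is an expensive nonlinear solve, which is why it is only performed offline; the trained surrogate is what the macroscale Kirchhoff-Love shell queries at every quadrature point.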
Related papers
- Pushing the Limits of Large Language Model Quantization via the Linearity Theorem [71.3332971315821]
We present a "linearity theorem" establishing a direct relationship between the layer-wise $\ell_2$ reconstruction error and the model perplexity increase due to quantization.
This insight enables two novel applications: (1) a simple data-free LLM quantization method using Hadamard rotations and MSE-optimal grids, dubbed HIGGS, and (2) an optimal solution to the problem of finding non-uniform per-layer quantization levels.
arXiv Detail & Related papers (2024-11-26T15:35:44Z) - A Microstructure-based Graph Neural Network for Accelerating Multiscale
Simulations [0.0]
We introduce an alternative surrogate modeling strategy that allows for keeping the multiscale nature of the problem.
We achieve this by predicting full-field microscopic strains using a graph neural network (GNN) while retaining the microscopic material model.
We demonstrate for several challenging scenarios that the surrogate can predict complex macroscopic stress-strain paths.
arXiv Detail & Related papers (2024-02-20T15:54:24Z) - DiffusionPCR: Diffusion Models for Robust Multi-Step Point Cloud
Registration [73.37538551605712]
Point Cloud Registration (PCR) estimates the relative rigid transformation between two point clouds.
We propose formulating PCR as a denoising diffusion probabilistic process, mapping noisy transformations to the ground truth.
Our experiments showcase the effectiveness of our DiffusionPCR, yielding state-of-the-art registration recall rates (95.3%/81.6%) on 3DMatch and 3DLoMatch.
arXiv Detail & Related papers (2023-12-05T18:59:41Z) - Multi-Response Heteroscedastic Gaussian Process Models and Their
Inference [1.52292571922932]
We propose a novel framework for the modeling of heteroscedastic covariance functions.
We employ variational inference to approximate the posterior and facilitate posterior predictive modeling.
We show that our proposed framework offers a robust and versatile tool for a wide array of applications.
arXiv Detail & Related papers (2023-08-29T15:06:47Z) - Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimension modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z) - Conditional Karhunen-Lo\'{e}ve regression model with Basis Adaptation
for high-dimensional problems: uncertainty quantification and inverse
modeling [62.997667081978825]
We propose a methodology for improving the accuracy of surrogate models of the observable response of physical systems.
We apply the proposed methodology to constructing surrogate models via the Basis Adaptation (BA) method of the stationary hydraulic head response.
arXiv Detail & Related papers (2023-07-05T18:14:38Z) - Multielement polynomial chaos Kriging-based metamodelling for Bayesian
inference of non-smooth systems [0.0]
This paper presents a surrogate modelling technique based on domain partitioning for Bayesian parameter inference of highly nonlinear engineering models.
The developed surrogate model combines in a piecewise function an array of local Polynomial Chaos based Kriging metamodels constructed on a finite set of non-overlapping subdomains of the input space.
The efficiency and accuracy of the proposed approach are validated through two case studies, including an analytical benchmark and a numerical case study.
arXiv Detail & Related papers (2022-12-05T13:22:39Z) - Generalised Gaussian Process Latent Variable Models (GPLVM) with
Stochastic Variational Inference [9.468270453795409]
We study the doubly stochastic formulation of the Bayesian GPLVM model, amenable to minibatch training.
We show how this framework is compatible with different latent variable formulations and perform experiments to compare a suite of models.
We demonstrate how we can train in the presence of massively missing data and obtain high-fidelity reconstructions.
arXiv Detail & Related papers (2022-02-25T21:21:51Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the stochasticity in its success is still unclear.
We show that heavy tails commonly arise in the parameters due to multiplicative noise.
A detailed analysis is conducted describing how key factors, including step size and data, influence this behaviour, with similar results observed on state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z) - Ensemble Learning of Coarse-Grained Molecular Dynamics Force Fields with
a Kernel Approach [2.562811344441631]
Gradient-domain machine learning (GDML) is an accurate and efficient approach to learn a molecular potential and associated force field.
We demonstrate its application to learn an effective coarse-grained (CG) model from all-atom simulation data.
Using ensemble learning and stratified sampling, we propose a data-efficient and memory-saving alternative.
arXiv Detail & Related papers (2020-05-04T21:20:01Z) - Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.