Data-Driven Dynamic Friction Models based on Recurrent Neural Networks
- URL: http://arxiv.org/abs/2402.14148v5
- Date: Fri, 23 Aug 2024 20:08:25 GMT
- Title: Data-Driven Dynamic Friction Models based on Recurrent Neural Networks
- Authors: Joaquin Garcia-Suarez
- Abstract summary: Recurrent Neural Networks (RNNs) based on the Gated Recurrent Unit (GRU) architecture learn the complex dynamics of rate-and-state friction laws from synthetic data.
It is found that the GRU-based RNNs effectively learn to predict changes in the friction coefficient resulting from velocity jumps.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this letter, it is demonstrated that Recurrent Neural Networks (RNNs) based on the Gated Recurrent Unit (GRU) architecture possess the capability to learn the complex dynamics of rate-and-state friction (RSF) laws from synthetic data. The data employed for training the network is generated through the application of traditional RSF equations coupled with either the aging law or the slip law for state evolution. A novel aspect of this approach is the formulation of a loss function that explicitly accounts for the direct effect by means of automatic differentiation. It is found that the GRU-based RNNs effectively learn to predict changes in the friction coefficient resulting from velocity jumps (with and without noise in the target data), thereby showcasing the potential of machine learning models in capturing and simulating the physics of frictional processes. Current limitations and challenges are discussed.
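As a rough illustration of the setup described above, the sketch below pairs a GRU regressor with a loss whose second term uses automatic differentiation to tie the sensitivity of the predicted friction coefficient to the instantaneous log-velocity, one plausible way to encode the RSF direct effect (d mu / d ln V = a at a velocity jump). This is a hedged reconstruction, not the authors' code: the PyTorch framing, the names, the scalar coefficient `a`, and the exact penalty form are all assumptions.

```python
# Minimal sketch (assumed PyTorch rendering, not the paper's code): a GRU maps
# a log-velocity history to a friction-coefficient history, and the loss adds
# a direct-effect term computed by automatic differentiation.
import torch
import torch.nn as nn

class FrictionGRU(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, log_v):                      # log_v: (batch, time, 1)
        h, _ = self.gru(log_v)
        return self.head(h).squeeze(-1)            # mu: (batch, time)

def rsf_loss(model, log_v, mu_target, a=0.01):
    """MSE on mu plus a penalty tying d(mu)/d(ln V) at the final step to the
    direct-effect coefficient `a` (an assumed form of the paper's loss)."""
    log_v = log_v.clone().requires_grad_(True)
    mu = model(log_v)
    data_term = ((mu - mu_target) ** 2).mean()
    # Sensitivity of the last predicted mu to the last log-velocity input,
    # obtained with autograd; RSF predicts this instantaneous slope equals a.
    dmu = torch.autograd.grad(mu[:, -1].sum(), log_v, create_graph=True)[0]
    direct_term = ((dmu[:, -1, 0] - a) ** 2).mean()
    return data_term + direct_term

# Usage with stand-in tensors (targets would come from integrating the RSF
# equations with the aging or slip law, as the abstract describes):
model = FrictionGRU()
log_v = torch.randn(8, 200, 1)
mu_target = 0.6 + 0.01 * torch.randn(8, 200)
loss = rsf_loss(model, log_v, mu_target)
loss.backward()
```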
Related papers
- Knowledge-Based Convolutional Neural Network for the Simulation and Prediction of Two-Phase Darcy Flows [3.5707423185282656]
Physics-informed neural networks (PINNs) have gained significant prominence as a powerful tool in the field of scientific computing and simulations.
We propose to combine the power of neural networks with the dynamics imposed by the discretized differential equations.
By discretizing the governing equations, the PINN learns to account for the discontinuities and accurately capture the underlying relationships between inputs and outputs.
arXiv Detail & Related papers (2024-04-04T06:56:32Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Physics-Informed Neural Networks with Hard Linear Equality Constraints [9.101849365688905]
This work proposes a novel physics-informed neural network, KKT-hPINN, which rigorously guarantees hard linear equality constraints.
Experiments on Aspen models of a stirred-tank reactor unit, an extractive distillation subsystem, and a chemical plant demonstrate that this model can further enhance prediction accuracy. A sketch of such a hard-constraint projection layer follows this entry.
arXiv Detail & Related papers (2024-02-11T17:40:26Z)
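The hard linear equality constraints in the KKT-hPINN entry above admit a closed-form treatment: minimizing ||z' - z||^2 subject to A z' = b and solving the resulting KKT system yields an affine projection that can be appended to any network as a non-trainable layer. The sketch below shows that generic projection; it is an assumed PyTorch rendering of the idea, not necessarily the paper's exact formulation, and A, b, and all shapes are illustrative.

```python
# Hedged sketch: a non-trainable output layer that enforces A z = b exactly,
# via the closed-form KKT solution of min ||z' - z||^2 subject to A z' = b.
import torch
import torch.nn as nn

class HardLinearEquality(nn.Module):
    def __init__(self, A: torch.Tensor, b: torch.Tensor):
        super().__init__()
        self.register_buffer("A", A)                        # (m, n)
        self.register_buffer("b", b)                        # (m,)
        # Projection operator P = A^T (A A^T)^{-1}, precomputed once.
        self.register_buffer("P", A.T @ torch.linalg.inv(A @ A.T))

    def forward(self, z):                                   # z: (batch, n)
        residual = z @ self.A.T - self.b                    # how far A z is from b
        return z - residual @ self.P.T                      # output satisfies A z = b

# Example: force the three outputs to sum to one (e.g., a mass-balance row).
net = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 3))
layer = HardLinearEquality(torch.ones(1, 3), torch.ones(1))
z = layer(net(torch.randn(5, 4)))
assert torch.allclose(z.sum(dim=-1), torch.ones(5), atol=1e-5)
```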
- Physics-Informed Deep Learning of Rate-and-State Fault Friction [0.0]
We develop a multi-network PINN for both the forward problem and for direct inversion of nonlinear fault friction parameters.
We present the computational PINN framework for strike-slip faults in 1D and 2D subject to rate-and-state friction.
We find that the network for the parameter inversion at the fault performs much better than the network for material displacements to which it is coupled.
arXiv Detail & Related papers (2023-12-14T23:53:25Z)
- Learning-based adaption of robotic friction models [48.453527255659296]
We introduce a novel approach to adapt an existing friction model to new dynamics using as little data as possible.
Our proposed estimator outperforms the conventional model-based approach and the base neural network significantly.
Our method does not rely on data with external load during training, eliminating the need for external torque sensors.
arXiv Detail & Related papers (2023-10-25T14:50:15Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Physics Constrained Flow Neural Network for Short-Timescale Predictions in Data Communications Networks [31.85361736992165]
This paper introduces Flow Neural Network (FlowNN) to improve the feature representation with learned physical bias.
FlowNN achieves a 17% to 71% decrease in loss relative to state-of-the-art baselines on both synthetic and real-world networking datasets.
arXiv Detail & Related papers (2021-12-23T02:41:00Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation; a generic sketch of fixed-point implicit differentiation follows this entry.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
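The entry above trains spiking networks by differentiating implicitly at the network's equilibrium rather than back through the forward iterations. The sketch below illustrates the generic fixed-point trick on a dense, non-spiking layer with a one-step gradient approximation; the dynamics, the solver, and the approximation are all simplifying assumptions, not the paper's method.

```python
# Hedged sketch of differentiation at a fixed point: iterate the dynamics to
# equilibrium without tracking gradients, then let autograd see only a single
# update applied at that equilibrium (a cheap one-step approximation of the
# full implicit-function-theorem gradient).
import torch
import torch.nn as nn

class EquilibriumLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def step(self, x, u):
        return torch.tanh(self.lin(x) + u)      # one step of the feedback dynamics

    def forward(self, u, n_iter=50):
        x = torch.zeros_like(u)
        with torch.no_grad():                   # relax to equilibrium, no graph
            for _ in range(n_iter):
                x = self.step(x, u)
        # One differentiable step at the fixed point.
        return self.step(x, u)

layer = EquilibriumLayer(dim=16)
y = layer(torch.randn(4, 16))
y.sum().backward()                              # gradients flow via the last step
```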
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations; a minimal sketch of such a cell follows this entry.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
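The liquid time-constant model summarized above builds networks from linear first-order state equations whose effective time constants are modulated by learned nonlinear gates. A minimal, assumed-PyTorch rendering of such a cell, using a simple explicit-Euler update in place of the paper's solver (sizes and parameter names are illustrative):

```python
# Hedged sketch of a liquid time-constant cell: a linear first-order state
# equation whose decay rate is modulated by a learned, input-dependent gate.
import torch
import torch.nn as nn

class LTCCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.gate = nn.Linear(input_size + hidden_size, hidden_size)
        self.tau = nn.Parameter(torch.ones(hidden_size))   # base time constants
        self.A = nn.Parameter(torch.zeros(hidden_size))    # target/bias states

    def forward(self, u, x, dt=0.1):
        # Input-dependent gate f couples the otherwise linear dynamics.
        f = torch.sigmoid(self.gate(torch.cat([u, x], dim=-1)))
        # dx/dt = -(1/tau + f) * x + f * A, advanced by one Euler step.
        dx = -(1.0 / torch.abs(self.tau) + f) * x + f * self.A
        return x + dt * dx

# Unroll over a sequence of inputs.
cell = LTCCell(input_size=3, hidden_size=8)
x = torch.zeros(1, 8)
for u in torch.randn(20, 1, 3):
    x = cell(u, x)
```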
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
- Thermodynamics-based Artificial Neural Networks for constitutive modeling [0.0]
We propose a new class of data-driven, physics-based, neural networks for modeling of strain rate independent processes at the material point level.
The two basic principles of thermodynamics are encoded in the network's architecture by taking advantage of automatic differentiation.
We demonstrate the wide applicability of TANNs for modeling elasto-plastic materials, with strain hardening and softening. A minimal sketch of the energy-derivative mechanism follows this entry.
arXiv Detail & Related papers (2020-05-25T15:56:34Z)
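The TANN entry above relies on the same automatic-differentiation device as the main paper: predict a thermodynamic potential and differentiate it so the constitutive structure is satisfied by construction. The sketch below shows the core move for a one-dimensional elastic strain; the architecture is an assumption, and the internal variables and dissipation inequality a full TANN handles are omitted.

```python
# Hedged sketch (assumed): stress as the autograd derivative of a learned
# free-energy density, the core mechanism behind thermodynamics-based ANNs.
import torch
import torch.nn as nn

energy_net = nn.Sequential(nn.Linear(1, 32), nn.Softplus(), nn.Linear(32, 1))

def stress(strain):                        # strain: (batch, 1)
    strain = strain.clone().requires_grad_(True)
    psi = energy_net(strain).sum()         # summed free-energy density
    # sigma = d(psi)/d(epsilon) by automatic differentiation; keeping the
    # graph lets a loss on sigma still train energy_net.
    return torch.autograd.grad(psi, strain, create_graph=True)[0]

sigma = stress(torch.linspace(-0.02, 0.02, 9).unsqueeze(-1))
```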