Using Neural Networks to Model Hysteretic Kinematics in Tendon-Actuated Continuum Robots
- URL: http://arxiv.org/abs/2404.07168v1
- Date: Wed, 10 Apr 2024 17:04:06 GMT
- Title: Using Neural Networks to Model Hysteretic Kinematics in Tendon-Actuated Continuum Robots
- Authors: Yuan Wang, Max McCandless, Abdulhamit Donder, Giovanni Pittiglio, Behnam Moradkhani, Yash Chitalia, Pierre E. Dupont
- Abstract summary: We investigate the hysteretic response of two types of tendon-actuated continuum robots.
We compare three types of neural network modeling approaches with both forward and inverse kinematic mappings.
- Score: 13.390354219940583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to accurately model mechanical hysteretic behavior in tendon-actuated continuum robots using deep learning approaches is a growing area of interest. In this paper, we investigate the hysteretic response of two types of tendon-actuated continuum robots and, ultimately, compare three types of neural network modeling approaches with both forward and inverse kinematic mappings: feedforward neural network (FNN), FNN with a history input buffer, and long short-term memory (LSTM) network. We seek to determine which model best captures temporally dependent behavior. We find that, depending on the robot's design, choosing different kinematic inputs can alter whether hysteresis is exhibited by the system. Furthermore, we present the results of the model fittings, revealing that, in contrast to the standard FNN, both the FNN with a history input buffer and the LSTM model exhibit the capacity to model historical dependence, with comparable performance in capturing rate-dependent hysteresis.
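The abstract compares three model families on the same kinematic mapping, differing only in how much actuation history each one sees. Below is a minimal PyTorch sketch (not the authors' code) of what the forward kinematic mapping could look like for each family: a plain FNN on the current tendon displacement, an FNN fed a flattened buffer of the last H actuation samples, and an LSTM over the actuation sequence. The tendon count, output dimension, buffer length, and layer sizes are illustrative assumptions.

```python
# Hedged sketch of the three model families named in the abstract, applied to a
# forward kinematic mapping (tendon displacements -> task-space output).
# All dimensions below are assumptions for illustration, not the paper's values.
import torch
import torch.nn as nn

N_TENDONS = 2   # assumed number of actuated tendons
OUT_DIM = 3     # assumed task-space output (e.g., tip position)
H = 10          # assumed history-buffer length


class FNN(nn.Module):
    """Memoryless baseline: maps only the current tendon displacement to the output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_TENDONS, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, OUT_DIM),
        )

    def forward(self, q_t):                # q_t: (batch, N_TENDONS)
        return self.net(q_t)


class BufferedFNN(nn.Module):
    """FNN with a history input buffer: the last H actuation samples are
    flattened into one input vector, giving the network a finite memory."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(H * N_TENDONS, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, OUT_DIM),
        )

    def forward(self, q_hist):             # q_hist: (batch, H, N_TENDONS)
        return self.net(q_hist.flatten(start_dim=1))


class LSTMModel(nn.Module):
    """Recurrent model: the LSTM state carries the actuation history, so
    history- and rate-dependent effects can be captured without a fixed buffer."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_TENDONS, 64, batch_first=True)
        self.head = nn.Linear(64, OUT_DIM)

    def forward(self, q_seq):              # q_seq: (batch, T, N_TENDONS)
        out, _ = self.lstm(q_seq)
        return self.head(out[:, -1])       # predict the output at the last time step


if __name__ == "__main__":
    q_seq = torch.randn(8, H, N_TENDONS)   # dummy actuation history
    print(FNN()(q_seq[:, -1]).shape)       # torch.Size([8, 3])
    print(BufferedFNN()(q_seq).shape)      # torch.Size([8, 3])
    print(LSTMModel()(q_seq).shape)        # torch.Size([8, 3])
```

The inverse mapping would use the same three architectures with inputs and outputs swapped (desired task-space trajectory in, tendon displacements out). The buffered FNN and the LSTM are the two variants that can represent the historical dependence the abstract attributes to hysteresis.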
Related papers
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Unsupervised Spiking Neural Network Model of Prefrontal Cortex to study Task Switching with Synaptic deficiency [0.0]
We build a computational model of the Prefrontal Cortex (PFC) using Spiking Neural Networks (SNNs).
In this study, we use SNNs with parameters close to biologically plausible values and train the model using the unsupervised Spike Timing Dependent Plasticity (STDP) learning rule.
arXiv Detail & Related papers (2023-05-23T05:59:54Z) - Forecasting the 2016-2017 Central Apennines Earthquake Sequence with a
Neural Point Process [0.0]
We investigate whether flexible point process models can be applied to short-term seismicity forecasting.
We show how a temporal neural model can forecast earthquakes above a target magnitude threshold.
arXiv Detail & Related papers (2023-01-24T12:15:12Z) - An advanced spatio-temporal convolutional recurrent neural network for
storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks-EINN-crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - STAR: Sparse Transformer-based Action Recognition [61.490243467748314]
This work proposes a novel skeleton-based human action recognition model with sparse attention on the spatial dimension and segmented linear attention on the temporal dimension of data.
Experiments show that our model achieves comparable performance with far fewer trainable parameters and high speed in both training and inference.
arXiv Detail & Related papers (2021-07-15T02:53:11Z) - Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z) - Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z) - A Spiking Central Pattern Generator for the control of a simulated lamprey robot running on SpiNNaker and Loihi neuromorphic boards [1.8139771201780368]
We propose a spiking neural network and its implementation on neuromorphic hardware as a means to control a simulated lamprey model.
We show that by modifying the input to the network, which can be provided by sensory information, the robot can be controlled dynamically in direction and pace.
This category of spiking algorithms shows promising potential to exploit the theoretical advantages of neuromorphic hardware in terms of energy efficiency and computational speed.
arXiv Detail & Related papers (2021-01-18T11:04:16Z) - Action-Conditional Recurrent Kalman Networks For Forward and Inverse Dynamics Learning [17.80270555749689]
Estimating accurate forward and inverse dynamics models is a crucial component of model-based control for robots.
We present two architectures for forward model learning and one for inverse model learning.
Both architectures significantly outperform existing model learning frameworks as well as analytical models in terms of prediction performance.
arXiv Detail & Related papers (2020-10-20T11:28:25Z) - Dynamic Time Warping as a New Evaluation for Dst Forecast with Machine Learning [0.0]
We train a neural network to forecast the disturbance storm time (Dst) index at origin time $t$ with a forecasting horizon of 1 to 6 hours.
Inspection of the model's results with the correlation coefficient and RMSE indicated a performance comparable to the latest publications.
A new method is proposed to measure whether two time series are shifted in time with respect to each other.
arXiv Detail & Related papers (2020-06-08T15:14:13Z)