Deep Neural Network in Cusp Catastrophe Model
- URL: http://arxiv.org/abs/2004.02359v2
- Date: Wed, 22 Apr 2020 02:30:54 GMT
- Title: Deep Neural Network in Cusp Catastrophe Model
- Authors: Ranadeep Daw, Zhuoqiong He
- Abstract summary: Catastrophe theory was originally proposed to study dynamical systems that exhibit sudden shifts in behavior arising from small changes in input.
Here we show how a machine learning model can be trained to learn the dynamics of the Cusp Catastrophe model, without ever solving for the generating parameters.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Catastrophe theory was originally proposed to study dynamical systems
that exhibit sudden shifts in behavior arising from small changes in input. These
models can provide reasonable explanations for abrupt jumps in nonlinear
dynamic models. Among the different catastrophe models, the Cusp Catastrophe
model has attracted the most attention due to its relatively simple dynamics and
rich domain of application. Because of the complex behavior of the response, the
parameter space becomes highly non-convex, and it therefore becomes very hard to
optimize for the generating parameters. Instead of solving for these
generating parameters, we demonstrate how a machine learning model can be
trained to learn the dynamics of the Cusp Catastrophe model, without ever
solving for the generating model parameters. Simulation studies and
applications to a few well-known datasets are used to validate our approach. To our
knowledge, this is the first paper of its kind in which a neural network based
approach has been applied to the Cusp Catastrophe model.
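As a concrete illustration of the idea (a minimal sketch, not the authors' code), one can simulate noisy responses from the deterministic cusp equilibrium surface x^3 - beta*x - alpha = 0 and train a small feed-forward network to regress the response directly on the control parameters (alpha, beta), so the network absorbs the dynamics without any likelihood optimization over generating parameters. The simulator, the lower-branch convention for multi-valued regions, and the network size below are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): learn the cusp
# response surface with a small neural network instead of estimating
# (alpha, beta) by likelihood optimization.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def cusp_response(alpha, beta):
    # Equilibria of the cusp potential solve x^3 - beta*x - alpha = 0;
    # taking the smallest real root fixes one branch (an illustrative
    # convention in regions where the surface is multi-valued).
    roots = np.roots([1.0, 0.0, -beta, -alpha])
    return roots[np.abs(roots.imag) < 1e-8].real.min()

n = 5000
alpha = rng.uniform(-2.0, 2.0, n)
beta = rng.uniform(-2.0, 2.0, n)
y = np.array([cusp_response(a, b) for a, b in zip(alpha, beta)])
y += rng.normal(scale=0.05, size=n)  # observation noise

X = np.column_stack([alpha, beta])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0)
net.fit(X_tr, y_tr)
print("held-out R^2:", net.score(X_te, y_te))
```

The held-out R^2 indicates how well the network has captured the response surface, including the region near the bifurcation set 27*alpha^2 = 4*beta^3 where the response jumps abruptly.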
Related papers
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- Towards Learning Stochastic Population Models by Gradient Descent [0.0]
We show that simultaneous estimation of parameters and structure poses major challenges for optimization procedures.
We demonstrate accurate estimation of models but find that enforcing the inference of parsimonious, interpretable models drastically increases the difficulty.
arXiv Detail & Related papers (2024-04-10T14:38:58Z)
- Enhancing Dynamical System Modeling through Interpretable Machine Learning Augmentations: A Case Study in Cathodic Electrophoretic Deposition [0.8796261172196743]
We introduce a comprehensive data-driven framework aimed at enhancing the modeling of physical systems.
As a demonstrative application, we pursue the modeling of cathodic electrophoretic deposition (EPD), commonly known as e-coating.
arXiv Detail & Related papers (2024-01-16T14:58:21Z)
- A Three-regime Model of Network Pruning [47.92525418773768]
We use temperature-like and load-like parameters to model the impact of neural network (NN) training hyperparameters on pruning performance.
A key empirical result we identify is a sharp transition phenomenon: depending on the value of a load-like parameter in the pruned model, increasing the value of a temperature-like parameter in the pre-pruned model may either enhance or impair subsequent pruning performance.
Our model reveals that the dichotomous effect of high temperature is associated with transitions between distinct types of global structures in the post-pruned model.
arXiv Detail & Related papers (2023-05-28T08:09:25Z)
- Do We Need an Encoder-Decoder to Model Dynamical Systems on Networks? [18.92828441607381]
We show that embeddings induce a model that fits observations well but simultaneously has incorrect dynamical behaviours.
We propose a simple embedding-free alternative based on parametrising two additive vector-field components.
arXiv Detail & Related papers (2023-05-20T12:41:47Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
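The summary above describes a generic pattern: fit a differentiable surrogate to simulator output once, then recover unknown parameters from data by gradient descent through the surrogate. A minimal numpy sketch of that pattern follows (an assumed toy setup, not the paper's framework; a closed-form polynomial gradient stands in for automatic differentiation, and the simulator and parameter ranges are hypothetical).

```python
# Hedged sketch: surrogate-plus-differentiation parameter recovery.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)

def simulator(theta):
    # Toy stand-in for an expensive model simulation (hypothetical).
    return np.exp(-theta * t) * np.cos(6.0 * t)

# 1) Train the surrogate once: cubic polynomial in theta, per time point.
thetas = rng.uniform(0.5, 3.0, 200)
Phi = np.stack([thetas**k for k in range(4)], axis=1)   # (200, 4)
Y = np.stack([simulator(th) for th in thetas])          # (200, 50)
W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)             # (4, 50)

def surrogate(theta):
    return np.array([theta**k for k in range(4)]) @ W

def surrogate_grad(theta):
    # Closed-form d/dtheta of the polynomial features.
    return np.array([k * theta ** max(k - 1, 0) for k in range(4)]) @ W

# 2) "Experimental" data from an unknown theta, then gradient descent
#    on the squared error through the cheap surrogate.
theta_true = 1.7
data = simulator(theta_true) + rng.normal(scale=0.01, size=t.size)

theta, lr = 1.0, 0.2
for _ in range(500):
    resid = surrogate(theta) - data
    theta -= lr * 2.0 * resid @ surrogate_grad(theta)
print(f"recovered theta = {theta:.3f} (true {theta_true})")
```

Because the surrogate is cheap to evaluate and differentiate once trained, the fitting loop can run in real time, which is the property the summary emphasizes.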
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z)
- Human Trajectory Prediction via Neural Social Physics [63.62824628085961]
Trajectory prediction has been widely pursued in many fields, and many model-based and model-free methods have been explored.
We propose a new method combining both methodologies based on a new Neural Differential Equation model.
Our new model (Neural Social Physics or NSP) is a deep neural network within which we use an explicit physics model with learnable parameters.
arXiv Detail & Related papers (2022-07-21T12:11:18Z)
- On the Influence of Enforcing Model Identifiability on Learning dynamics of Gaussian Mixture Models [14.759688428864159]
We propose a technique for extracting submodels from singular models.
Our method enforces model identifiability during training.
We show how the method can be applied to more complex models like deep neural networks.
arXiv Detail & Related papers (2022-06-17T07:50:22Z)
- Automated Dissipation Control for Turbulence Simulation with Shell Models [1.675857332621569]
The application of machine learning (ML) techniques, especially neural networks, has seen tremendous success at processing images and language.
In this work we construct a strongly simplified representation of turbulence by using the Gledzer-Ohkitani-Yamada shell model.
We propose an approach that aims to reconstruct statistical properties of turbulence such as the self-similar inertial-range scaling.
arXiv Detail & Related papers (2022-01-07T15:03:52Z)
- Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.