Neural Network-Based Parameter Estimation for Non-Autonomous Differential Equations with Discontinuous Signals
- URL: http://arxiv.org/abs/2507.06267v1
- Date: Tue, 08 Jul 2025 00:42:42 GMT
- Title: Neural Network-Based Parameter Estimation for Non-Autonomous Differential Equations with Discontinuous Signals
- Authors: Hyeontae Jo, Krešimir Josić, Jae Kyoung Kim
- Abstract summary: We propose a novel parameter estimation method utilizing functional approximations with artificial neural networks. Our approach, termed Harmonic Approximation of Discontinuous External Signals using Neural Networks (HADES-NN), operates in two iterated stages.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-autonomous differential equations are crucial for modeling systems influenced by external signals, yet fitting these models to data becomes particularly challenging when the signals change abruptly. To address this problem, we propose a novel parameter estimation method utilizing functional approximations with artificial neural networks. Our approach, termed Harmonic Approximation of Discontinuous External Signals using Neural Networks (HADES-NN), operates in two iterated stages. In the first stage, the algorithm employs a neural network to approximate the discontinuous signal with a smooth function. In the second stage, it uses this smooth approximate signal to estimate model parameters. HADES-NN gives highly accurate and precise parameter estimates across various applications, including circadian clock systems regulated by external light inputs measured via wearable devices and the mating response of yeast to external pheromone signals. HADES-NN greatly extends the range of model systems that can be fit to real-world measurements.
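The two-stage structure described in the abstract can be sketched compactly. The following is a hypothetical, simplified illustration and not the authors' implementation: the square-wave input, the scalar ODE, the tanh network (in place of a harmonic approximation), and the single pass through the two stages (the actual method iterates them) are all assumptions made for the example.

```python
# Hypothetical sketch of the two-stage idea behind HADES-NN (not the authors' code).
# Stage 1: fit a small network to a discontinuous input signal u(t).
# Stage 2: estimate ODE parameters using the smooth surrogate of u(t).
import torch

torch.manual_seed(0)

# Synthetic data: a square-wave input driving dx/dt = a*u(t) - b*x.
t = torch.linspace(0.0, 10.0, 400).unsqueeze(1)
u_true = (torch.sin(2 * torch.pi * t / 4.0) > 0).float()      # discontinuous signal
a_true, b_true = 1.5, 0.8
x = torch.zeros_like(t)
dt = float(t[1] - t[0])
for i in range(1, len(t)):                                     # Euler-simulated "measurements"
    x[i] = x[i - 1] + dt * (a_true * u_true[i - 1] - b_true * x[i - 1])

# Stage 1: smooth approximation of the external signal.
signal_net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt1 = torch.optim.Adam(signal_net.parameters(), lr=1e-2)
for _ in range(2000):
    opt1.zero_grad()
    torch.mean((signal_net(t) - u_true) ** 2).backward()
    opt1.step()

# Stage 2: fit (a, b) by simulating the ODE with the smooth surrogate signal.
params = torch.tensor([0.5, 0.5], requires_grad=True)          # initial guess for (a, b)
opt2 = torch.optim.Adam([params], lr=5e-2)
u_smooth = signal_net(t).detach()
for _ in range(500):
    opt2.zero_grad()
    x_sim = [torch.zeros(1)]
    for i in range(1, len(t)):
        x_sim.append(x_sim[-1] + dt * (params[0] * u_smooth[i - 1] - params[1] * x_sim[-1]))
    torch.mean((torch.stack(x_sim) - x) ** 2).backward()
    opt2.step()

print("estimated (a, b):", params.detach().numpy())            # should approach (1.5, 0.8)
```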
Related papers
- Bayesian Model Parameter Learning in Linear Inverse Problems: Application in EEG Focal Source Imaging [49.1574468325115]
Inverse problems can be described as limited-data problems in which the signal of interest cannot be observed directly. We studied a linear inverse problem that included an unknown non-linear model parameter. We utilized a Bayesian model-based learning approach that allowed signal recovery and subsequently estimation of the model parameter.
arXiv Detail & Related papers (2025-01-07T18:14:24Z) - Reproduction of AdEx dynamics on neuromorphic hardware through data embedding and simulation-based inference [0.8437187555622164]
We use an autoencoder to automatically extract relevant features from the membrane trace of a complex neuron model. We then leverage sequential neural posterior estimation (SNPE) to approximate the posterior distribution of neuron parameters. This suggests that the combination of an autoencoder with the SNPE algorithm is a promising optimization method for complex systems.
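A minimal sketch of the embedding-plus-inference pattern described above is given below. It is not the paper's code: the toy trace simulator, the network sizes, and the diagonal-Gaussian "posterior" head are assumptions, and the sequential refinement that defines SNPE is omitted.

```python
# Hypothetical sketch: an autoencoder compresses simulated traces, and an amortized
# Gaussian head maps the latent code to parameter estimates (SNPE itself not reproduced).
import torch

torch.manual_seed(0)
T = 200                                                   # samples per trace

def simulate(theta):
    """Toy stand-in for the neuron model: a damped oscillation with parameters (freq, decay)."""
    t = torch.linspace(0, 1, T)
    return torch.exp(-theta[:, 1:2] * t) * torch.sin(2 * torch.pi * theta[:, 0:1] * t)

theta = torch.rand(2048, 2) * torch.tensor([8.0, 4.0]) + torch.tensor([1.0, 0.5])
traces = simulate(theta) + 0.02 * torch.randn(2048, T)

encoder = torch.nn.Sequential(torch.nn.Linear(T, 64), torch.nn.ReLU(), torch.nn.Linear(64, 8))
decoder = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, T))
posterior = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))

opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters(), *posterior.parameters()], 1e-3)
for _ in range(2000):
    opt.zero_grad()
    z = encoder(traces)
    recon_loss = torch.mean((decoder(z) - traces) ** 2)          # autoencoder objective
    mu, log_var = posterior(z).chunk(2, dim=1)
    nll = torch.mean(0.5 * (theta - mu) ** 2 / log_var.exp() + 0.5 * log_var)  # Gaussian NLL
    (recon_loss + nll).backward()
    opt.step()

# "Posterior" for one observed trace: mean and standard deviation per parameter.
mu, log_var = posterior(encoder(traces[:1])).chunk(2, dim=1)
print(mu, log_var.exp().sqrt(), theta[:1])
```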
arXiv Detail & Related papers (2024-12-03T13:19:21Z) - Nonuniform random feature models using derivative information [10.239175197655266]
We propose nonuniform data-driven parameter distributions for neural network initialization based on derivative data of the function to be approximated.
We address the cases of Heaviside and ReLU activation functions, and their smooth approximations (sigmoid and softplus).
We suggest simplifications of these exact densities, based on approximate derivative data at the input points, that allow for very efficient sampling and bring the performance of random feature models close to that of optimized networks in several scenarios.
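The following one-dimensional sketch illustrates the idea of derivative-informed feature sampling; it does not use the paper's exact densities. Placing ReLU breakpoints where estimated curvature is large, rather than uniformly, is an assumption made for the example.

```python
# Hypothetical 1-D sketch: sample ReLU random-feature breakpoints preferentially where the
# target's (approximate) second derivative is large, then solve the output layer by least squares.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 400)
f = np.abs(x) + 0.25 * np.sin(4 * x)                      # target with a kink at 0

# Approximate |f''| from the samples and turn it into a sampling density over breakpoints.
curvature = np.abs(np.gradient(np.gradient(f, x), x))
density = curvature / curvature.sum()

def fit_relu_features(breakpoints, signs):
    """Least-squares fit of f with features ReLU(sign * (x - c))."""
    Phi = np.maximum(signs[None, :] * (x[:, None] - breakpoints[None, :]), 0.0)
    Phi = np.concatenate([Phi, np.ones((len(x), 1))], axis=1)   # bias column
    coef, *_ = np.linalg.lstsq(Phi, f, rcond=None)
    return np.mean((Phi @ coef - f) ** 2)

n_feat = 30
signs = rng.choice([-1.0, 1.0], size=n_feat)
uniform_bp = rng.uniform(-1, 1, size=n_feat)                    # standard uniform sampling
derivative_bp = rng.choice(x, size=n_feat, p=density)           # derivative-informed sampling

print("uniform breakpoints MSE:  ", fit_relu_features(uniform_bp, signs))
print("derivative-informed MSE:  ", fit_relu_features(derivative_bp, signs))
```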
arXiv Detail & Related papers (2024-10-03T01:30:13Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
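The surrogate-plus-autodiff pattern described above can be sketched as follows. The toy Lorentzian "simulator" stands in for the model-Hamiltonian calculation and is not the paper's physics; network sizes and parameter ranges are invented.

```python
# Hypothetical sketch: train a differentiable surrogate of a simulator once, then recover
# unknown parameters from measured data by differentiating through the frozen surrogate.
import torch

torch.manual_seed(0)
q = torch.linspace(0.0, 3.0, 64)                      # momentum-transfer grid

def simulator(theta):
    """Toy scattering-like curve: a Lorentzian whose position and width are the parameters."""
    pos, width = theta[:, 0:1], theta[:, 1:2]
    return width / ((q - pos) ** 2 + width ** 2)

# 1) Train the surrogate on simulated (theta, curve) pairs.
theta_train = torch.rand(4096, 2) * torch.tensor([2.5, 0.5]) + torch.tensor([0.2, 0.05])
curves = simulator(theta_train)
surrogate = torch.nn.Sequential(torch.nn.Linear(2, 128), torch.nn.ReLU(),
                                torch.nn.Linear(128, 64))
opt = torch.optim.Adam(surrogate.parameters(), 1e-3)
for _ in range(3000):
    opt.zero_grad()
    torch.mean((surrogate(theta_train) - curves) ** 2).backward()
    opt.step()

# 2) Recover parameters from "experimental" data; this step is fast and can be rerun
#    in real time for each new measurement.
theta_true = torch.tensor([[1.3, 0.2]])
observed = simulator(theta_true) + 0.01 * torch.randn(1, 64)
theta_hat = torch.tensor([[1.0, 0.3]], requires_grad=True)
fit = torch.optim.Adam([theta_hat], lr=1e-2)
for _ in range(1000):
    fit.zero_grad()
    torch.mean((surrogate(theta_hat) - observed) ** 2).backward()
    fit.step()
print("recovered parameters:", theta_hat.detach(), "true:", theta_true)
```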
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
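A greatly simplified sketch of the dataset-of-checkpoints idea follows. The paper trains a conditional generative model over network parameters; here an unconditional Gaussian over flattened linear-model weights is used purely to illustrate the pipeline, and the toy regression task is invented.

```python
# Hypothetical sketch: build a dataset of trained parameter vectors, fit a generative model
# on them, and sample new parameters that already perform well on the task.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=256)   # toy regression task

# 1) "Dataset of checkpoints": many independently trained linear models.
checkpoints = []
for _ in range(200):
    idx = rng.choice(256, size=128, replace=False)                 # different data subsets
    w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    checkpoints.append(w)
checkpoints = np.stack(checkpoints)

# 2) Fit a generative model on the flattened parameters and sample new ones.
mean, cov = checkpoints.mean(axis=0), np.cov(checkpoints.T)
w_sampled = rng.multivariate_normal(mean, cov)

# 3) Evaluate the sampled parameters without any further training.
print("sampled-parameter MSE:", np.mean((X @ w_sampled - y) ** 2))
```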
arXiv Detail & Related papers (2022-09-26T17:59:58Z) - Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
The characterisation of brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in diffusion MRI (dMRI).
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
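The simulate-compare-accept principle behind likelihood-free inference can be illustrated with a basic rejection loop, shown below. The paper uses modern neural LFI tools and its own dMRI forward model; the exponential-decay simulator and the acceptance threshold here are assumptions for the example.

```python
# Hypothetical illustration of likelihood-free inference via rejection ABC
# (not the paper's neural LFI method or its dMRI forward model).
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta, n=64):
    """Toy stand-in for the signal model: exponential decay with rate theta plus noise."""
    b = np.linspace(0, 3, n)
    return np.exp(-theta[..., None] * b) + 0.02 * rng.normal(size=theta.shape + (n,))

theta_true = np.array(1.7)
observed = forward_model(theta_true)

# Simulate from the prior and keep the parameters whose simulated signals are closest
# to the observation; the kept samples approximate the posterior.
theta_prior = rng.uniform(0.1, 5.0, size=100_000)
simulated = forward_model(theta_prior)
distance = np.linalg.norm(simulated - observed, axis=1)
posterior_samples = theta_prior[distance < np.quantile(distance, 0.001)]

print("posterior mean/sd:", posterior_samples.mean(), posterior_samples.std())
```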
arXiv Detail & Related papers (2021-11-15T09:08:27Z) - Physics-constrained deep neural network method for estimating parameters in a redox flow battery [68.8204255655161]
We present a physics-constrained deep neural network (PCDNN) method for parameter estimation in the zero-dimensional (0D) model of the vanadium redox flow battery (VRFB).
We show that the PCDNN method can estimate model parameters for a range of operating conditions and improve the 0D model prediction of voltage.
We also demonstrate that the PCDNN approach has improved generalization ability, estimating parameter values for operating conditions not used in training.
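One plausible reading of this setup is sketched below: a network maps operating conditions to physical parameters, and the loss is the mismatch between the physics model's predicted voltage and the measured voltage. The toy voltage relation is not the VRFB 0D model, and the parameter ranges are invented.

```python
# Hypothetical sketch of a physics-constrained parameter network (toy 0D voltage model).
import torch

torch.manual_seed(0)

def cell_voltage(soc, current, resistance, rate_const):
    """Toy stand-in for a 0D cell-voltage model."""
    ocv = 1.4 + 0.05 * torch.log(soc / (1 - soc))              # Nernst-like open-circuit term
    return ocv - current * resistance - rate_const * torch.sqrt(torch.abs(current))

# Synthetic "measurements" over operating conditions (state of charge, applied current).
soc = torch.rand(2000, 1) * 0.8 + 0.1
current = torch.rand(2000, 1) * 2.0
v_meas = cell_voltage(soc, current, resistance=0.12, rate_const=0.05) + 0.002 * torch.randn(2000, 1)

# The network predicts the (positive) physical parameters from the operating conditions.
param_net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                                torch.nn.Linear(64, 2), torch.nn.Softplus())
opt = torch.optim.Adam(param_net.parameters(), 1e-3)
for _ in range(3000):
    opt.zero_grad()
    params = param_net(torch.cat([soc, current], dim=1))
    v_pred = cell_voltage(soc, current, params[:, 0:1], params[:, 1:2])
    torch.mean((v_pred - v_meas) ** 2).backward()               # physics model inside the loss
    opt.step()

print("mean estimated (R, k):", param_net(torch.cat([soc, current], dim=1)).mean(dim=0).detach())
```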
arXiv Detail & Related papers (2021-06-21T23:42:58Z) - Multi-fidelity Bayesian Neural Networks: Algorithms and Applications [0.0]
We propose a new class of Bayesian neural networks (BNNs) that can be trained using noisy data of variable fidelity.
We apply them to learn function approximations as well as to solve inverse problems based on partial differential equations (PDEs).
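The multi-fidelity coupling can be sketched as follows. The Bayesian treatment of the networks is omitted here, and the specific composition (a correction network that takes the low-fidelity prediction as an extra input) is an assumption made for the example.

```python
# Hypothetical sketch of multi-fidelity function approximation: many cheap low-fidelity
# samples train one network, and a few high-fidelity samples train a learned correction.
import torch

torch.manual_seed(0)
f_low = lambda x: 0.5 * torch.sin(8 * x)                         # cheap, biased model
f_high = lambda x: torch.sin(8 * x) + 0.3 * x                    # expensive, accurate model

x_low = torch.linspace(0, 1, 100).unsqueeze(1)                   # many low-fidelity points
x_high = torch.linspace(0, 1, 8).unsqueeze(1)                    # few high-fidelity points
y_low, y_high = f_low(x_low), f_high(x_high)

net_low = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
net_corr = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

opt = torch.optim.Adam([*net_low.parameters(), *net_corr.parameters()], 1e-3)
for _ in range(5000):
    opt.zero_grad()
    loss_low = torch.mean((net_low(x_low) - y_low) ** 2)
    # High-fidelity prediction is a learned correction of the low-fidelity surrogate.
    pred_high = net_corr(torch.cat([x_high, net_low(x_high)], dim=1))
    loss_high = torch.mean((pred_high - y_high) ** 2)
    (loss_low + loss_high).backward()
    opt.step()

x_test = torch.linspace(0, 1, 200).unsqueeze(1)
y_pred = net_corr(torch.cat([x_test, net_low(x_test)], dim=1))
print("test MSE vs. true high-fidelity:", torch.mean((y_pred - f_high(x_test)) ** 2).item())
```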
arXiv Detail & Related papers (2020-12-19T02:03:53Z) - Parameter Estimation with Dense and Convolutional Neural Networks Applied to the FitzHugh-Nagumo ODE [0.0]
We present deep neural networks using dense and convolutional layers to solve an inverse problem, where we seek to estimate parameters of a FitzHugh-Nagumo model.
We demonstrate that deep neural networks have the potential to estimate parameters in dynamical models and processes, and that they predict the parameters accurately within this framework.
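The inverse-problem setup can be sketched directly: simulate FitzHugh-Nagumo trajectories for known parameters, then train a dense network to map a trajectory back to its parameters. Network sizes, integration settings, and parameter ranges below are invented for the example.

```python
# Hypothetical sketch: regress FitzHugh-Nagumo parameters (a, b) from simulated voltage traces.
import torch

torch.manual_seed(0)
T, dt, I_ext, eps = 400, 0.1, 0.5, 0.08

def simulate_fhn(a, b):
    """Euler integration of dv/dt = v - v^3/3 - w + I, dw/dt = eps*(v + a - b*w)."""
    v = torch.full_like(a, -1.0)
    w = torch.ones_like(a)
    vs = []
    for _ in range(T):
        v, w = v + dt * (v - v ** 3 / 3 - w + I_ext), w + dt * eps * (v + a - b * w)
        vs.append(v)
    return torch.stack(vs, dim=1)                               # membrane potential traces

# Training set: random (a, b) pairs and their simulated traces.
a = torch.rand(4096) * 0.6 + 0.4
b = torch.rand(4096) * 0.6 + 0.2
traces = simulate_fhn(a, b)
targets = torch.stack([a, b], dim=1)

net = torch.nn.Sequential(torch.nn.Linear(T, 128), torch.nn.ReLU(),
                          torch.nn.Linear(128, 128), torch.nn.ReLU(),
                          torch.nn.Linear(128, 2))
opt = torch.optim.Adam(net.parameters(), 1e-3)
for _ in range(2000):
    opt.zero_grad()
    torch.mean((net(traces) - targets) ** 2).backward()
    opt.step()

# Estimate parameters for an unseen trajectory.
test_trace = simulate_fhn(torch.tensor([0.7]), torch.tensor([0.5]))
print("estimated (a, b):", net(test_trace).detach(), "true: (0.7, 0.5)")
```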
arXiv Detail & Related papers (2020-12-12T01:20:42Z) - Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs, with provable convergence and without the need for sample splitting.
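A rough sketch of the min-max formulation follows. The toy instrumental-variable data, the quadratic regularizer on the critic, and the alternating update schedule are illustrative assumptions, not the paper's exact construction.

```python
# Hypothetical sketch of adversarial moment-based estimation:
# min over f, max over g of  E[(Y - f(X)) g(Z)] - 1/4 E[g(Z)^2].
import torch

torch.manual_seed(0)
n = 4096
z = torch.randn(n, 1)                                            # instrument
confounder = torch.randn(n, 1)
x = z + 0.5 * confounder                                         # endogenous regressor
y = 2.0 * x + confounder + 0.1 * torch.randn(n, 1)               # structural effect is 2.0

f = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
g = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
opt_f = torch.optim.Adam(f.parameters(), 1e-3)
opt_g = torch.optim.Adam(g.parameters(), 1e-3)

for _ in range(5000):
    game_value = torch.mean((y - f(x)) * g(z)) - 0.25 * torch.mean(g(z) ** 2)
    opt_g.zero_grad()
    (-game_value).backward()                                     # the critic g ascends
    opt_g.step()
    opt_f.zero_grad()
    # Recompute after the critic step; the learner f descends on the game value.
    game_value = torch.mean((y - f(x)) * g(z)) - 0.25 * torch.mean(g(z) ** 2)
    game_value.backward()
    opt_f.step()

print("estimated slope of f (should approach the structural coefficient 2.0):",
      ((f(torch.tensor([[1.0]])) - f(torch.tensor([[-1.0]]))) / 2).item())
```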
arXiv Detail & Related papers (2020-07-02T17:55:47Z)