On the Parameter Combinations That Matter and on Those That do Not
- URL: http://arxiv.org/abs/2110.06717v1
- Date: Wed, 13 Oct 2021 13:46:23 GMT
- Title: On the Parameter Combinations That Matter and on Those That do Not
- Authors: Nikolaos Evangelou, Noah J. Wichrowski, George A. Kevrekidis, Felix
Dietrich, Mahdi Kooshkbaghi, Sarah McFann, Ioannis G. Kevrekidis
- Abstract summary: We present a data-driven approach to characterizing nonidentifiability of a model's parameters.
By employing Diffusion Maps and their extensions, we discover the minimal combinations of parameters required to characterize the dynamic output behavior.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a data-driven approach to characterizing nonidentifiability of a
model's parameters and illustrate it through dynamic kinetic models. By
employing Diffusion Maps and their extensions, we discover the minimal
combinations of parameters required to characterize the dynamic output
behavior: a set of effective parameters for the model. Furthermore, we use
Conformal Autoencoder Neural Networks, as well as a kernel-based Jointly Smooth
Function technique, to disentangle the redundant parameter combinations that do
not affect the output behavior from the ones that do. We discuss the
interpretability of our data-driven effective parameters and demonstrate the
utility of the approach both for behavior prediction and parameter estimation.
In the latter task, it becomes important to describe level sets in parameter
space that are consistent with a particular output behavior. We validate our
approach on a model of multisite phosphorylation, where a reduced set of
effective parameters, nonlinear combinations of the physical ones, has
previously been established analytically.
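As a rough illustration of the Diffusion Maps step (not the authors' code; the toy model, sampling ranges, and bandwidth heuristic are assumptions), the sketch below recovers a single effective parameter for a model whose output depends on two parameters only through their product:
```python
import numpy as np

# Hypothetical toy model: the output trajectory depends on (p1, p2)
# only through the product p1 * p2, a classic nonidentifiable pair.
def model_output(p, t=np.linspace(0.0, 1.0, 20)):
    return np.exp(-p[0] * p[1] * t)

rng = np.random.default_rng(0)
params = rng.uniform(0.5, 2.0, size=(500, 2))          # sampled parameter vectors
outputs = np.array([model_output(p) for p in params])  # one trajectory per sample

# Diffusion Maps on the ensemble of output trajectories.
sq_dists = np.sum((outputs[:, None, :] - outputs[None, :, :]) ** 2, axis=-1)
eps = np.median(sq_dists)                 # simple bandwidth heuristic (assumed)
K = np.exp(-sq_dists / eps)               # Gaussian kernel
P = K / K.sum(axis=1, keepdims=True)      # row-normalized Markov matrix

evals, evecs = np.linalg.eig(P)
order = np.argsort(-evals.real)
phi1 = evecs[:, order[1]].real            # first nontrivial eigenvector

# If one effective parameter suffices, phi1 is a monotone function of
# p1 * p2; directions transverse to its level sets are redundant.
print("corr(phi_1, p1*p2) =", np.corrcoef(phi1, params[:, 0] * params[:, 1])[0, 1])
```
The level sets of phi1 in (p1, p2) space are then the output-equivalent parameter combinations that the Conformal Autoencoder or Jointly Smooth Functions step would go on to disentangle.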
Related papers
- Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained with all combinations of three optimizers, four parameterizations, several alignment assumptions, more than a dozen learning rates, and fourteen model sizes.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z)
- Overparameterized Multiple Linear Regression as Hyper-Curve Fitting [0.0]
It is proven that a linear model will produce exact predictions even in the presence of nonlinear dependencies that violate the model assumptions.
The hyper-curve approach is especially suited for the regularization of problems with noise in predictor variables and can be used to remove noisy and "improper" predictors from the model.
arXiv Detail & Related papers (2024-04-11T15:43:11Z)
- A Unified Gaussian Process for Branching and Nested Hyperparameter Optimization [19.351804144005744]
In deep learning, tuning parameters with conditional dependence are common in practice.
The new GP model accounts for the dependent structure among input variables through a new kernel function.
High prediction accuracy and better optimization efficiency are observed in a series of synthetic simulations and real data applications of neural networks.
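The paper's kernel is not reproduced in the summary above; as a loose stand-in, the NumPy sketch below gates a kernel over a conditional dimension on agreement of a branch indicator, one simple way to encode dependence among branching and nested hyperparameters (the encoding, lengthscales, and toy objective are all assumptions):
```python
import numpy as np

# Hypothetical encoding: x = (branch, shared, cond), where `cond` only
# matters when branch == 1 (zeroed out as a placeholder otherwise).
def kernel(X1, X2, l_s=0.5, l_c=0.5):
    shared = np.exp(-(X1[:, None, 1] - X2[None, :, 1]) ** 2 / (2 * l_s**2))
    same_branch = (X1[:, None, 0] == X2[None, :, 0]).astype(float)
    cond = np.exp(-(X1[:, None, 2] - X2[None, :, 2]) ** 2 / (2 * l_c**2))
    # Sum of PSD terms: a kernel on the shared dimension plus a
    # branch-gated kernel on the conditional dimension.
    return shared + same_branch * cond

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(40, 3))
X[:, 0] = (X[:, 0] > 0.5).astype(float)        # branch indicator
X[X[:, 0] == 0.0, 2] = 0.0                     # inactive conditional dimension
y = np.sin(4.0 * X[:, 1]) + X[:, 0] * X[:, 2]  # toy tuning objective

# Standard GP posterior mean at slightly perturbed test points.
Xs = X.copy()
Xs[:, 1] += 0.01
K = kernel(X, X) + 1e-4 * np.eye(len(X))
mean = kernel(Xs, X) @ np.linalg.solve(K, y)
print(mean[:5])
```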
arXiv Detail & Related papers (2024-01-19T21:11:32Z)
- Parameter Efficient Fine-tuning via Cross Block Orchestration for Segment Anything Model [81.55141188169621]
We equip PEFT with a cross-block orchestration mechanism to enable the adaptation of the Segment Anything Model (SAM) to various downstream scenarios.
We propose an intra-block enhancement module, which introduces a linear projection head whose weights are generated from a hyper-complex layer.
Our proposed approach consistently improves the segmentation performance significantly on novel scenarios with only around 1K additional parameters.
arXiv Detail & Related papers (2023-11-28T11:23:34Z)
- Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models [109.06052781040916]
We introduce a technique to enhance the inference efficiency of parameter-shared language models.
We also propose a simple pre-training technique that leads to fully or partially shared models.
Results demonstrate the effectiveness of our methods on both autoregressive and autoencoding PLMs.
arXiv Detail & Related papers (2023-10-19T15:13:58Z)
- Learning minimal representations of stochastic processes with variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing such processes.
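The paper's architecture and data are not given in the summary above; a minimal beta-VAE sketch in PyTorch (the toy process, network sizes, beta, and the KL threshold are all assumptions) shows the generic recipe of reading the minimal parameter count off the per-dimension KL:
```python
import torch
import torch.nn as nn

# Hypothetical process: trajectories governed by one hidden parameter
# (a decay rate), so a single latent dimension should stay informative.
torch.manual_seed(0)
rates = torch.rand(2048, 1) * 2.0 + 0.5
t = torch.linspace(0.0, 1.0, 32)
data = torch.exp(-rates * t) + 0.01 * torch.randn(2048, 32)

latent_dim = 8
enc = nn.Sequential(nn.Linear(32, 64), nn.Tanh(), nn.Linear(64, 2 * latent_dim))
dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, 32))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for epoch in range(200):
    mu, logvar = enc(data).chunk(2, dim=-1)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization
    recon = dec(z)
    kl_per_dim = 0.5 * (mu**2 + logvar.exp() - 1.0 - logvar).mean(dim=0)
    loss = ((recon - data) ** 2).sum(dim=-1).mean() + 4.0 * kl_per_dim.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Latent dimensions with non-negligible KL are the ones the process needs.
print("per-dimension KL:", kl_per_dim.detach())
print("estimated #parameters:", int((kl_per_dim > 0.05).sum()))
```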
arXiv Detail & Related papers (2023-07-21T14:25:06Z)
- Active-Learning-Driven Surrogate Modeling for Efficient Simulation of Parametric Nonlinear Systems [0.0]
In the absence of governing equations, the parametric reduced-order surrogate model must be constructed in a non-intrusive fashion.
Our work provides a non-intrusive optimality criterion to efficiently populate the parameter snapshots.
We propose an active-learning-driven surrogate model using kernel-based shallow neural networks.
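The summary above does not spell out the optimality criterion; under the common reading that predictive uncertainty drives the sampling, a minimal sketch of such a loop (the stand-in solver, parameter range, and lengthscale are assumptions) looks like:
```python
import numpy as np

def expensive_simulation(p):                   # stand-in for a full-order solver
    return np.sin(3.0 * p) * np.exp(-p)

def rbf(a, b, l=0.3):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * l**2))

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 2.0, 400)        # dense grid of parameter values
train_p = list(rng.uniform(0.0, 2.0, 3))       # small initial snapshot set

for _ in range(10):
    P = np.array(train_p)
    K = rbf(P, P) + 1e-8 * np.eye(len(P))
    Ks = rbf(candidates, P)
    # GP-style predictive variance: prior variance minus explained part.
    var = 1.0 - np.einsum("ij,ij->i", Ks @ np.linalg.inv(K), Ks)
    train_p.append(candidates[np.argmax(var)]) # query the most uncertain point

snapshots = expensive_simulation(np.array(train_p))  # run the solver there
print("selected parameter snapshots:", np.round(np.sort(train_p), 3))
```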
arXiv Detail & Related papers (2023-06-09T18:01:14Z)
- On the Effectiveness of Parameter-Efficient Fine-Tuning [79.6302606855302]
Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters shared across different tasks.
We show that all of the methods are actually sparse fine-tuned models and conduct a novel theoretical analysis of them.
Despite the effectiveness of sparsity grounded in our theory, how to choose the tunable parameters remains an open problem.
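On that sparse view, a PEFT method amounts to updating a small masked subset of an otherwise frozen model; a minimal PyTorch illustration, with a random mask standing in for whatever selection rule a particular method induces:
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Random binary masks mark ~5% of the weights as tunable (a stand-in
# for the parameter-selection rule a concrete PEFT method induces).
masks = {n: (torch.rand_like(p) < 0.05).float() for n, p in model.named_parameters()}

opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
for n, p in model.named_parameters():
    p.grad *= masks[n]                   # zero out gradients of frozen entries
opt.step()

tunable = sum(int(m.sum()) for m in masks.values())
total = sum(p.numel() for p in model.parameters())
print(f"tuned {tunable}/{total} parameters")
```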
arXiv Detail & Related papers (2022-11-28T17:41:48Z)
- Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions [0.0]
This work presents a data-driven, non-intrusive framework which combines ROM construction with reduced dynamics identification.
The proposed approach leverages autoencoder neural networks with parametric sparse identification of nonlinear dynamics (SINDy) to construct a low-dimensional dynamical model.
The framework tracks the evolution of periodic steady-state responses as functions of the system parameters, avoiding computation of the transient phase and enabling the detection of instabilities and bifurcations.
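The autoencoder stage is omitted here; as a sketch of the SINDy stage alone (the stand-in latent system, library, and threshold are assumptions), sequentially thresholded least squares recovers a sparse model from latent trajectories:
```python
import numpy as np

# A damped oscillator stands in for the autoencoder's latent coordinates;
# SINDy then recovers a sparse dynamical model for them.
dt, n = 0.01, 2000
Z = np.zeros((n, 2))
Z[0] = [1.0, 0.0]
for k in range(n - 1):                        # forward-Euler "ground truth"
    x, v = Z[k]
    Z[k + 1] = [x + dt * v, v + dt * (-2.0 * x - 0.1 * v)]
dZ = np.gradient(Z, dt, axis=0)               # estimated time derivatives

# Polynomial feature library up to degree 2.
x, v = Z[:, 0], Z[:, 1]
Theta = np.column_stack([np.ones(n), x, v, x**2, x * v, v**2])
names = ["1", "x", "v", "x^2", "x*v", "v^2"]

# Sequentially thresholded least squares (the core SINDy regression).
Xi = np.linalg.lstsq(Theta, dZ, rcond=None)[0]
for _ in range(10):
    Xi[np.abs(Xi) < 0.05] = 0.0
    for j in range(2):
        active = np.abs(Xi[:, j]) >= 0.05
        Xi[active, j] = np.linalg.lstsq(Theta[:, active], dZ[:, j], rcond=None)[0]

for j, lhs in enumerate(["dx/dt", "dv/dt"]):
    terms = [f"{Xi[i, j]:+.2f}*{names[i]}" for i in range(len(names)) if Xi[i, j] != 0.0]
    print(lhs, "=", " ".join(terms))
```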
arXiv Detail & Related papers (2022-11-13T01:57:18Z)
- On the Forward Invariance of Neural ODEs [92.07281135902922]
We propose a new method to ensure neural ordinary differential equations (ODEs) satisfy output specifications.
Our approach uses a class of control barrier functions to transform output specifications into constraints on the parameters and inputs of the learning system.
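The neural-ODE construction itself is not shown in the summary above; the toy sketch below illustrates only the underlying control-barrier-function mechanism (system, barrier, and gain are assumptions): the input is corrected at each step so that dh/dt + alpha*h(x) >= 0 holds, keeping trajectories inside the specified set.
```python
import numpy as np

# Toy system x' = u with the output specification |x| <= 1, encoded by
# the barrier h(x) = 1 - x^2 >= 0.  Enforce dh/dt + alpha*h >= 0, i.e.
# -2*x*u + alpha*(1 - x^2) >= 0, by correcting the nominal input u.
alpha, dt = 5.0, 0.01

def safe_input(x, u_nominal):
    bound = alpha * (1.0 - x * x)
    if x > 0.0:
        return min(u_nominal, bound / (2.0 * x))    # upper bound on u
    if x < 0.0:
        return max(u_nominal, -bound / (2.0 * -x))  # lower bound on u
    return u_nominal

x = 0.0
for k in range(1000):
    u_nom = 3.0 * np.sin(0.01 * k)       # aggressive nominal input
    x += dt * safe_input(x, u_nom)
    assert abs(x) <= 1.0 + 1e-9          # the specification holds throughout

print("final state:", round(x, 4), "with |x| <= 1 maintained")
```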
arXiv Detail & Related papers (2022-10-10T15:18:28Z)
- Accurate Parameter Estimation for Risk-aware Autonomous Systems [0.0]
This paper addresses the use of a spectral lines-based approach for estimating parameters of the dynamic model of an autonomous system.
Existing literature has treated all unmodeled components of the dynamic system as sub-Gaussian noise.
We show that the proposed approach can ensure a $\tilde{O}(\sqrt{T})$ regret, matching existing literature.
arXiv Detail & Related papers (2020-06-23T01:20:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.