On the Detection and Quantification of Nonlinearity via Statistics of
the Gradients of a Black-Box Model
- URL: http://arxiv.org/abs/2302.07986v1
- Date: Wed, 15 Feb 2023 23:15:22 GMT
- Title: On the Detection and Quantification of Nonlinearity via Statistics of
the Gradients of a Black-Box Model
- Authors: G. Tsialiamanis, C.R. Farrar
- Abstract summary: Detection and identification of nonlinearity are tasks of high importance for structural dynamics.
A method to detect nonlinearity is proposed, based on the distribution of the gradients of a data-driven model.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Detection and identification of nonlinearity are tasks of high importance for
structural dynamics. Detecting nonlinearity in a structure, which has been
designed to operate in its linear region, might indicate the existence of
damage. Therefore, it is important, even for safety reasons, to detect when a
structure exhibits nonlinear behaviour. In the current work, a method to detect
nonlinearity is proposed, based on the distribution of the gradients of a
data-driven model, which is fitted on data acquired from the structure of
interest. The data-driven model herein is a neural network. This type of model
was selected so that the user does not dictate how linear or nonlinear the
model shall be; instead, the training algorithm of the neural network shapes
the level of nonlinearity according to the training data. The neural network is
trained to predict the acceleration of the structure at a time instant using
the accelerations of previous time instants as inputs, i.e. to make
one-step-ahead predictions. Afterwards, the gradients of
the output of the neural network with respect to its inputs are calculated.
If the structure is linear, the distribution of these gradients should be
sharply peaked, while for a structure with nonlinearities the distribution will
be more spread out and, potentially, multimodal. To test this assumption, data from an
experimental structure are considered. The structure is tested under different
scenarios, some of which are linear and some nonlinear. The statistics of the
distributions of the gradients for the different scenarios can be used to
identify cases where nonlinearity is present. Moreover, the proposed method
allows one to quantify the nonlinearity: "more nonlinear" scenarios yield
higher values of the standard deviation of the distribution of the gradients.
Related papers
- Koopman-based Deep Learning for Nonlinear System Estimation [1.3791394805787949]
We present a novel data-driven linear estimator based on Koopman operator theory to extract meaningful finite-dimensional representations of complex non-linear systems.
Our estimator is also adaptive to a diffeomorphic transformation of the estimated nonlinear system, which enables it to compute optimal state estimates without re-learning.
arXiv Detail & Related papers (2024-05-01T16:49:54Z)
- Learning Linearized Models from Nonlinear Systems with Finite Data [1.6026317505839445]
We consider the problem of identifying a linearized model when the true underlying dynamics is nonlinear.
We provide a multiple trajectories-based deterministic data acquisition algorithm followed by a regularized least squares algorithm.
Our error bound demonstrates a trade-off between the error due to nonlinearity and the error due to noise, and shows that one can learn the linearized dynamics with arbitrarily small error.
arXiv Detail & Related papers (2023-09-15T22:58:03Z)
- Kalman Filter for Online Classification of Non-Stationary Data [101.26838049872651]
In Online Continual Learning (OCL) a learning system receives a stream of data and sequentially performs prediction and training steps.
We introduce a probabilistic Bayesian online learning model by using a neural representation and a state space model over the linear predictor weights.
In experiments in multi-class classification we demonstrate the predictive ability of the model and its flexibility to capture non-stationarity.
arXiv Detail & Related papers (2023-06-14T11:41:42Z)
- Learning Linear Causal Representations from Interventions under General Nonlinear Mixing [52.66151568785088]
We prove strong identifiability results given unknown single-node interventions without access to the intervention targets.
This is the first instance of causal identifiability from non-paired interventions for deep neural network embeddings.
arXiv Detail & Related papers (2023-06-04T02:32:12Z)
- Exploring Linear Feature Disentanglement For Neural Networks [63.20827189693117]
Non-linear activation functions, e.g., Sigmoid, ReLU, and Tanh, have achieved great success in neural networks (NNs).
Due to the complex non-linear characteristic of samples, the objective of those activation functions is to project samples from their original feature space to a linear separable feature space.
This phenomenon ignites our interest in exploring whether all features need to be transformed by all non-linear functions in current typical NNs.
arXiv Detail & Related papers (2022-03-22T13:09:17Z)
- On the application of generative adversarial networks for nonlinear modal analysis [0.0]
A machine learning scheme is proposed with a view to performing nonlinear modal analysis.
The scheme is focussed on defining a one-to-one mapping from a latent 'modal' space to the natural coordinate space.
The mapping is achieved via the use of the recently-developed cycle-consistent generative adversarial network (cycle-GAN) and an assembly of neural networks.
arXiv Detail & Related papers (2022-03-02T16:46:41Z)
- Learning Nonlinear Waves in Plasmon-induced Transparency [0.0]
We consider a recurrent neural network (RNN) approach to predict the complex propagation of nonlinear solitons in plasmon-induced transparency metamaterial systems.
We prove the prominent agreement of results in simulation and prediction by long short-term memory (LSTM) artificial neural networks.
arXiv Detail & Related papers (2021-07-31T21:21:44Z)
- LQF: Linear Quadratic Fine-Tuning [114.3840147070712]
We present the first method for linearizing a pre-trained model that achieves comparable performance to non-linear fine-tuning.
LQF consists of simple modifications to the architecture, loss function and optimization typically used for classification.
arXiv Detail & Related papers (2020-12-21T06:40:20Z)
- Neural Dynamic Mode Decomposition for End-to-End Modeling of Nonlinear Dynamics [49.41640137945938]
We propose a neural dynamic mode decomposition for estimating a lift function based on neural networks.
With our proposed method, the forecast error is backpropagated through the neural networks and the spectral decomposition.
Our experiments demonstrate the effectiveness of our proposed method in terms of eigenvalue estimation and forecast performance.
arXiv Detail & Related papers (2020-12-11T08:34:26Z)
- Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
- DynNet: Physics-based neural architecture design for linear and nonlinear structural response modeling and prediction [2.572404739180802]
In this study, a physics-based recurrent neural network model is designed that is able to learn the dynamics of linear and nonlinear multiple degrees of freedom systems.
The model is able to estimate a complete set of responses, including displacement, velocity, acceleration, and internal forces.
arXiv Detail & Related papers (2020-07-03T17:05:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.