On the application of generative adversarial networks for nonlinear
modal analysis
- URL: http://arxiv.org/abs/2203.01229v1
- Date: Wed, 2 Mar 2022 16:46:41 GMT
- Title: On the application of generative adversarial networks for nonlinear
modal analysis
- Authors: G. Tsialiamanis, M.D. Champneys, N. Dervilis, D.J. Wagg, K. Worden
- Abstract summary: A machine learning scheme is proposed with a view to performing nonlinear modal analysis.
The scheme is focussed on defining a one-to-one mapping from a latent `modal' space to the natural coordinate space.
The mapping is achieved via the use of the recently-developed cycle-consistent generative adversarial network (cycle-GAN) and an assembly of neural networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Linear modal analysis is a useful and effective tool for the design and
analysis of structures. However, a comprehensive basis for nonlinear modal
analysis remains to be developed. In the current work, a machine learning
scheme is proposed with a view to performing nonlinear modal analysis. The
scheme is focussed on defining a one-to-one mapping from a latent `modal' space
to the natural coordinate space, whilst also imposing orthogonality of the mode
shapes. The mapping is achieved via the use of the recently-developed
cycle-consistent generative adversarial network (cycle-GAN) and an assembly of
neural networks targeted on maintaining the desired orthogonality. The method
is tested on simulated data from structures with cubic nonlinearities and
different numbers of degrees of freedom, and also on data from an experimental
three-degree-of-freedom set-up with a column-bumper nonlinearity. The results
reveal the method's efficiency in separating the `modes'. The method also
provides a nonlinear superposition function, which in most cases has very good
accuracy.
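The two ingredients the scheme combines, a one-to-one (cycle-consistent) mapping and orthogonality of the mode shapes, can be made concrete with a minimal sketch. This is not the authors' implementation: the cycle-GAN generators are stood in by linear maps, and the names `g`, `f`, and `Phi` are hypothetical.

```python
import numpy as np

# Hypothetical linear stand-ins for the two cycle-GAN generators:
# g maps latent 'modal' coordinates to natural coordinates, f maps back.
Phi = np.eye(2)  # candidate mode-shape matrix (columns = mode shapes)

def g(z, Phi):
    """Modal -> natural coordinates (linear stand-in for a generator)."""
    return z @ Phi.T

def f(x, Phi):
    """Natural -> modal coordinates (pseudo-inverse stand-in)."""
    return x @ np.linalg.pinv(Phi).T

def cycle_loss(z, Phi):
    """Cycle-consistency term: f(g(z)) should recover z (one-to-one mapping)."""
    return float(np.mean((f(g(z, Phi), Phi) - z) ** 2))

def orthogonality_penalty(Phi):
    """Frobenius deviation of Phi^T Phi from the identity (orthogonal modes)."""
    return float(np.linalg.norm(Phi.T @ Phi - np.eye(Phi.shape[1]), 'fro'))

z = np.random.default_rng(0).standard_normal((8, 2))
total = cycle_loss(z, Phi) + orthogonality_penalty(Phi)
```

In the paper these objectives are enforced during adversarial training of neural-network generators; the explicit losses above serve only to make the two constraints concrete.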
Related papers
- The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) training objectives are non-convex and difficult to analyse directly.
In this paper, the use of convex (Lasso-type) recovery models for neural networks is examined.
It is shown that the stationary points of the non-convex objective can be characterized as the global optima of subsampled convex programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - Efficient Interpretable Nonlinear Modeling for Multiple Time Series [5.448070998907116]
This paper proposes an efficient nonlinear modeling approach for multiple time series.
It incorporates nonlinear interactions among different time-series variables.
Experimental results show that the proposed algorithm improves the identification of the support of the VAR coefficients in a parsimonious manner.
arXiv Detail & Related papers (2023-09-29T11:42:59Z) - Data-driven Nonlinear Parametric Model Order Reduction Framework using
Deep Hierarchical Variational Autoencoder [5.521324490427243]
A data-driven parametric model order reduction (MOR) method using a deep artificial neural network is proposed.
LSH-VAE is capable of performing nonlinear parametric MOR for nonlinear dynamic systems with a significant number of degrees of freedom.
arXiv Detail & Related papers (2023-07-10T02:44:53Z) - On the Detection and Quantification of Nonlinearity via Statistics of
the Gradients of a Black-Box Model [0.0]
Detection and identification of nonlinearity is a task of high importance for structural dynamics.
A method to detect nonlinearity is proposed, based on the distribution of the gradients of a data-driven model.
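A minimal sketch of the idea behind such a gradient-based test follows, assuming a scalar black-box model and central finite-difference gradients; `gradient_spread` is a hypothetical name, not the paper's method.

```python
import numpy as np

def gradient_spread(model, X, eps=1e-5):
    """Central finite-difference gradients of a scalar black-box model at each
    sample; a (near-)zero spread across samples indicates (near-)linearity."""
    grads = np.array([
        [(model(x + eps * e) - model(x - eps * e)) / (2 * eps)
         for e in np.eye(len(x))]
        for x in X
    ])
    return grads.std(axis=0)

X = np.random.default_rng(1).standard_normal((20, 2))
linear_spread = gradient_spread(lambda x: 2.0 * x[0] - x[1], X)  # constant gradient
cubic_spread = gradient_spread(lambda x: x[0] ** 3, X)           # gradient varies
```

For a linear model the gradient is the same at every sample, so its spread is (numerically) zero; a nonlinear model produces gradients whose distribution has non-trivial spread.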
arXiv Detail & Related papers (2023-02-15T23:15:22Z) - Exploring Linear Feature Disentanglement For Neural Networks [63.20827189693117]
Non-linear activation functions, e.g., Sigmoid, ReLU, and Tanh, have achieved great success in neural networks (NNs).
Due to the complex non-linear characteristics of samples, the objective of these activation functions is to project samples from their original feature space to a linearly separable feature space.
This phenomenon ignites our interest in exploring whether all features need to be transformed by all non-linear functions in current typical NNs.
arXiv Detail & Related papers (2022-03-22T13:09:17Z) - A Latent Restoring Force Approach to Nonlinear System Identification [0.0]
This work suggests an approach based on Bayesian filtering to extract and identify the contribution of an unknown nonlinear term in the system.
The approach is demonstrated to be effective in both a simulated case study and on an experimental benchmark dataset.
arXiv Detail & Related papers (2021-09-22T12:21:16Z) - Towards extraction of orthogonal and parsimonious non-linear modes from
turbulent flows [0.0]
We propose a deep probabilistic-neural-network architecture for learning a minimal and near-orthogonal set of non-linear modes.
Our approach is based on $\beta$-variational autoencoders ($\beta$-VAEs) and convolutional neural networks (CNNs).
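For reference, the $\beta$-VAE objective underlying this approach is the usual reconstruction term plus a $\beta$-weighted KL divergence. A minimal sketch, assuming a squared-error reconstruction term and a diagonal-Gaussian posterior (the network architecture is omitted):

```python
import numpy as np

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    """beta-VAE objective: squared-error reconstruction plus a beta-weighted
    KL divergence between the diagonal-Gaussian posterior N(mu, exp(logvar))
    and the standard-normal prior N(0, I)."""
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    kl = -0.5 * np.mean(np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1))
    return recon + beta * kl
```

Raising $\beta$ above 1 trades reconstruction accuracy for a more disentangled, and here more nearly orthogonal, latent space.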
arXiv Detail & Related papers (2021-09-03T13:38:51Z) - GELATO: Geometrically Enriched Latent Model for Offline Reinforcement
Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out-of-distribution samples and the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z) - Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable semi-implicit variational inference (SIVI) method.
Our method optimises a rigorous lower bound on the evidence with low-variance gradient estimates.
arXiv Detail & Related papers (2021-01-15T11:39:09Z) - Neural Dynamic Mode Decomposition for End-to-End Modeling of Nonlinear
Dynamics [49.41640137945938]
We propose a neural dynamic mode decomposition for estimating a lift function based on neural networks.
With our proposed method, the forecast error is backpropagated through the neural networks and the spectral decomposition.
Our experiments demonstrate the effectiveness of our proposed method in terms of eigenvalue estimation and forecast performance.
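As a point of reference, classical (exact) dynamic mode decomposition, the linear procedure that the neural variant extends, can be sketched as follows. A toy linear system supplies the snapshots; this is not the paper's neural implementation.

```python
import numpy as np

def dmd_eigenvalues(X, Xp, r):
    """Exact DMD: fit a linear operator A with Xp ~= A X via a rank-r SVD of X
    and return the eigenvalues of its reduced-order projection."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ Xp @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)

# Snapshots of a linear system x_{k+1} = A x_k with A = diag(0.9, 0.5).
A = np.diag([0.9, 0.5])
states = [np.array([1.0, 1.0])]
for _ in range(10):
    states.append(A @ states[-1])
S = np.column_stack(states)
eigs = dmd_eigenvalues(S[:, :-1], S[:, 1:], r=2)
```

For exactly linear dynamics the recovered eigenvalues match those of A; the neural method learns a lifting so that a similar spectral decomposition applies to nonlinear dynamics.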
arXiv Detail & Related papers (2020-12-11T08:34:26Z) - Eigendecomposition-Free Training of Deep Networks for Linear
Least-Square Problems [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results.
arXiv Detail & Related papers (2020-04-15T04:29:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.