Revisiting time-variant complex conjugate matrix equations with their corresponding real field time-variant large-scale linear equations, neural hypercomplex numbers space compressive approximation approach
- URL: http://arxiv.org/abs/2408.14057v1
- Date: Mon, 26 Aug 2024 07:33:45 GMT
- Title: Revisiting time-variant complex conjugate matrix equations with their corresponding real field time-variant large-scale linear equations, neural hypercomplex numbers space compressive approximation approach
- Authors: Jiakuang He, Dongqing Wu
- Abstract summary: Time-variant complex conjugate matrix equations need to be transformed into corresponding real field time-variant large-scale linear equations.
In this paper, zeroing neural dynamic models based on complex field error (called Con-CZND1) and based on real field error (called Con-CZND2) are proposed.
Numerical experiments verify the effectiveness of the Con-CZND1 conj model and highlight the importance of NHNSCAA.
- Score: 1.1970409518725493
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale linear equations and high dimensionality have been hot topics in deep learning, machine learning, control, and scientific computing. Because of the special characteristics of the conjugate operation, time-variant complex conjugate matrix equations need to be transformed into corresponding real field time-variant large-scale linear equations. In this paper, zeroing neural dynamic models based on complex field error (called Con-CZND1) and on real field error (called Con-CZND2) are proposed for in-depth analysis. Con-CZND1 has fewer elements because it processes complex matrices directly. Con-CZND2 must be transformed into the real field, has more elements, and its performance is affected by the main diagonal dominance of the coefficient matrices. A neural hypercomplex numbers space compressive approximation approach (NHNSCAA) is innovatively proposed, and the Con-CZND1 conj model is then constructed. Numerical experiments verify the effectiveness of the Con-CZND1 conj model and highlight the importance of NHNSCAA.
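As a concrete illustration of the real-field transformation and the zeroing neural dynamics (ZND) design the abstract describes, the following is a minimal sketch. It is not the authors' code: the test problem, the step size `h`, and the gain `gamma` are illustrative assumptions, and the equation form A(t) x + B(t) conj(x) = C(t) is used as a simple vector instance of a complex conjugate matrix equation.

```python
import numpy as np

def real_field_system(A, B, C):
    """Return (M, c) with M @ [Re(x); Im(x)] = c equivalent to A x + B conj(x) = C."""
    Ar, Ai = A.real, A.imag
    Br, Bi = B.real, B.imag
    # Splitting x = u + i v and matching real/imaginary parts gives a
    # real 2n x 2n block system in [u; v].
    M = np.block([[Ar + Br, -Ai + Bi],
                  [Ai + Bi,  Ar - Br]])
    c = np.concatenate([C.real, C.imag])
    return M, c

def znd_solve(A_of_t, B_of_t, C_of_t, t0=0.0, t1=1.0, h=1e-3, gamma=50.0):
    """Euler-forward ZND on the real-field system; returns x(t1) as a complex vector.

    ZND design formula on the residual e(t) = M(t) y(t) - c(t):
        de/dt = -gamma * e  =>  y' = M^{-1} (c' - M' y - gamma * (M y - c)),
    with M', c' approximated by forward differences.
    """
    M, c = real_field_system(A_of_t(t0), B_of_t(t0), C_of_t(t0))
    y = np.linalg.solve(M, c)  # start at the exact solution at t0
    t = t0
    while t < t1:
        M, c = real_field_system(A_of_t(t), B_of_t(t), C_of_t(t))
        M2, c2 = real_field_system(A_of_t(t + h), B_of_t(t + h), C_of_t(t + h))
        dM, dc = (M2 - M) / h, (c2 - c) / h
        y = y + h * np.linalg.solve(M, dc - dM @ y - gamma * (M @ y - c))
        t += h
    n = y.size // 2
    return y[:n] + 1j * y[n:]

# Hypothetical test problem built from a known trajectory (not from the paper).
I2 = np.eye(2)
A = lambda t: (3 + 0.1j) * I2
B = lambda t: 0.2 * I2
x_true = lambda t: np.array([np.cos(t) + 1j * np.sin(t),
                             np.sin(2 * t) - 1j * np.cos(t)])
C = lambda t: A(t) @ x_true(t) + B(t) @ np.conj(x_true(t))
x_end = znd_solve(A, B, C)  # should track x_true(1.0) closely
```

Note the trade-off the abstract points at: the real-field system doubles the dimension (a 2x2 complex system becomes 4x4 real), which is exactly why a model working directly on complex matrices, like Con-CZND1, handles fewer elements.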
Related papers
- Discrete the solving model of time-variant standard Sylvester-conjugate matrix equations using Euler-forward formula [1.1970409518725493]
Time-variant Sylvester-conjugate matrix equations are presented as early time-variant versions of the complex conjugate matrix equations.
Current solving methods include the Con-CZND1 and Con-CZND2 models, both of which use ode45 for the continuous models.
Based on Euler-forward discretization, the Con-DZND1-2i and Con-DZND2-2i models are proposed.
arXiv Detail & Related papers (2024-11-04T17:58:31Z)
- Zeroing neural dynamics solving time-variant complex conjugate matrix equation [1.1970409518725493]
Complex conjugate matrix equations (CCME) have aroused the interest of many researchers because of their role in computation and antilinear systems.
Existing research is dominated by time-invariant solving methods, but theories for solving the time-variant version are lacking.
In this paper, zeroing neural dynamics (ZND) is applied to solve its time-variant version.
arXiv Detail & Related papers (2024-06-18T16:50:26Z)
- Quantum algorithms for linear and non-linear fractional reaction-diffusion equations [3.409316136755434]
We investigate efficient quantum algorithms for nonlinear fractional reaction-diffusion equations with periodic boundary conditions.
We present a novel algorithm that combines the linear combination of Hamiltonian simulation technique with the interaction picture formalism.
arXiv Detail & Related papers (2023-10-29T04:48:20Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art and yields a relative gain of 11.5% averaged on seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- Equivariant Graph Mechanics Networks with Constraints [83.38709956935095]
We propose Graph Mechanics Network (GMN) which is efficient, equivariant and constraint-aware.
GMN represents, by generalized coordinates, the forward kinematics information (positions and velocities) of a structural object.
Extensive experiments support the advantages of GMN compared to the state-of-the-art GNNs in terms of prediction accuracy, constraint satisfaction and data efficiency.
arXiv Detail & Related papers (2022-03-12T14:22:14Z)
- Fixed Depth Hamiltonian Simulation via Cartan Decomposition [59.20417091220753]
We present a constructive algorithm for generating quantum circuits with time-independent depth.
We highlight our algorithm for special classes of models, including Anderson localization in one dimensional transverse field XY model.
In addition to providing exact circuits for a broad set of spin and fermionic models, our algorithm provides broad analytic and numerical insight into optimal Hamiltonian simulations.
arXiv Detail & Related papers (2021-04-01T19:06:00Z)
- Surrogate Models for Optimization of Dynamical Systems [0.0]
This paper provides a smart data driven mechanism to construct low dimensional surrogate models.
These surrogate models reduce the computational time for solution of the complex optimization problems.
arXiv Detail & Related papers (2021-01-22T14:09:30Z)
- A General Framework for Hypercomplex-valued Extreme Learning Machines [2.055949720959582]
This paper aims to establish a framework for extreme learning machines (ELMs) on general hypercomplex algebras.
We present a framework for operating in these algebras through real-valued linear algebra operations.
Experiments highlight the excellent performance of hypercomplex-valued ELMs in treating high-dimensional data.
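The idea of operating on a hypercomplex algebra through real-valued linear algebra can be illustrated with the simplest such algebra, the complex numbers. This is a hedged sketch, not the paper's implementation; quaternions admit an analogous 4x4 real representation.

```python
import numpy as np

def as_real(z: complex) -> np.ndarray:
    """Real 2x2 representation of a + bi: [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

def from_real(m: np.ndarray) -> complex:
    """Recover a + bi from its first column."""
    return complex(m[0, 0], m[1, 0])

# Multiplication in the algebra becomes ordinary real matrix multiplication,
# so a hypercomplex-valued layer can be evaluated with real BLAS operations.
z, w = 1 + 2j, 3 - 1j
assert np.allclose(as_real(z) @ as_real(w), as_real(z * w))
assert from_real(as_real(z) @ as_real(w)) == z * w
```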
arXiv Detail & Related papers (2021-01-15T15:22:05Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Measuring Model Complexity of Neural Networks with Curve Activation Functions [100.98319505253797]
We propose the linear approximation neural network (LANN) to approximate a given deep model with curve activation function.
We experimentally explore the training process of neural networks and detect overfitting.
We find that the $L_1$ and $L_2$ regularizations suppress the increase of model complexity.
arXiv Detail & Related papers (2020-06-16T07:38:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.