Comparison of Neural Network based Soft Computing Techniques for
Electromagnetic Modeling of a Microstrip Patch Antenna
- URL: http://arxiv.org/abs/2109.10065v1
- Date: Tue, 21 Sep 2021 10:08:22 GMT
- Title: Comparison of Neural Network based Soft Computing Techniques for
Electromagnetic Modeling of a Microstrip Patch Antenna
- Authors: Yuvraj Singh Malhi and Navneet Gupta (Birla Institute of Technology
and Science, Pilani)
- Abstract summary: 22 different combinations of networks and training algorithms are used to predict the dimensions of a rectangular microstrip antenna.
It is observed that the Reduced Radial Basis network is the most accurate network and Scaled Conjugate Gradient is the most reliable algorithm for electromagnetic modelling.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a comparison of various neural networks and training
algorithms, based on accuracy, speed, and consistency, for antenna modelling. Using
MATLAB's nntool, 22 different combinations of networks and training algorithms are
used to predict the dimensions of a rectangular microstrip antenna, with the
dielectric constant, height of the substrate, and frequency of operation as inputs.
Networks are compared and characterized by accuracy, mean square error, and
training time; algorithms are analyzed by their accuracy, speed, reliability, and
smoothness of the training process. Finally, these results are analyzed, and
recommendations are made for each neural network and algorithm based on their uses,
advantages, and disadvantages. For example, it is observed that the Reduced Radial
Basis network is the most accurate network and Scaled Conjugate Gradient is the
most reliable algorithm for electromagnetic modelling. This paper will help a
researcher find the optimum network and algorithm directly, without time-consuming
experimentation.
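The abstract maps the dielectric constant eps_r, substrate height h, and operating frequency f_0 to the width W and length L of a rectangular patch. For reference, the standard transmission-line design equations (Balanis) relating these quantities are reproduced below; the abstract does not say how the training data were generated, so treating these formulas as the source of the targets is an assumption here.

  \begin{aligned}
  W &= \frac{c}{2 f_0}\,\sqrt{\frac{2}{\varepsilon_r + 1}},\\
  \varepsilon_{\mathrm{eff}} &= \frac{\varepsilon_r + 1}{2} + \frac{\varepsilon_r - 1}{2}\left(1 + 12\,\frac{h}{W}\right)^{-1/2},\\
  \Delta L &= 0.412\,h\,\frac{\left(\varepsilon_{\mathrm{eff}} + 0.3\right)\left(\frac{W}{h} + 0.264\right)}{\left(\varepsilon_{\mathrm{eff}} - 0.258\right)\left(\frac{W}{h} + 0.8\right)},\\
  L &= \frac{c}{2 f_0 \sqrt{\varepsilon_{\mathrm{eff}}}} - 2\,\Delta L.
  \end{aligned}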
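The paper works in MATLAB's nntool GUI, so no script accompanies it. The following is a minimal command-line sketch of the same kind of experiment, assuming the transmission-line equations above as the data source; the sample count, parameter ranges, and hidden-layer size are assumptions, with fitnet/'trainscg' standing in for a feedforward network trained by Scaled Conjugate Gradient and newrb standing in for the reduced radial basis network.

  % Synthetic dataset: (eps_r, h, f0) -> (W, L) via the transmission-line model.
  c  = 3e8;                                   % speed of light (m/s)
  N  = 2000;                                  % number of samples (assumed)
  er = 2.2 + (10.2 - 2.2) .* rand(1, N);      % dielectric constant (assumed range)
  h  = (0.5 + 2.5 .* rand(1, N)) * 1e-3;      % substrate height in m (assumed range)
  f0 = (1 + 9 .* rand(1, N)) * 1e9;           % operating frequency in Hz (assumed range)

  W    = c ./ (2 * f0) .* sqrt(2 ./ (er + 1));                     % patch width
  eeff = (er + 1) ./ 2 + (er - 1) ./ 2 .* (1 + 12 .* h ./ W).^(-0.5);
  dL   = 0.412 .* h .* ((eeff + 0.3) .* (W ./ h + 0.264)) ...
                    ./ ((eeff - 0.258) .* (W ./ h + 0.8));          % fringing extension
  L    = c ./ (2 * f0 .* sqrt(eeff)) - 2 .* dL;                     % patch length

  X = [er; h; f0];     % 3 x N inputs
  T = [W; L];          % 2 x N targets

  % Feedforward network trained with Scaled Conjugate Gradient ('trainscg').
  netFF = fitnet(10, 'trainscg');             % 10 hidden neurons (assumed)
  netFF = train(netFF, X, T);
  YFF   = netFF(X);
  mseFF = mean((T(:) - YFF(:)).^2);

  % Radial basis network with few neurons (newrb adds neurons incrementally);
  % inputs and targets are rescaled because newrb does no preprocessing of its own.
  [Xn, psX] = mapminmax(X);
  [Tn, psT] = mapminmax(T);
  netRB = newrb(Xn, Tn, 1e-4, 1.0, 50, 10);   % goal, spread, max neurons, display step
  YRB   = mapminmax('reverse', netRB(Xn), psT);
  mseRB = mean((T(:) - YRB(:)).^2);

  fprintf('MSE feedforward/SCG: %.3e   MSE reduced radial basis: %.3e\n', mseFF, mseRB);

Wrapping train and newrb in tic/toc and evaluating accuracy on a held-out split would reproduce the two axes of the paper's comparison.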
Related papers
- Improving accuracy of tree-tensor network approach by optimization of network structure [0.0]
We analyze how detailed updating schemes in the structural optimization algorithm affect its computational accuracy.
We find that, for the random XY-exchange model, the algorithm achieves improved accuracy, and the variant that selects the local network structure is notably effective.
arXiv Detail & Related papers (2025-01-26T13:11:30Z) - Chebyshev Spectral Neural Networks for Solving Partial Differential Equations [0.0]
The study uses a feedforward neural network model and error backpropagation principles, utilizing automatic differentiation (AD) to compute the loss function.
The numerical efficiency and accuracy of the CSNN model are investigated through testing on elliptic partial differential equations, and it is compared with the well-known Physics-Informed Neural Network (PINN) method.
arXiv Detail & Related papers (2024-06-06T05:31:45Z) - A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive
Coding Networks [65.34977803841007]
Predictive coding networks are neuroscience-inspired models with roots in both Bayesian statistics and neuroscience.
We show how simply changing the temporal scheduling of the update rule for the synaptic weights leads to an algorithm that is much more efficient and stable than the original one.
arXiv Detail & Related papers (2022-11-16T00:11:04Z) - An Improved Structured Mesh Generation Method Based on Physics-informed
Neural Networks [13.196871939441273]
As numerical algorithms become more efficient and computers become more powerful, the percentage of time devoted to mesh generation becomes higher.
In this paper, we present an improved structured mesh generation method.
The method formulates the meshing problem as a global optimization problem related to a physics-informed neural network.
arXiv Detail & Related papers (2022-10-18T02:45:14Z) - Scalable computation of prediction intervals for neural networks via
matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation
Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction of the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions while achieving comparable error bounds in both theory and practice.
arXiv Detail & Related papers (2021-04-03T09:08:12Z) - Accuracy of neural networks for the simulation of chaotic dynamics:
precision of training data vs precision of the algorithm [0.0]
We simulate the Lorenz system with different precisions using three different neural network techniques adapted to time series.
Our results show that the ESN network is better at accurately predicting the dynamics of the system.
arXiv Detail & Related papers (2020-07-08T17:25:37Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale problems with a deep neural network as the predictive model.
The proposed algorithm requires far fewer communication rounds in theory.
Experiments on several datasets demonstrate its effectiveness and confirm the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Channel Assignment in Uplink Wireless Communication using Machine
Learning Approach [54.012791474906514]
This letter investigates a channel assignment problem in uplink wireless communication systems.
Our goal is to maximize the sum rate of all users subject to integer channel assignment constraints.
Due to high computational complexity, machine learning approaches are employed to obtain computationally efficient solutions.
arXiv Detail & Related papers (2020-01-12T15:54:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.