Data-Driven Antenna Miniaturization: A Knowledge-Based System Integrating Quantum PSO and Predictive Machine Learning Models
- URL: http://arxiv.org/abs/2505.22440v1
- Date: Wed, 28 May 2025 15:04:36 GMT
- Title: Data-Driven Antenna Miniaturization: A Knowledge-Based System Integrating Quantum PSO and Predictive Machine Learning Models
- Authors: Khan Masood Parvez, Sk Md Abidar Rahaman, Ali Shiri Sichani
- Abstract summary: This study integrates Quantum-Behaved Dynamic Particle Swarm Optimization with HFSS simulations to accelerate antenna design. The QDPSO algorithm autonomously optimized loop dimensions in 11.53 seconds, achieving a resonance frequency of 1.4208 GHz. The system enables precise specification of performance targets with automated generation of fabrication-ready parameters.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid evolution of wireless technologies necessitates automated design frameworks to address antenna miniaturization and performance optimization within constrained development cycles. This study demonstrates a machine learning enhanced workflow integrating Quantum-Behaved Dynamic Particle Swarm Optimization (QDPSO) with ANSYS HFSS simulations to accelerate antenna design. The QDPSO algorithm autonomously optimized loop dimensions in 11.53 seconds, achieving a resonance frequency of 1.4208 GHz, a 12.7 percent reduction compared to conventional 1.60 GHz designs. Machine learning models (SVM, Random Forest, XGBoost, and Stacked ensembles) predicted resonance frequencies in 0.75 seconds using 936 simulation datasets, with stacked models showing superior training accuracy (R2 = 0.9825) and SVM demonstrating optimal validation performance (R2 = 0.7197). The complete design cycle, encompassing optimization, prediction, and ANSYS validation, required 12.42 minutes on standard desktop hardware (Intel i5-8500, 16GB RAM), contrasting sharply with the 50-hour benchmark of PSADEA-based approaches. This 240-fold acceleration eliminates traditional trial-and-error methods that often extend beyond seven expert-led days. The system enables precise specification of performance targets with automated generation of fabrication-ready parameters, particularly benefiting compact consumer devices requiring rapid frequency tuning. By bridging AI-driven optimization with CAD validation, this framework reduces engineering workloads while ensuring production-ready designs, establishing a scalable paradigm for next-generation RF systems in 6G and IoT applications.
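The QDPSO loop described in the abstract can be pictured with a short, self-contained sketch. The paper does not publish its update equations, so this uses the standard quantum-behaved PSO update with a decaying contraction-expansion coefficient; the toy 30/L frequency model, the bounds, and all parameter values are hypothetical stand-ins for the actual HFSS objective, not the authors' implementation.

```python
import math
import random

def qpso(objective, bounds, n_particles=20, n_iters=100, seed=0):
    """Minimal quantum-behaved PSO (QPSO) sketch for a 1-D design variable.
    The paper's dynamic variant (QDPSO) is not fully specified in the
    abstract; this uses the standard QPSO position update with a linearly
    decaying contraction-expansion coefficient beta."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    pbest = xs[:]                                 # personal best positions
    pbest_f = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]         # global best
    for t in range(n_iters):
        beta = 1.0 - 0.5 * t / n_iters            # decays from 1.0 to 0.5
        mbest = sum(pbest) / n_particles          # mean of personal bests
        for i in range(n_particles):
            phi = rng.random()
            p = phi * pbest[i] + (1 - phi) * gbest    # local attractor
            u = 1.0 - rng.random()                    # u in (0, 1]
            step = beta * abs(mbest - xs[i]) * math.log(1.0 / u)
            xs[i] = p + step if rng.random() < 0.5 else p - step
            xs[i] = min(max(xs[i], lo), hi)           # keep within bounds
            f = objective(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], f
                if f < gbest_f:
                    gbest, gbest_f = xs[i], f
    return gbest, gbest_f

# Toy stand-in for the HFSS solver: assume resonance frequency falls
# inversely with loop length L in cm (purely illustrative).
target_ghz = 1.4208
best_L, err = qpso(lambda L: abs(30.0 / L - target_ghz), bounds=(10.0, 40.0))
```

In the paper's workflow this inner loop would call the ML surrogate (or HFSS itself) instead of the toy closed-form model, returning the loop dimensions whose predicted resonance is closest to the target.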
Related papers
- Accelerating RF Power Amplifier Design via Intelligent Sampling and ML-Based Parameter Tuning [0.0]
This paper presents a machine learning-accelerated optimization framework for RF power amplifier design. It reduces simulation requirements by 65% while maintaining $\pm0.4$ dBm accuracy for the majority of modes. The integrated solution delivers a 58.24% to 77.78% reduction in simulation time through automated GUI-based iterations.
arXiv Detail & Related papers (2025-07-16T05:52:24Z) - Research on Low-Latency Inference and Training Efficiency Optimization for Graph Neural Network and Large Language Model-Based Recommendation Systems [4.633338944734091]
This study considers computational bottlenecks involved in hybrid Graph Neural Network (GNN) and Large Language Model (LLM)-based recommender systems (ReS). It recommends the use of FPGA as well as LoRA for real-time deployment.
arXiv Detail & Related papers (2025-06-21T03:10:50Z) - Development of a Novel High-Performance Balanced Homodyne Detector [4.304487989897226]
This work presents the first application of optimization simulation methodology to balanced homodyne detector (BHD) circuit design. The AC amplifier circuit was implemented through ADS high-frequency simulations using two ABA-52563 RF amplifiers in a cascaded configuration. The implemented system demonstrates a collective generation rate of 20.0504 Gbps across four parallel channels, with all output streams successfully passing NIST SP 800-22 statistical testing requirements.
arXiv Detail & Related papers (2025-03-05T09:24:59Z) - Reservoir Network with Structural Plasticity for Human Activity Recognition [2.355460994057843]
The echo state network (ESN) is a class of recurrent neural networks that can identify unique patterns in time-series data and predict future events. In this work, a custom-designed neuromorphic chip based on ESNs targeting edge devices is proposed. The proposed system supports various learning mechanisms, including structural plasticity and synaptic plasticity, locally on-chip.
arXiv Detail & Related papers (2025-03-01T07:57:22Z) - Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - Hardware-Software Co-optimised Fast and Accurate Deep Reconfigurable Spiking Inference Accelerator Architecture Design Methodology [2.968768532937366]
Spiking Neural Networks (SNNs) have emerged as a promising approach to improve the energy efficiency of machine learning models.
We develop a hardware-software co-optimisation strategy to port software-trained deep neural networks (DNN) to reduced-precision spiking models.
arXiv Detail & Related papers (2024-10-07T05:04:13Z) - Accelerating High-Fidelity Waveform Generation via Adversarial Flow Matching Optimization [37.35829410807451]
This paper introduces PeriodWave-Turbo, a high-fidelity and highly efficient waveform generation model based on adversarial flow matching optimization.
It only requires 1,000 steps of fine-tuning to achieve state-of-the-art performance across various objective metrics.
By scaling up the backbone of PeriodWave from 29M to 70M parameters for improved generalization, PeriodWave-Turbo achieves unprecedented performance.
arXiv Detail & Related papers (2024-08-15T08:34:00Z) - Quasar-ViT: Hardware-Oriented Quantization-Aware Architecture Search for Vision Transformers [56.37495946212932]
Vision transformers (ViTs) have demonstrated their superior accuracy for computer vision tasks compared to convolutional neural networks (CNNs).
This work proposes Quasar-ViT, a hardware-oriented quantization-aware architecture search framework for ViTs.
arXiv Detail & Related papers (2024-07-25T16:35:46Z) - DiffNAS: Bootstrapping Diffusion Models by Prompting for Better
Architectures [63.12993314908957]
We propose a base model search approach, denoted "DiffNAS".
We leverage GPT-4 as a supernet to expedite the search, supplemented with a search memory to enhance the results.
Rigorous experimentation corroborates that our algorithm can improve search efficiency by a factor of 2 under GPT-based scenarios.
arXiv Detail & Related papers (2023-10-07T09:10:28Z) - LL-GNN: Low Latency Graph Neural Networks on FPGAs for High Energy
Physics [45.666822327616046]
This work presents a novel reconfigurable architecture for Low Latency Graph Neural Network (LL-GNN) designs for particle detectors.
The LL-GNN design advances the next generation of trigger systems by enabling sophisticated algorithms to process experimental data efficiently.
arXiv Detail & Related papers (2022-09-28T12:55:35Z) - VAQF: Fully Automatic Software-hardware Co-design Framework for Low-bit
Vision Transformer [121.85581713299918]
We propose VAQF, a framework that builds inference accelerators on FPGA platforms for quantized Vision Transformers (ViTs).
Given the model structure and the desired frame rate, VAQF will automatically output the required quantization precision for activations.
This is the first time quantization has been incorporated into ViT acceleration on FPGAs.
arXiv Detail & Related papers (2022-01-17T20:27:52Z) - Data-Driven Offline Optimization For Architecting Hardware Accelerators [89.68870139177785]
We develop a data-driven offline optimization method for designing hardware accelerators, dubbed PRIME.
PRIME improves performance upon state-of-the-art simulation-driven methods by about 1.54x and 1.20x, while considerably reducing the required total simulation time by 93% and 99%, respectively.
In addition, PRIME also architects effective accelerators for unseen applications in a zero-shot setting, outperforming simulation-based methods by 1.26x.
arXiv Detail & Related papers (2021-10-20T17:06:09Z) - ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare the estimation accuracy and fidelity of the generated mixed models and statistical models against the roofline model and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
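Both the headline paper and the ANNETTE entry above rely on stacked models, where a meta-learner combines the predictions of several base learners. The sketch below is a minimal, stdlib-only illustration of the stacking idea on synthetic loop-length/frequency data; the two base learners (a 1-D linear fit and k-nearest-neighbors) and the single blend weight are deliberate simplifications, not the SVM/Random Forest/XGBoost stack used in the paper.

```python
import random

def linfit(xs, ys):
    """Closed-form 1-D least squares; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

def knn(xs, ys, x, k=5):
    """Mean target value of the k nearest training points."""
    idx = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x))[:k]
    return sum(ys[i] for i in idx) / k

rng = random.Random(1)
# Synthetic "simulation dataset": loop length (cm) -> resonance (GHz),
# using the same illustrative 30/L model plus noise (not the paper's data).
X = [rng.uniform(10.0, 40.0) for _ in range(200)]
Y = [30.0 / x + rng.gauss(0.0, 0.02) for x in X]
Xtr, Ytr, Xho, Yho = X[:150], Y[:150], X[150:], Y[150:]

# Level 0: base-model predictions on a held-out fold.
b, a = linfit(Xtr, Ytr)
p1 = [b * x + a for x in Xho]
p2 = [knn(Xtr, Ytr, x) for x in Xho]

# Level 1: a single blend weight w minimizing the squared error of
# w*p1 + (1-w)*p2 on the held-out fold (closed form).
d = [u - v for u, v in zip(p1, p2)]
r = [y - v for y, v in zip(Yho, p2)]
w = sum(di * ri for di, ri in zip(d, r)) / sum(di * di for di in d)

def stacked_predict(x):
    """Blend the two base learners with the fitted meta-weight."""
    return w * (b * x + a) + (1.0 - w) * knn(Xtr, Ytr, x)
```

A production version would use richer base learners and fit the meta-model on out-of-fold predictions across several folds, but the two-level structure is the same.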
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.