Advanced Space Mapping Technique Integrating a Shared Coarse Model for Multistate Tuning-Driven Multiphysics Optimization of Tunable Filters
- URL: http://arxiv.org/abs/2507.14220v1
- Date: Wed, 16 Jul 2025 11:47:35 GMT
- Title: Advanced Space Mapping Technique Integrating a Shared Coarse Model for Multistate Tuning-Driven Multiphysics Optimization of Tunable Filters
- Authors: Haitian Hu, Wei Zhang, Feng Feng, Zhiguo Zhang, Qi-Jun Zhang
- Abstract summary: This article introduces an advanced space mapping (SM) technique that applies a shared electromagnetic (EM)-based coarse model for multistate tuning-driven multiphysics optimization of tunable filters. The SM method combines the computational efficiency of EM single-physics simulations with the precision of multiphysics simulations. The proposed overall surrogate model comprises multiple subsurrogate models, each consisting of one shared coarse model and two distinct mapping neural networks.
- Score: 5.134282941677413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article introduces an advanced space mapping (SM) technique that applies a shared electromagnetic (EM)-based coarse model for multistate tuning-driven multiphysics optimization of tunable filters. The SM method combines the computational efficiency of EM single-physics simulations with the precision of multiphysics simulations. The shared coarse model is based on EM single-physics responses corresponding to various nontunable design parameter values. Conversely, the fine model is implemented to delineate the behavior of multiphysics responses with respect to both nontunable and tunable design parameter values. The proposed overall surrogate model comprises multiple subsurrogate models, each consisting of one shared coarse model and two distinct mapping neural networks. The responses from the shared coarse model in the EM single-physics field offer a suitable approximation of the fine responses in the multiphysics field, whereas the mapping neural networks facilitate the transition from the EM single-physics field to the multiphysics field. Each subsurrogate model maintains consistent nontunable design parameter values but possesses unique tunable design parameter values. By developing multiple subsurrogate models, optimization can be performed simultaneously for each tuning state. Nontunable design parameter values are constrained by all tuning states, whereas tunable design parameter values are confined to their respective tuning states. This optimization technique simultaneously accounts for all tuning states to fulfill the necessary multiple-tuning-state requirements. Multiple EM and multiphysics training samples are generated concurrently to develop the surrogate model. Compared with existing direct multiphysics parameterized modeling techniques, our proposed method achieves superior multiphysics modeling accuracy with fewer training samples and reduced computational costs.
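To make the surrogate architecture concrete, here is a minimal Python sketch of how one subsurrogate per tuning state could be organized. It is a sketch under stated assumptions, not the paper's implementation: the names (SubSurrogate, input_map, output_map, multistate_objective) are illustrative, and simple closed-form callables stand in for the shared EM coarse model and the two trained mapping neural networks.

```python
import numpy as np

class SubSurrogate:
    """One tuning state: the shared coarse model plus two mapping networks."""

    def __init__(self, shared_coarse, input_map, output_map, x_tunable):
        self.shared_coarse = shared_coarse      # shared EM single-physics coarse model
        self.input_map = input_map              # mapping NN 1: design params -> coarse inputs
        self.output_map = output_map            # mapping NN 2: coarse response -> multiphysics response
        self.x_tunable = np.asarray(x_tunable)  # tunable values fixed for this tuning state

    def response(self, x_nontunable):
        # Nontunable parameters are shared by all states; tunable ones are per-state.
        x = np.concatenate([np.asarray(x_nontunable), self.x_tunable])
        x_coarse = self.input_map(x)             # map into the coarse parameter space
        r_coarse = self.shared_coarse(x_coarse)  # fast EM single-physics evaluation
        return self.output_map(r_coarse)         # correction toward the multiphysics response

# Toy stand-ins so the sketch runs; in the paper these come from EM and
# multiphysics training samples and trained neural networks, not formulas.
shared_coarse = lambda x: np.tanh(x)
input_map = lambda x: 0.9 * x
output_map = lambda r: r + 0.05

# One subsurrogate per tuning state; only the tunable values differ.
states = [SubSurrogate(shared_coarse, input_map, output_map, [f])
          for f in (0.1, 0.5, 0.9)]

def multistate_objective(x_nontunable, targets):
    # Nontunable values are judged against every tuning state at once.
    return sum(np.sum((s.response(x_nontunable) - t) ** 2)
               for s, t in zip(states, targets))

targets = [np.full(3, 0.5)] * 3  # placeholder per-state response specifications
print(multistate_objective(np.array([1.0, 2.0]), targets))
```

The objective mirrors the constraint structure described in the abstract: the nontunable parameters enter every subsurrogate and are therefore optimized against all tuning-state specifications jointly, while each set of tunable parameter values influences only its own tuning state.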
Related papers
- Model Assembly Learning with Heterogeneous Layer Weight Merging [57.8462476398611]
We introduce Model Assembly Learning (MAL), a novel paradigm for model merging. MAL integrates parameters from diverse models in an open-ended model zoo to enhance the base model's capabilities.
arXiv Detail & Related papers (2025-03-27T16:21:53Z)
- Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging [111.8456671452411]
Multi-task learning (MTL) leverages a shared model to accomplish multiple tasks and facilitate knowledge transfer.
We propose a Weight-Ensembling Mixture of Experts (WEMoE) method for multi-task model merging.
We show that WEMoE and E-WEMoE outperform state-of-the-art (SOTA) model merging methods in terms of MTL performance, generalization, and robustness.
arXiv Detail & Related papers (2024-10-29T07:16:31Z)
- Bayesian Structural Model Updating with Multimodal Variational Autoencoder [2.4297252937957436]
The proposed method utilizes the surrogate unimodal encoders of a multimodal variational autoencoder (VAE).
It is particularly suitable for high-dimensional correlated simultaneous observations applicable to various dynamic analysis models.
arXiv Detail & Related papers (2024-06-07T23:12:51Z)
- EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) achieves outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance.
arXiv Detail & Related papers (2024-05-23T05:25:45Z)
- Majority Kernels: An Approach to Leverage Big Model Dynamics for Efficient Small Model Training [32.154166415680066]
Methods like distillation, compression, or quantization help leverage highly performant large models to induce smaller, similarly performant ones.
This paper explores the hypothesis that a single training run can simultaneously train a larger model for performance and derive a smaller model for deployment.
arXiv Detail & Related papers (2024-02-07T17:07:41Z)
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Ensemble Kalman Filtering Meets Gaussian Process SSM for Non-Mean-Field and Online Inference [47.460898983429374]
We introduce an ensemble Kalman filter (EnKF) into the non-mean-field (NMF) variational inference framework to approximate the posterior distribution of the latent states.
This novel marriage between EnKF and GPSSM not only eliminates the need for extensive parameterization in learning variational distributions, but also enables an interpretable, closed-form approximation of the evidence lower bound (ELBO).
We demonstrate that the resulting EnKF-aided online algorithm embodies a principled objective function by ensuring data-fitting accuracy while incorporating model regularizations to mitigate overfitting.
arXiv Detail & Related papers (2023-12-10T15:22:30Z)
- Active-Learning-Driven Surrogate Modeling for Efficient Simulation of Parametric Nonlinear Systems [0.0]
In the absence of governing equations, we need to construct the parametric reduced-order surrogate model in a non-intrusive fashion.
Our work provides a non-intrusive optimality criterion to efficiently populate the parameter snapshots.
We propose an active-learning-driven surrogate model using kernel-based shallow neural networks.
arXiv Detail & Related papers (2023-06-09T18:01:14Z)
- Vertical Layering of Quantized Neural Networks for Heterogeneous Inference [57.42762335081385]
We study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one.
We can theoretically achieve any precision network for on-demand service while only needing to train and maintain one model.
arXiv Detail & Related papers (2022-12-10T15:57:38Z)
- A Pareto-optimal compositional energy-based model for sampling and optimization of protein sequences [55.25331349436895]
Deep generative models have emerged as a popular machine learning-based approach for inverse problems in the life sciences.
These problems often require sampling new designs that satisfy multiple properties of interest in addition to learning the data distribution.
arXiv Detail & Related papers (2022-10-19T19:04:45Z)
- Variational Autoencoder based Metamodeling for Multi-Objective Topology Optimization of Electrical Machines [0.0]
This paper presents a novel method for predicting Key Performance Indicators (KPIs) of differently parameterized electrical machine topologies at the same time.
After training, via the latent space, the decoder and the multi-layer neural network function as meta-models for sampling new designs and predicting the associated topology, respectively.
arXiv Detail & Related papers (2022-01-21T19:49:54Z)
- Multi-Objective Evolutionary Design of Composite Data-Driven Models [0.0]
The implemented approach is based on a parameter-free genetic algorithm for model design called GPComp@Free.
The experimental results confirm that a multi-objective approach to model design achieves better diversity and quality in the obtained models.
arXiv Detail & Related papers (2021-03-01T20:45:24Z)