Optimizing Spectral Prediction in MXene-Based Metasurfaces Through Multi-Channel Spectral Refinement and Savitzky-Golay Smoothing
- URL: http://arxiv.org/abs/2602.08406v1
- Date: Mon, 09 Feb 2026 09:09:32 GMT
- Title: Optimizing Spectral Prediction in MXene-Based Metasurfaces Through Multi-Channel Spectral Refinement and Savitzky-Golay Smoothing
- Authors: Shujaat Khan, Waleed Iqbal Waseer
- Abstract summary: The prediction of electromagnetic spectra for MXene-based solar absorbers is a computationally intensive task, traditionally addressed using full-wave solvers. This study introduces an efficient deep learning framework incorporating transfer learning, multi-channel spectral refinement, and Savitzky-Golay smoothing. The proposed framework presents a scalable and computationally efficient alternative to conventional solvers, positioning it as a viable candidate for rapid spectral prediction in nanophotonic design.
- Score: 2.9649783577150832
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The prediction of electromagnetic spectra for MXene-based solar absorbers is a computationally intensive task, traditionally addressed using full-wave solvers. This study introduces an efficient deep learning framework incorporating transfer learning, multi-channel spectral refinement (MCSR), and Savitzky-Golay smoothing to accelerate and enhance spectral prediction accuracy. The proposed architecture leverages a pretrained MobileNetV2 model, fine-tuned to predict 102-point absorption spectra from $64\times64$ metasurface designs. Additionally, the MCSR module processes the feature map through multi-channel convolutions, enhancing feature extraction, while Savitzky-Golay smoothing mitigates high-frequency noise. Experimental evaluations demonstrate that the proposed model significantly outperforms baseline Convolutional Neural Network (CNN) and deformable CNN models, achieving an average root mean squared error (RMSE) of 0.0245, coefficient of determination $R^2$ of 0.9578, and peak signal-to-noise ratio (PSNR) of 32.98 dB. The proposed framework presents a scalable and computationally efficient alternative to conventional solvers, positioning it as a viable candidate for rapid spectral prediction in nanophotonic design workflows.
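As a rough illustration of the pipeline the abstract describes, the sketch below wires a pretrained MobileNetV2 backbone to a 102-point regression head and applies Savitzky-Golay smoothing to the raw prediction. The head layout, smoothing window, and polynomial order are our assumptions, and the paper's MCSR module is omitted; this is a minimal sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights
from scipy.signal import savgol_filter

class SpectrumRegressor(nn.Module):
    def __init__(self, n_points: int = 102):
        super().__init__()
        self.backbone = mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT)
        # Swap the ImageNet classifier for a 102-point spectral regression head.
        self.backbone.classifier = nn.Sequential(
            nn.Dropout(0.2),
            nn.Linear(self.backbone.last_channel, n_points),
            nn.Sigmoid(),  # absorption values lie in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)

model = SpectrumRegressor().eval()
# A 64x64 design pattern, replicated to 3 channels for the pretrained backbone.
design = torch.rand(1, 1, 64, 64).repeat(1, 3, 1, 1)
with torch.no_grad():
    raw = model(design).numpy()[0]
# Post-hoc Savitzky-Golay smoothing to suppress high-frequency prediction noise.
smooth = savgol_filter(raw, window_length=11, polyorder=3)
```

In practice the backbone would be fine-tuned on simulated design-spectrum pairs before the smoothing step is applied.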
Related papers
- Powering Up Zeroth-Order Training via Subspace Gradient Orthogonalization [40.95701844244596]
We show that ZO optimization can be substantially improved by unifying two complementary principles, which we instantiate in a new method, ZO-Muon, admitting a natural interpretation as a low-rank Muon in the ZO setting.
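For context, the generic primitive that ZO training methods build on is the two-point gradient estimator; a minimal version (our sketch, not the paper's ZO-Muon update) looks like this:

```python
import numpy as np

def zo_grad_estimate(f, x, mu=1e-3, rng=None):
    # Two-point finite difference along a random Gaussian direction u:
    # (f(x + mu*u) - f(x - mu*u)) / (2*mu) approximates the directional
    # derivative, and multiplying by u gives a stochastic gradient estimate.
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
```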
arXiv Detail & Related papers (2026-02-19T08:08:33Z) - Spectral Gating Networks [65.9496901693099]
We introduce Spectral Gating Networks (SGN) to bring frequency-rich expressivity to feed-forward networks. SGN augments a standard activation pathway with a compact spectral pathway and learnable gates that allow the model to start from a stable base behavior. It consistently improves accuracy-efficiency trade-offs under comparable computational budgets.
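The summary only hints at the mechanism; one speculative reading is a residual frequency-domain pathway whose contribution is scaled by a learnable gate initialized at zero, so training starts from the plain pathway. The block below is our guess at that flavor of idea, not the paper's SGN layer.

```python
import torch
import torch.nn as nn

class SpectralGate(nn.Module):
    # Speculative sketch: standard activation pathway plus a gated FFT pathway.
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        # Learnable complex filter over the rfft bins of the last dimension.
        self.freq_filter = nn.Parameter(torch.ones(dim // 2 + 1, dtype=torch.cfloat))
        self.gate = nn.Parameter(torch.zeros(()))  # zero init -> base behavior

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = torch.relu(self.linear(x))
        spec = torch.fft.rfft(x, dim=-1) * self.freq_filter
        spectral = torch.fft.irfft(spec, n=x.shape[-1], dim=-1)
        return base + self.gate * spectral
```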
arXiv Detail & Related papers (2026-02-07T20:00:49Z) - Structure-Informed Estimation for Pilot-Limited MIMO Channels via Tensor Decomposition [51.56484100374058]
This paper formulates pilot-limited channel estimation as low-rank tensor completion from sparse observations. Experiments on synthetic channels demonstrate 10-20 dB normalized mean-square error (NMSE) improvement over least-squares (LS). Evaluations on DeepMIMO ray-tracing channels show 24-44% additional NMSE reduction over pure tensor-based methods.
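Since the gains above are reported in NMSE, here is the standard definition in dB that such comparisons typically use (our formulation, not code from the paper):

```python
import numpy as np

def nmse_db(h_est: np.ndarray, h_true: np.ndarray) -> float:
    # NMSE = ||H_est - H||_F^2 / ||H||_F^2, reported in dB.
    err = np.linalg.norm(h_est - h_true) ** 2
    ref = np.linalg.norm(h_true) ** 2
    return 10.0 * np.log10(err / ref)
```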
arXiv Detail & Related papers (2026-02-03T23:38:05Z) - Compressed BC-LISTA via Low-Rank Convolutional Decomposition [47.15001096567547]
We study Sparse Signal Recovery (SSR) methods for multichannel imaging with compressed forward and backward operators. We propose a Compressed Block-Convolutional (CBC) measurement model based on a low-rank Convolutional Neural Network (CNN) decomposition.
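As a generic illustration of low-rank convolutional decomposition (not the paper's CBC model), a kxk convolution can be approximated by a channel bottleneck of rank r:

```python
import torch.nn as nn

def low_rank_conv(in_ch: int, out_ch: int, k: int, rank: int) -> nn.Sequential:
    # Factor a kxk conv into 1x1 (compress) -> kxk on rank channels -> 1x1
    # (expand); parameters drop from in_ch*out_ch*k*k to
    # rank*(in_ch + out_ch) + rank*rank*k*k.
    return nn.Sequential(
        nn.Conv2d(in_ch, rank, kernel_size=1, bias=False),
        nn.Conv2d(rank, rank, kernel_size=k, padding=k // 2, bias=False),
        nn.Conv2d(rank, out_ch, kernel_size=1, bias=False),
    )
```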
arXiv Detail & Related papers (2026-01-30T16:33:51Z) - A Surrogate Model for the Forward Design of Multi-layered Metasurface-based Radar Absorbing Structures [3.328784252410173]
We propose a surrogate model that significantly accelerates the prediction of electromagnetic (EM) responses of multi-layered metasurface-based RAS. The proposed model achieved a cosine similarity of 99.9% and a mean square error of 0.001 within 1000 epochs of training.
arXiv Detail & Related papers (2025-05-14T09:54:00Z) - SpectrumFM: A Foundation Model for Intelligent Spectrum Management [99.08036558911242]
Existing intelligent spectrum management methods, typically based on small-scale models, suffer from notable limitations in recognition accuracy, convergence speed, and generalization. This paper proposes a novel spectrum foundation model, termed SpectrumFM, establishing a new paradigm for spectrum management. Experiments demonstrate that SpectrumFM achieves superior performance in terms of accuracy, robustness, adaptability, few-shot learning efficiency, and convergence speed.
arXiv Detail & Related papers (2025-05-02T04:06:39Z) - Gradient Normalization Provably Benefits Nonconvex SGD under Heavy-Tailed Noise [60.92029979853314]
We investigate the roles of gradient normalization and clipping in ensuring the convergence of Stochastic Gradient Descent (SGD) under heavy-tailed noise.
Our work provides the first theoretical evidence demonstrating the benefits of gradient normalization in SGD under heavy-tailed noise.
We introduce an accelerated SGD variant incorporating gradient normalization and clipping, further enhancing convergence rates under heavy-tailed noise.
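A minimal sketch of the general idea (our construction, not the paper's algorithm): normalizing the global gradient gives every step unit length, which tames heavy-tailed gradient spikes; clipping would instead cap the norm at a threshold.

```python
import torch

@torch.no_grad()
def normalized_sgd_step(params, lr=1e-2, eps=1e-8):
    # Compute the global gradient norm, then step along g / ||g||: the step
    # length is lr regardless of how large a heavy-tailed gradient sample is.
    total = torch.sqrt(sum((p.grad ** 2).sum() for p in params if p.grad is not None))
    scale = lr / (float(total) + eps)
    for p in params:
        if p.grad is not None:
            p.add_(p.grad, alpha=-scale)
```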
arXiv Detail & Related papers (2024-10-21T22:40:42Z) - A Kaczmarz-inspired approach to accelerate the optimization of neural network wavefunctions [0.7438129207086058]
We propose the Subsampled Projected Gradient-Increment Natural Descent (SPRING) method to reduce the bottleneck in optimizing neural network wavefunctions.
SPRING combines ideas from the recently introduced minimum-step reconfiguration (MinSR) and the classical randomized Kaczmarz method for solving linear least-squares problems.
We demonstrate that SPRING outperforms both MinSR and the popular Kronecker-Factored Approximate Curvature method (KFAC) across a number of small atoms and molecules.
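For reference, the classical randomized Kaczmarz iteration that SPRING draws on projects the current iterate onto one randomly chosen equation per step; a compact version (our sketch) follows.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iters=1000, seed=0):
    # Each step projects x onto the hyperplane of one equation a_i . x = b_i,
    # with rows sampled proportionally to ||a_i||^2 (Strohmer-Vershynin).
    rng = np.random.default_rng(seed)
    row_norms = np.sum(A ** 2, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        i = rng.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```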
arXiv Detail & Related papers (2024-01-18T18:23:10Z) - Machine learning for phase-resolved reconstruction of nonlinear ocean wave surface elevations from sparse remote sensing data [37.69303106863453]
We propose a novel approach for phase-resolved wave surface reconstruction using neural networks.
Our approach utilizes synthetic yet highly realistic training data on uniform one-dimensional grids.
arXiv Detail & Related papers (2023-05-18T12:30:26Z) - On the optimization and pruning for Bayesian deep learning [1.0152838128195467]
We propose a new adaptive variational Bayesian algorithm to train neural networks on the weight space.
The EM-MCMC algorithm allows us to perform optimization and model pruning in one shot.
Our dense model can reach state-of-the-art performance and our sparse model performs very well compared to previously proposed pruning schemes.
arXiv Detail & Related papers (2022-10-24T05:18:08Z) - SpecGrad: Diffusion Probabilistic Model based Neural Vocoder with Adaptive Noise Spectral Shaping [51.698273019061645]
SpecGrad adapts the diffusion noise so that its time-varying spectral envelope becomes close to the conditioning log-mel spectrogram.
It is processed in the time-frequency domain to keep the computational cost almost the same as the conventional DDPM-based neural vocoders.
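A hedged sketch of time-frequency noise shaping in this spirit (our illustration; SpecGrad's actual filter design differs): filter white noise in the STFT domain so its spectral envelope tracks a target.

```python
import torch

def shape_noise(noise: torch.Tensor, envelope: torch.Tensor,
                n_fft: int = 1024, hop: int = 256) -> torch.Tensor:
    # STFT -> multiply each (freq, frame) bin by the target envelope -> iSTFT.
    window = torch.hann_window(n_fft)
    spec = torch.stft(noise, n_fft, hop_length=hop, window=window,
                      return_complex=True)
    spec = spec * envelope  # envelope: (n_fft // 2 + 1, n_frames), real-valued
    return torch.istft(spec, n_fft, hop_length=hop, window=window,
                       length=noise.shape[-1])
```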
arXiv Detail & Related papers (2022-03-31T02:08:27Z) - Rayleigh-Gauss-Newton optimization with enhanced sampling for variational Monte Carlo [0.0]
We analyze optimization and sampling methods used in Variational Monte Carlo.
We introduce alterations to improve their performance.
In particular, we demonstrate that RGN can be made robust to energy spikes.
arXiv Detail & Related papers (2021-06-19T19:05:52Z) - NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
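For context, the vanilla heuristic that such a learned module would replace draws fine samples in proportion to the coarse network's per-bin weights; a compact version (our sketch):

```python
import torch

def importance_sample(bin_edges: torch.Tensor, weights: torch.Tensor,
                      n_fine: int) -> torch.Tensor:
    # bin_edges: (n_bins + 1,) depths along the ray; weights: (n_bins,).
    pdf = weights / (weights.sum() + 1e-8)
    idx = torch.multinomial(pdf, n_fine, replacement=True)  # pick bins by weight
    lo, hi = bin_edges[idx], bin_edges[idx + 1]
    return lo + (hi - lo) * torch.rand(n_fine)  # jitter uniformly inside each bin
```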
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.