Quantum Fourier Transform Based Kernel for Solar Irradiance Forecasting
- URL: http://arxiv.org/abs/2511.17698v1
- Date: Fri, 21 Nov 2025 18:36:25 GMT
- Title: Quantum Fourier Transform Based Kernel for Solar Irradiance Forecasting
- Authors: Nawfel Mechiche-Alami, Eduardo Rodriguez, Jose M. Cardemil, Enrique Lopez Droguett
- Abstract summary: This study proposes a Quantum Fourier Transform (QFT)-enhanced quantum kernel for short-term time-series forecasting. Each signal is windowed, amplitude-encoded, transformed by a QFT, then passed through a protective rotation layer to avoid the QFT/QFT adjoint cancellation. Exogenous predictors are incorporated by convexly fusing feature-specific kernels.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study proposes a Quantum Fourier Transform (QFT)-enhanced quantum kernel for short-term time-series forecasting. Each signal is windowed, amplitude-encoded, transformed by a QFT, then passed through a protective rotation layer to avoid the QFT/QFT adjoint cancellation; the resulting kernel is used in kernel ridge regression (KRR). Exogenous predictors are incorporated by convexly fusing feature-specific kernels. On multi-station solar irradiance data across Köppen climate classes, the proposed kernel consistently improves median R2 and nRMSE over reference classical RBF and polynomial kernels, while also reducing bias (nMBE); complementary MAE/ERMAX analyses indicate tighter average errors with remaining headroom under sharp transients. For both quantum and classical models, the only tuned quantities are the feature-mixing weights and the KRR ridge alpha; classical hyperparameters (gamma, r, d) are fixed, with the same validation set size for all models. Experiments are conducted on a noiseless simulator (5 qubits; window length L=32). Limitations and ablations are discussed, and paths toward NISQ execution are outlined.
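As a rough illustration of the pipeline the abstract describes (windowing, amplitude encoding, a QFT, a phase-rotation layer, a fidelity kernel, then KRR), the following sketch uses a plain NumPy statevector simulation. The rotation angle, window length, toy signal, and ridge strength are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def qft_matrix(n_amp):
    # Unitary DFT matrix acting on the amplitude-encoded statevector.
    k = np.arange(n_amp)
    return np.exp(2j * np.pi * np.outer(k, k) / n_amp) / np.sqrt(n_amp)

def encode(window):
    # Amplitude encoding: normalize the window to a unit statevector.
    v = np.asarray(window, dtype=complex)
    return v / np.linalg.norm(v)

def feature_state(window, theta=0.3):
    # QFT followed by a diagonal phase-rotation layer; without it, the
    # QFT and its adjoint would cancel in the fidelity and trivialize
    # the kernel (the "protective rotation" role described above).
    psi = qft_matrix(len(window)) @ encode(window)
    phases = np.exp(1j * theta * np.arange(len(psi)))
    return phases * psi

def fidelity_kernel(states_a, states_b):
    # Fidelity kernel K_ij = |<phi_i|phi_j>|^2 between feature states.
    return np.abs(states_a.conj() @ states_b.T) ** 2

def krr_fit_predict(K_train, y_train, K_test_train, alpha=1e-3):
    # Kernel ridge regression: solve (K + alpha I) c = y, predict K_* c.
    c = np.linalg.solve(K_train + alpha * np.eye(len(y_train)), y_train)
    return K_test_train @ c

# Toy usage: forecast the next value of a shifted sine from length-8 windows
# (8 amplitudes = 3 qubits; the paper uses L=32 on 5 qubits).
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 25) + 2.0  # shifted so every window is nonzero
L = 8
X = np.array([signal[i:i + L] for i in range(len(signal) - L)])
y = signal[L:]
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]
states_tr = np.array([feature_state(x) for x in X_tr])
states_te = np.array([feature_state(x) for x in X_te])
K_tr = fidelity_kernel(states_tr, states_tr)
K_te_tr = fidelity_kernel(states_te, states_tr)
y_hat = krr_fit_predict(K_tr, y_tr, K_te_tr)
print(round(float(np.mean(np.abs(y_hat - y_te))), 3))
```

Exogenous predictors would enter this sketch as additional feature-specific kernels fused convexly, i.e. K = sum_f w_f K_f with w_f >= 0 and sum_f w_f = 1, with the weights w_f tuned alongside the ridge alpha as the abstract states.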
Related papers
- FUTON: Fourier Tensor Network for Implicit Neural Representations [56.48739018255443]
Implicit neural representations (INRs) have emerged as powerful tools for encoding signals, yet dominant designs often suffer from slow convergence, overfitting to noise, and poor extrapolation. We introduce FUTON, which models signals as generalized Fourier series whose coefficients are parameterized by a low-rank tensor decomposition.
arXiv Detail & Related papers (2026-02-13T19:31:44Z) - Kernel Learning for Regression via Quantum Annealing Based Spectral Sampling [0.7734726150561088]
We propose a QA-in-the-loop kernel learning framework that integrates quantum annealing (QA) as more than a mere substitute for Markov-chain Monte Carlo sampling. We construct a data-adaptive kernel and perform Nadaraya--Watson (NW) regression. Experiments on multiple benchmark regression datasets demonstrate a decrease in training loss, accompanied by structural changes in the kernel matrix.
arXiv Detail & Related papers (2026-01-13T16:50:07Z) - Continual Quantum Architecture Search with Tensor-Train Encoding: Theory and Applications to Signal Processing [68.35481158940401]
CL-QAS is a continual quantum architecture search framework. It mitigates the challenges of costly amplitude encoding and forgetting in variational quantum circuits. It achieves controllable robustness and expressivity, sample-efficient generalization, and smooth convergence without barren plateaus.
arXiv Detail & Related papers (2026-01-10T02:36:03Z) - Kernelized Decoded Quantum Interferometry [0.0]
We introduce Kernelized Decoded Quantum Interferometry (k-DQI), a unified framework that integrates spectral engineering directly into the quantum circuit architecture. We prove a Monotonic Improvement Theorem, which establishes that maximizing the proposed kernel metric guarantees higher decoding success rates under local depolarizing noise.
arXiv Detail & Related papers (2025-11-25T07:35:03Z) - Spectral Bias Mitigation via xLSTM-PINN: Memory-Gated Representation Refinement for Physics-Informed Learning [6.546212906401042]
We introduce a representation-level spectral remodeling xLSTM-PINN to curb spectral bias and strengthen extrapolation. Across four benchmarks, we integrate gated cross-scale memory, a staged frequency curriculum, and adaptive residual reweighting. Compared with the baseline PINN, we reduce MSE, RMSE, MAE, and MaxAE across all four benchmarks and deliver cleaner boundary transitions.
arXiv Detail & Related papers (2025-11-16T08:55:27Z) - Universality and kernel-adaptive training for classically trained, quantum-deployed generative models [7.192684088403013]
The instantaneous quantum polynomial-time (IQP) circuit Born machine (QCBM) has been proposed as a promising quantum generative model over bitstrings. Recent works have shown that the training of IQP-QCBM is classically tractable w.r.t. the so-called Gaussian kernel maximum mean discrepancy (MMD) loss function. We show that in the kernel-adaptive method, the convergence of the MMD value implies weak convergence in distribution of the generator.
arXiv Detail & Related papers (2025-10-09T17:17:34Z) - The GINN framework: a stochastic QED correspondence for stability and chaos in deep neural networks [0.0]
We develop a Euclidean field-theoretic approach that maps deep neural networks (DNNs) to quantum electrodynamics (QED). Neural activations and weights are represented by fermionic matter and gauge fields. We validate the theoretical predictions through numerical simulations of standard multilayer perceptrons.
arXiv Detail & Related papers (2025-08-26T11:41:11Z) - Kernel-based dequantization of variational QML without Random Fourier Features [0.3277163122167433]
Recent proposals toward dequantizing variational QML models for regression problems include approaches based on kernel methods with carefully chosen kernel functions. We show that for a wide range of instances, this approach can be simplified. Our results enhance the toolkit for kernel-based dequantization of variational QML.
arXiv Detail & Related papers (2025-03-31T10:26:16Z) - KPZ scaling from the Krylov space [83.88591755871734]
Recently, a superdiffusion exhibiting the Kardar-Parisi-Zhang scaling in late-time correlators and autocorrelators has been reported.
Inspired by these results, we explore the KPZ scaling in correlation functions using their realization in the Krylov operator basis.
arXiv Detail & Related papers (2024-06-04T20:57:59Z) - Variance-Reducing Couplings for Random Features [57.73648780299374]
Random features (RFs) are a popular technique to scale up kernel methods in machine learning.
We find couplings to improve RFs defined on both Euclidean and discrete input spaces.
We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm.
arXiv Detail & Related papers (2024-05-26T12:25:09Z) - Simplex Random Features [53.97976744884616]
We present Simplex Random Features (SimRFs), a new random feature (RF) mechanism for unbiased approximation of the softmax and Gaussian kernels.
We prove that SimRFs provide the smallest possible mean square error (MSE) on unbiased estimates of these kernels.
We show consistent gains provided by SimRFs in settings including pointwise kernel estimation, nonparametric classification, and scalable Transformers.
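For context on what SimRFs improve upon, here is the vanilla i.i.d. random Fourier feature (RFF) baseline for approximating the Gaussian kernel; SimRFs replace the independent Gaussian draws below with geometrically coupled directions to reduce estimator variance. The bandwidth and feature count are illustrative.

```python
import numpy as np

def rff_features(X, n_feat=500, gamma=0.5, seed=0):
    # Vanilla random Fourier features for the Gaussian kernel
    # k(x, y) = exp(-gamma * ||x - y||^2): draw frequencies from the
    # kernel's spectral density N(0, 2*gamma*I) and random phase shifts.
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], n_feat))
    b = rng.uniform(0, 2 * np.pi, size=n_feat)
    return np.sqrt(2.0 / n_feat) * np.cos(X @ W + b)

# Toy usage: compare the feature-map approximation against the exact kernel.
X = np.random.default_rng(1).normal(size=(100, 4))
Z = rff_features(X)
K_exact = np.exp(-0.5 * ((X[:, None] - X[None, :]) ** 2).sum(-1))
K_approx = Z @ Z.T
print(round(float(np.abs(K_exact - K_approx).mean()), 3))
```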
arXiv Detail & Related papers (2023-01-31T18:53:39Z) - A kernel-based quantum random forest for improved classification [0.0]
Quantum Machine Learning (QML) aimed at enhancing traditional classical learning methods has seen various limitations to its realisation.
We extend the linear quantum support vector machine (QSVM) with a kernel function computed through quantum kernel estimation (QKE).
To limit overfitting, we further extend the model to employ a low-rank Nyström approximation to the kernel matrix.
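The low-rank Nyström approximation mentioned above can be sketched in a few lines: approximate the full kernel matrix from a small set of landmark columns. The RBF kernel, landmark count, and data here are illustrative stand-ins, not the paper's quantum kernel.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, m=60, seed=0):
    # Rank-m Nystrom approximation K ~= C W^+ C^T, where C holds the
    # kernel columns for m randomly chosen landmark points and W is the
    # m x m kernel among the landmarks.
    idx = np.random.default_rng(seed).choice(len(X), size=m, replace=False)
    C = rbf(X, X[idx])            # n x m cross-kernel
    W = rbf(X[idx], X[idx])       # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T

# Toy usage: measure the relative error of the low-rank reconstruction.
X = np.random.default_rng(1).normal(size=(200, 3))
K = rbf(X, X)
K_hat = nystrom(X)
print(round(float(np.linalg.norm(K - K_hat) / np.linalg.norm(K)), 3))
```

Only the n x m and m x m blocks are ever formed, so downstream solvers (e.g. kernel ridge or SVM training) scale with m rather than n.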
arXiv Detail & Related papers (2022-10-05T15:57:31Z) - Structural aspects of FRG in quantum tunnelling computations [68.8204255655161]
We probe both the unidimensional quartic harmonic oscillator and the double well potential.
Two partial differential equations, for the potential $V_k(\varphi)$ and the wave-function renormalization $Z_k(\varphi)$, are studied.
arXiv Detail & Related papers (2022-06-14T15:23:25Z) - Hybrid Random Features [60.116392415715275]
We propose a new class of random feature methods for linearizing softmax and Gaussian kernels called hybrid random features (HRFs).
HRFs automatically adapt the quality of kernel estimation to provide most accurate approximation in the defined regions of interest.
arXiv Detail & Related papers (2021-10-08T20:22:59Z) - Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.