Seeing the Invisible: Machine learning-Based QPI Kernel Extraction via Latent Alignment
- URL: http://arxiv.org/abs/2506.05325v1
- Date: Thu, 05 Jun 2025 17:58:09 GMT
- Title: Seeing the Invisible: Machine learning-Based QPI Kernel Extraction via Latent Alignment
- Authors: Yingshuai Ji, Haomin Zhuang, Matthew Toole, James McKenzie, Xiaolong Liu, Xiangliang Zhang
- Abstract summary: Quasiparticle interference (QPI) imaging is a powerful tool for probing electronic structures in quantum materials. Extracting the single-scatterer QPI pattern from a multi-scatterer image remains a fundamentally ill-posed inverse problem. We propose the first AI-based framework for QPI kernel extraction.
- Score: 20.138206974466772
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quasiparticle interference (QPI) imaging is a powerful tool for probing electronic structures in quantum materials, but extracting the single-scatterer QPI pattern (i.e., the kernel) from a multi-scatterer image remains a fundamentally ill-posed inverse problem. In this work, we propose the first AI-based framework for QPI kernel extraction. We introduce a two-step learning strategy that decouples kernel representation learning from observation-to-kernel inference. In the first step, we train a variational autoencoder to learn a compact latent space of scattering kernels. In the second step, we align the latent representations of QPI observations with those of the pre-learned kernels using a dedicated encoder. This design enables the model to infer kernels robustly even under complex, entangled scattering conditions. We construct a diverse and physically realistic QPI dataset comprising 100 unique kernels and evaluate our method against a direct one-step baseline. Experimental results demonstrate that our approach achieves significantly higher extraction accuracy and improved generalization to unseen kernels.
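A minimal sketch of the two-step strategy described in the abstract, assuming a PyTorch implementation; the module sizes, names, and the MSE alignment loss are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class KernelVAE(nn.Module):
    """Step 1: learn a compact latent space of single-scatterer kernels."""
    def __init__(self, dim=64 * 64, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent)
        self.logvar = nn.Linear(256, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, k):
        h = self.enc(k)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

class ObservationEncoder(nn.Module):
    """Step 2: map a multi-scatterer QPI observation into the kernel latent space."""
    def __init__(self, dim=64 * 64, latent=32):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent))

    def forward(self, obs):
        return self.net(obs)

# Step-2 training sketch: align observation latents with frozen kernel latents.
vae, obs_enc = KernelVAE(), ObservationEncoder()
opt = torch.optim.Adam(obs_enc.parameters(), lr=1e-3)
obs = torch.randn(8, 1, 64, 64)     # dummy multi-scatterer observations
kernel = torch.randn(8, 1, 64, 64)  # their ground-truth single-scatterer kernels
with torch.no_grad():               # the VAE is pre-trained and kept frozen
    target_z = vae.mu(vae.enc(kernel))
loss = nn.functional.mse_loss(obs_enc(obs), target_z)
loss.backward()
opt.step()
```

At inference time, an observation would be encoded by the dedicated encoder and decoded through the frozen VAE decoder to recover the kernel.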
Related papers
- Scalable Gaussian Processes with Low-Rank Deep Kernel Decomposition [7.532273334759435]
Kernels are key to encoding prior beliefs and data structures in Gaussian process (GP) models. Deep kernel learning enhances kernel flexibility by feeding inputs through a neural network before applying a standard parametric form. We introduce a fully data-driven, scalable deep kernel representation in which a neural network directly represents a low-rank kernel.
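As a rough illustration of the general idea (not the paper's exact parameterization), a network phi with a small output dimension r yields a kernel k(x, x') = phi(x) . phi(x') whose Gram matrix has rank at most r:

```python
import torch
import torch.nn as nn

class LowRankDeepKernel(nn.Module):
    def __init__(self, in_dim=4, rank=8):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(),
                                 nn.Linear(64, rank))  # feature map phi: R^d -> R^r

    def forward(self, x1, x2):
        return self.phi(x1) @ self.phi(x2).T           # k(x, x') = phi(x) . phi(x')

k = LowRankDeepKernel()
X = torch.randn(100, 4)
K = k(X, X)                                  # (100, 100) Gram matrix, rank <= 8
print(torch.linalg.matrix_rank(K.detach()))
```

The low rank is what buys scalability: solves against the Gram matrix plus noise cost O(n r^2) via the Woodbury identity rather than O(n^3).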
arXiv Detail & Related papers (2025-05-24T05:42:11Z)
- Predicting Open-Hole Laminates Failure Using Support Vector Machines With Classical and Quantum Kernels [2.0039767863372506]
We show how to train surrogate models to learn the ultimate failure envelope of an open-hole composite plate under in-plane loading.
Thanks to kernel-target alignment optimization, we tune the free parameters of all kernels to best separate safe and failure-inducing loading states.
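For reference, kernel-target alignment can be sketched as follows; the RBF kernel here is a hedged stand-in for the classical and quantum kernels the paper actually tunes, and the data are toy placeholders:

```python
import numpy as np

def rbf_kernel(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def alignment(K, y):
    Y = np.outer(y, y)                           # "ideal" kernel induced by labels
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

X = np.random.randn(50, 2)                       # toy loading states
y = np.sign(X[:, 0])                             # toy safe/failure labels in {-1, +1}
gammas = (0.01, 0.1, 1.0, 10.0)
best_gamma = max(gammas, key=lambda g: alignment(rbf_kernel(X, g), y))
```

Higher alignment means the Gram matrix better matches the ideal label kernel y y^T, i.e., it separates safe from failure-inducing states more cleanly.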
arXiv Detail & Related papers (2024-05-05T11:48:50Z)
- Meta-Learning Hypothesis Spaces for Sequential Decision-making [79.73213540203389]
We propose to meta-learn a kernel from offline data (Meta-KeL).
Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets.
We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task.
arXiv Detail & Related papers (2022-02-01T17:46:51Z)
- Learning with convolution and pooling operations in kernel methods [8.528384027684192]
Recent empirical work has shown that hierarchical convolutional kernels improve the performance of kernel methods in image classification tasks.
We study the precise interplay between approximation and generalization in convolutional architectures.
Our results quantify how choosing an architecture adapted to the target function leads to a large improvement in the sample complexity.
arXiv Detail & Related papers (2021-11-16T09:00:44Z)
- Kernel Identification Through Transformers [54.3795894579111]
Kernel selection plays a central role in determining the performance of Gaussian Process (GP) models.
This work addresses the challenge of constructing custom kernel functions for high-dimensional GP regression models.
We introduce a novel approach named KITT: Kernel Identification Through Transformers.
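A hedged sketch of the kernel-identification idea: treat kernel selection as classification, where each (x, y) observation of a function is one token and a transformer predicts the generating kernel family. The architecture below is an illustrative assumption, not the KITT model:

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
enc = nn.TransformerEncoder(layer, num_layers=2)
embed = nn.Linear(2, 32)   # each (x, y) observation becomes one token
head = nn.Linear(32, 3)    # e.g. 3 candidate families: RBF / Matern / periodic

xy = torch.randn(8, 50, 2)                 # 8 functions, 50 observations each
logits = head(enc(embed(xy)).mean(dim=1))  # pool over tokens, classify the family
```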
arXiv Detail & Related papers (2021-06-15T14:32:38Z)
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction for the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of baseline feature map constructions with comparable error bounds, both in theory and in practice.
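To ground the terminology, here is a standard (non-compact) random-feature approximation to the NTK of a one-hidden-layer ReLU network with random sign output weights; the paper's contribution is a construction with far smaller feature dimension, which is not reproduced here:

```python
import numpy as np

def ntk_features(X, W):
    """Random features whose inner products estimate the two-layer ReLU NTK."""
    m = W.shape[0]
    pre = X @ W.T                                 # (n, m) pre-activations w_i . x
    top = np.maximum(pre, 0.0) / np.sqrt(m)       # gradients w.r.t. output weights
    gate = (pre > 0).astype(X.dtype)              # ReLU derivative 1[w_i . x > 0]
    bottom = (gate[:, :, None] * X[:, None, :]).reshape(len(X), -1) / np.sqrt(m)
    return np.concatenate([top, bottom], axis=1)  # feature dim m + m*d (large!)

d, m = 5, 2048
W = np.random.randn(m, d)                         # random first-layer weights
X = np.random.randn(10, d)
Phi = ntk_features(X, W)
K_approx = Phi @ Phi.T                            # Monte Carlo estimate of NTK(X, X)
```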
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
- Flow-based Kernel Prior with Application to Blind Super-Resolution [143.21527713002354]
Kernel estimation is generally one of the key problems for blind image super-resolution (SR).
This paper proposes a normalizing flow-based kernel prior (FKP) for kernel modeling.
Experiments on synthetic and real-world images demonstrate that the proposed FKP can significantly improve the kernel estimation accuracy.
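A conceptual sketch of the flow-prior idea, with a single toy affine-coupling layer standing in for a pre-trained kernel flow; the names, the 4x4 kernel size, and the softmax normalization are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One invertible block: rescales half of z conditioned on the other half."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, 32), nn.ReLU(),
                                 nn.Linear(32, dim))  # outputs scale and shift

    def forward(self, z):                        # latent -> kernel space
        a, b = z.chunk(2, dim=-1)
        s, t = self.net(a).chunk(2, dim=-1)
        return torch.cat([a, b * s.exp() + t], dim=-1)

flow = AffineCoupling()                          # stands in for a pre-trained flow
z = torch.zeros(1, 16, requires_grad=True)       # optimize the latent, not the kernel
opt = torch.optim.Adam([z], lr=1e-2)
kernel = flow(z).flatten().softmax(0).reshape(4, 4)  # non-negative, sums to one
# ...then minimize the SR data-fidelity loss w.r.t. z, so every candidate
# kernel stays on the learned kernel manifold.
```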
arXiv Detail & Related papers (2021-03-29T22:37:06Z)
- Quantum Multiple Kernel Learning [1.9116668545881028]
Kernel methods play an important role in machine learning applications due to their conceptual simplicity and superior performance.
One approach to enhancing the expressivity of kernel machines is to combine multiple individual kernels.
We propose quantum MKL, which combines multiple quantum kernels.
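A sketch of the multiple-kernel-learning combination step, K = sum_i w_i K_i with non-negative weights; plain RBF kernels stand in here for the quantum kernels of the paper, and the alignment-based weighting rule is one common heuristic, not necessarily theirs:

```python
import numpy as np

def rbf(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

X = np.random.randn(40, 3)
y = np.sign(np.random.randn(40))                 # toy labels in {-1, +1}
bases = [rbf(X, g) for g in (0.1, 1.0, 10.0)]    # stand-ins for quantum kernels
Y = np.outer(y, y)

# Weight each base kernel by its (unnormalized) kernel-target alignment,
# then renormalize the weights to sum to one.
w = np.array([(K * Y).sum() / np.linalg.norm(K) for K in bases])
w = np.clip(w, 0.0, None)
w /= w.sum()
K_combined = sum(wi * Ki for wi, Ki in zip(w, bases))
```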
arXiv Detail & Related papers (2020-11-19T07:19:41Z)
- Isolation Distributional Kernel: A New Tool for Point & Group Anomaly Detection [76.1522587605852]
Isolation Distributional Kernel (IDK) is a new way to measure the similarity between two distributions.
We demonstrate IDK's efficacy and efficiency as a new tool for kernel based anomaly detection for both point and group anomalies.
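A generic sketch of a distributional kernel via mean embeddings, the idea IDK builds on; IDK itself uses an isolation-based feature map, so the RBF kernel below is only a hedged stand-in:

```python
import numpy as np

def rbf(P, Q, gamma=1.0):
    sq = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def dist_sim(P, Q):
    # Dot product of the two kernel mean embeddings.
    return rbf(P, Q).mean()

ref = np.random.randn(200, 2)                    # reference distribution
normal_group = np.random.randn(30, 2)            # drawn from the same law
anomalous_group = np.random.randn(30, 2) + 4.0   # shifted group anomaly
print(dist_sim(normal_group, ref))               # relatively high similarity
print(dist_sim(anomalous_group, ref))            # markedly lower similarity
```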
arXiv Detail & Related papers (2020-09-24T12:25:43Z)
- Learning Deep Kernels for Non-Parametric Two-Sample Tests [50.92621794426821]
We propose a class of kernel-based two-sample tests, which aim to determine whether two sets of samples are drawn from the same distribution.
Our tests are constructed from kernels parameterized by deep neural nets, trained to maximize test power.
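A minimal sketch of a deep-kernel two-sample statistic: an RBF kernel on learned features, a simplified form of the paper's deep kernel, plugged into the biased MMD^2 estimate. The feature net f is untrained here; the paper's training of f to maximize test power is omitted:

```python
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 16))  # feature net

def mmd2(X, Y, gamma=1.0):
    """Biased MMD^2 estimate under k(x, y) = exp(-gamma * ||f(x) - f(y)||^2)."""
    Z = torch.cat([f(X), f(Y)])
    K = torch.exp(-gamma * torch.cdist(Z, Z) ** 2)
    n = len(X)
    kxx, kyy, kxy = K[:n, :n], K[n:, n:], K[:n, n:]
    return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()

X = torch.randn(100, 10)
Y = torch.randn(100, 10) + 0.5                  # the two distributions differ
stat = mmd2(X, Y)  # compare against a permutation null to obtain a p-value
```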
arXiv Detail & Related papers (2020-02-21T03:54:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.