Benchmark of the Full and Reduced Effective Resistance Kernel for Molecular Classification
- URL: http://arxiv.org/abs/2501.19352v1
- Date: Fri, 31 Jan 2025 17:58:54 GMT
- Title: Benchmark of the Full and Reduced Effective Resistance Kernel for Molecular Classification
- Authors: Adam Wesołowski, Karim Essafi
- Abstract summary: We study the commute time kernel method via the effective resistance framework, analyzing the quantum complexity of the originally classical approach.
Our investigation highlights a notable quantum speedup in calculating the kernel, which offers a quadratic improvement in time complexity over classical approaches.
Benchmarking our method on several chemistry-based datasets shows that, while not always the most accurate, it excels in time efficiency.
- Abstract: We present a comprehensive study of the commute time kernel method via the effective resistance framework, analyzing the quantum complexity of the originally classical approach. Our study reveals that while there is a trade-off between accuracy and computational complexity, significant improvements can be achieved in runtime efficiency without substantially compromising precision. Our investigation highlights a notable quantum speedup in calculating the kernel, which offers a quadratic improvement in time complexity over classical approaches in certain instances. In addition, we introduce methodical improvements over the original work on the commute time kernel and provide empirical evidence suggesting that the number of kernel queries can be reduced without significant impact on result accuracy. Benchmarking our method on several chemistry-based datasets ($\tt{AIDS}$, $\tt{NCI1}$, $\tt{PTC-MR}$, $\tt{MUTAG}$, $\tt{PROTEINS}$), data points previously unexplored in the existing literature, shows that while not always the most accurate, it excels in time efficiency. This makes it a compelling alternative for applications where computational speed is crucial. Our results highlight the balance between accuracy, computational complexity, and the speedup offered by quantum computing, promoting further research into efficient algorithms for kernel methods and their applications to chemistry-based datasets.
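The commute time kernel described in the abstract has a compact classical construction: it is the Moore-Penrose pseudoinverse of the graph Laplacian, from which effective resistances and commute times follow directly. The sketch below (NumPy; an illustration of the standard construction, not the authors' implementation) shows the classical computation whose cubic cost motivates the quantum speedup:

```python
import numpy as np

def commute_time_kernel(A):
    """Commute-time kernel of a graph given its adjacency matrix A.

    The kernel is the Moore-Penrose pseudoinverse L^+ of the graph
    Laplacian L = D - A. Computing the pseudoinverse of a dense
    matrix classically costs O(n^3).
    """
    L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
    return np.linalg.pinv(L)            # kernel matrix L^+

def effective_resistance(K, i, j):
    """Effective resistance between nodes i and j from the kernel K."""
    return K[i, i] + K[j, j] - 2.0 * K[i, j]

def commute_time(A, i, j):
    """Expected commute time c(i, j) = vol(G) * r(i, j),
    where vol(G) is the sum of all vertex degrees."""
    K = commute_time_kernel(A)
    return A.sum() * effective_resistance(K, i, j)
```

Sanity checks match the electrical-network picture: a single edge has effective resistance 1 between its endpoints, and the triangle graph gives 2/3 (two parallel paths of resistance 1 and 2).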
Related papers
- Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z)
- Center-Sensitive Kernel Optimization for Efficient On-Device Incremental Learning [88.78080749909665]
Current on-device training methods focus on efficient training without considering catastrophic forgetting.
This paper proposes a simple but effective edge-friendly incremental learning framework.
Our method achieves an average accuracy boost of 38.08% with even less memory and approximate computation.
arXiv Detail & Related papers (2024-06-13T05:49:29Z)
- Towards Efficient Quantum Anomaly Detection: One-Class SVMs using Variable Subsampling and Randomized Measurements [4.180897432770239]
Quantum computing allows significant advancements in kernel calculation and model precision.
We present two distinct approaches: utilizing randomized measurements to evaluate the quantum kernel and implementing the variable subsampling ensemble method.
Experimental results demonstrate a substantial reduction in training and inference times by up to 95% and 25% respectively.
Although unstable, the average precision of randomized measurements discernibly surpasses that of the classical Radial Basis Function kernel.
arXiv Detail & Related papers (2023-12-14T17:42:18Z)
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive Coding Networks [65.34977803841007]
Predictive coding networks are neuroscience-inspired models with roots in both Bayesian statistics and neuroscience.
We show that simply changing the temporal scheduling of the update rule for the synaptic weights leads to an algorithm that is much more efficient and stable than the original one.
arXiv Detail & Related papers (2022-11-16T00:11:04Z)
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach leads to improved estimation of pattern latency compared to the state-of-the-art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- Automatic and effective discovery of quantum kernels [41.61572387137452]
Quantum computing can empower machine learning models by enabling kernel machines to leverage quantum kernels for representing similarity measures between data.
We present an approach to this problem, which employs optimization techniques, similar to those used in neural architecture search and AutoML.
The results obtained by testing our approach on a high-energy physics problem demonstrate that, in the best-case scenario, we can either match or improve testing accuracy with respect to the manual design approach.
arXiv Detail & Related papers (2022-09-22T16:42:14Z)
- Accelerating Real-Time Coupled Cluster Methods with Single-Precision Arithmetic and Adaptive Numerical Integration [3.469636229370366]
We show that single-precision arithmetic reduces both the storage and multiplicative costs of the real-time simulation by approximately a factor of two.
Additional speedups of up to a factor of 14 in test simulations of water clusters are obtained via a straightforward implementation.
arXiv Detail & Related papers (2022-05-10T21:21:49Z)
- The Fast Kernel Transform [21.001203328543006]
We propose the Fast Kernel Transform (FKT), a general algorithm to compute matrix-vector multiplications for datasets in moderate dimensions with quasilinear complexity.
The FKT is easily applied to a broad class of kernels, including Gaussian, Matern, and Rational Quadratic covariance functions and physically motivated Green's functions.
We illustrate the efficacy and versatility of the FKT by providing timing and accuracy benchmarks and by applying it to scale stochastic neighbor embedding (t-SNE) and Gaussian processes to large real-world data sets.
arXiv Detail & Related papers (2021-06-08T16:15:47Z)
- Adaptive Local Kernels Formulation of Mutual Information with Application to Active Post-Seismic Building Damage Inference [1.066048003460524]
Post-earthquake regional damage assessment of buildings is an expensive task.
The information theoretic measure of mutual information is one of the most effective criteria to evaluate the effectiveness of the samples.
A local kernels strategy was proposed to reduce the computational costs, but the adaptability of the kernels to the observed labels was not considered.
In this article, an adaptive local kernels methodology is developed that allows for the conformability of the kernels to the observed output data.
arXiv Detail & Related papers (2021-05-24T18:34:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.