Quantum Rationale-Aware Graph Contrastive Learning for Jet Discrimination
- URL: http://arxiv.org/abs/2411.01642v4
- Date: Wed, 21 May 2025 19:10:37 GMT
- Title: Quantum Rationale-Aware Graph Contrastive Learning for Jet Discrimination
- Authors: Md Abrar Jahin, Md. Akmol Masud, M. F. Mridha, Nilanjan Dey, Zeyar Aung
- Abstract summary: In high-energy physics, particle jet tagging plays a pivotal role in distinguishing quark from gluon jets. Existing contrastive learning frameworks struggle to leverage rationale-aware augmentations effectively. We show that integrating a quantum rationale generator within our proposed Quantum Rationale-aware Graph Contrastive Learning framework significantly enhances jet discrimination performance.
- Score: 1.927711700724334
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In high-energy physics, particle jet tagging plays a pivotal role in distinguishing quark from gluon jets using data from collider experiments. While graph-based deep learning methods have advanced this task beyond traditional feature-engineered approaches, the complex data structure and limited labeled samples present ongoing challenges. Contrastive learning (CL) offers a way to reduce the dependence on labels, yet existing CL frameworks struggle to leverage rationale-aware augmentations effectively, often lacking supervision signals that guide the extraction of salient features and facing computational efficiency issues such as high parameter counts. In this study, we demonstrate that integrating a quantum rationale generator (QRG) within our proposed Quantum Rationale-aware Graph Contrastive Learning (QRGCL) framework significantly enhances jet discrimination performance, reducing reliance on labeled data and capturing discriminative features. Evaluated on the quark-gluon jet dataset, QRGCL achieves an AUC score of $77.53\%$ while maintaining a compact architecture of only 45 QRG parameters, outperforming classical, quantum, and hybrid GCL and GNN benchmarks. These results highlight QRGCL's potential to advance jet tagging and other complex classification tasks in high-energy physics, where computational efficiency and feature extraction limitations persist.
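The abstract above describes the pipeline only in prose; the following is a minimal, illustrative sketch (PennyLane/NumPy) of its central ingredient: a small parameterized quantum circuit that scores the salience of each jet constituent and biases which nodes are kept in the two contrastive views. The circuit layout, per-particle features, and augmentation rule are assumptions chosen for illustration, not the authors' implementation (which reports a 45-parameter QRG).

```python
import numpy as np
import pennylane as qml

n_qubits = 3  # assumed: three kinematic features per particle, e.g. (pT, eta, phi)
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def rationale_score(node_feats, weights):
    """Salience score in [-1, 1] for one particle (graph node)."""
    for i, x in enumerate(node_feats):  # angle-encode the node features
        qml.RY(x, wires=i)
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))  # trainable rationale circuit
    return qml.expval(qml.PauliZ(0))

def rationale_augment(nodes, weights, keep_frac=0.7, rng=None):
    """Build one rationale-aware view: keep nodes with probability tied to their salience."""
    rng = rng or np.random.default_rng()
    scores = np.array([float(rationale_score(x, weights)) for x in nodes])
    probs = scores - scores.min() + 1e-6  # shift to strictly positive sampling weights
    probs /= probs.sum()
    n_keep = max(1, int(keep_frac * len(nodes)))
    kept = rng.choice(len(nodes), size=n_keep, replace=False, p=probs)
    return nodes[kept]

jet = np.random.default_rng(0).normal(size=(8, 3))             # toy jet: 8 particles
weights = np.random.default_rng(1).normal(size=(2, n_qubits))  # 6 toy rationale parameters
view_a = rationale_augment(jet, weights)                       # two rationale-aware views
view_b = rationale_augment(jet, weights)
print(view_a.shape, view_b.shape)
```

In the full framework the two views would be encoded by a GNN and pulled together by a contrastive loss (e.g. NT-Xent), with the rationale parameters trained jointly; that part is omitted here for brevity.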
Related papers
- Resource-Efficient Hadamard Test Circuits for Nonlinear Dynamics on a Trapped-Ion Quantum Computer [1.2063443893298391]
We propose a low-depth implementation of a class of Hadamard test circuits.
We develop a parameterized quantum ansatz specifically tailored for variational algorithms.
Our findings demonstrate a significant reduction in single- and two-qubit gate counts.
(A minimal textbook Hadamard test sketch follows this entry for context.)
arXiv Detail & Related papers (2025-07-25T13:16:54Z)
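For context, the textbook Hadamard test that this line of work builds on can be sketched in a few lines of PennyLane. The choice of U = RZ(theta) and the prepared state are arbitrary illustrations; the paper's resource-efficient, low-depth construction is not reproduced here.

```python
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def hadamard_test(theta):
    """Estimate Re<psi|U|psi> for U = RZ(theta) via the standard Hadamard test."""
    qml.Hadamard(wires=0)             # ancilla -> |+>
    qml.RY(0.7, wires=1)              # prepare an arbitrary |psi> on the system qubit
    qml.CRZ(theta, wires=[0, 1])      # controlled-U
    qml.Hadamard(wires=0)
    return qml.expval(qml.PauliZ(0))  # <Z> on the ancilla equals Re<psi|U|psi>

# For U = RZ(theta), Re<psi|U|psi> = cos(theta/2) regardless of |psi>
print(hadamard_test(0.8), np.cos(0.4))
```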
- Quantum-Accelerated Neural Imputation with Large Language Models (LLMs) [0.0]
This paper introduces Quantum-UnIMP, a novel framework that integrates shallow quantum circuits into an LLM-based imputation architecture.
Our experiments on benchmark mixed-type datasets demonstrate that Quantum-UnIMP reduces imputation error by up to 15.2% for numerical features (RMSE) and improves classification accuracy by 8.7% for categorical features (F1-Score) compared to state-of-the-art classical and LLM-based methods.
arXiv Detail & Related papers (2025-07-11T02:00:06Z)
- Learning Efficient and Generalizable Graph Retriever for Knowledge-Graph Question Answering [75.12322966980003]
Large Language Models (LLMs) have shown strong inductive reasoning ability across various domains.
Most existing RAG pipelines rely on unstructured text, limiting interpretability and structured reasoning.
Recent studies have explored integrating knowledge graphs with LLMs for knowledge graph question answering.
We propose RAPL, a novel framework for efficient and effective graph retrieval in KGQA.
arXiv Detail & Related papers (2025-06-11T12:03:52Z)
- Quantum Graph Transformer for NLP Sentiment Classification [0.0]
We present the Quantum Graph Transformer (QGT), a hybrid graph-based architecture that integrates a quantum self-attention mechanism.
QGT consistently achieves accuracy higher than or comparable to that of existing quantum natural language processing (QNLP) models.
These results highlight the potential of graph-based QNLP techniques for advancing efficient and scalable language understanding.
arXiv Detail & Related papers (2025-06-09T16:55:41Z)
- Squeeze and Excitation: A Weighted Graph Contrastive Learning for Collaborative Filtering [1.3535213052193866]
Graph contrastive learning (GCL) aims to enhance the robustness of representation learning.
The Weighted Graph Contrastive Learning framework (WeightedGCL) addresses the irrational allocation of feature attention.
WeightedGCL achieves significant accuracy improvements compared to competitive baselines.
arXiv Detail & Related papers (2025-04-06T11:30:59Z)
- GranQ: Granular Zero-Shot Quantization with Unified Layer-Channel Awareness [1.8067835669244101]
GranQ is a novel ZSQ approach that leverages layer-channel awareness to minimize the quantization error.
GranQ achieves superior performance compared with state-of-the-art ZSQ methods that employ quantization-aware training.
arXiv Detail & Related papers (2025-03-24T04:44:21Z)
- Quantum Bayesian Networks for Machine Learning in Oil-Spill Detection [3.9554540293311864]
Quantum Machine Learning has shown promise in diverse applications such as environmental monitoring, healthcare diagnostics, and financial modeling.
One critical challenge is handling imbalanced datasets, where rare events are often misclassified due to skewed data distributions.
This paper introduces a Bayesian approach utilizing QBNs to classify satellite-derived imbalanced datasets, distinguishing "oil-spill" from "non-spill" regions.
arXiv Detail & Related papers (2024-12-24T15:44:26Z)
- Graph Structure Refinement with Energy-based Contrastive Learning [56.957793274727514]
We introduce an unsupervised method based on joint generative and discriminative training to learn graph structure and representation.
We propose an Energy-based Contrastive Learning (ECL) guided Graph Structure Refinement (GSR) framework, denoted as ECL-GSR.
ECL-GSR achieves faster training with fewer samples and less memory than the leading baseline, highlighting its simplicity and efficiency in downstream tasks.
arXiv Detail & Related papers (2024-12-20T04:05:09Z)
- Parameter-Efficient Fine-Tuning for Continual Learning: A Neural Tangent Kernel Perspective [125.00228936051657]
We introduce NTK-CL, a novel framework that eliminates task-specific parameter storage while adaptively generating task-relevant features.
By fine-tuning optimizable parameters with appropriate regularization, NTK-CL achieves state-of-the-art performance on established PEFT-CL benchmarks.
arXiv Detail & Related papers (2024-07-24T09:30:04Z)
- Jet Discrimination with Quantum Complete Graph Neural Network [1.684646794156297]
We propose the Quantum Complete Graph Neural Network (QCGNN), which is a variational quantum algorithm based on complete graphs.
We investigate the application of QCGNN to the challenging task of jet discrimination, where the jets are represented as complete graphs (a minimal construction sketch follows this entry).
arXiv Detail & Related papers (2024-03-08T02:02:23Z)
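As a concrete illustration of the input representation mentioned in this entry, a jet can be turned into a complete graph by connecting every ordered pair of its particles. The feature layout below is assumed for illustration and does not reproduce the QCGNN preprocessing.

```python
import itertools
import numpy as np

def jet_to_complete_graph(particles):
    """particles: (n, d) array of per-particle features, e.g. (pT, eta, phi).
    Returns node features and a (2, n*(n-1)) edge index for the complete graph."""
    n = len(particles)
    edges = np.array(list(itertools.permutations(range(n), 2))).T
    return particles, edges

jet = np.random.default_rng(0).normal(size=(5, 3))  # toy jet with 5 particles
nodes, edge_index = jet_to_complete_graph(jet)
print(edge_index.shape)  # (2, 20): every ordered pair of distinct particles
```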
- Overcoming Pitfalls in Graph Contrastive Learning Evaluation: Toward Comprehensive Benchmarks [60.82579717007963]
We introduce an enhanced evaluation framework designed to more accurately gauge the effectiveness, consistency, and overall capability of Graph Contrastive Learning (GCL) methods.
arXiv Detail & Related papers (2024-02-24T01:47:56Z)
- GQHAN: A Grover-inspired Quantum Hard Attention Network [53.96779043113156]
A Grover-inspired Quantum Hard Attention Mechanism (GQHAM) is proposed.
GQHAN adeptly surmounts the non-differentiability hurdle, surpassing the efficacy of existing quantum soft self-attention mechanisms.
GQHAN lays a foundation for future quantum computers to process large-scale data and promotes the development of quantum computer vision.
arXiv Detail & Related papers (2024-01-25T11:11:16Z)
- Improving Parameter Training for VQEs by Sequential Hamiltonian Assembly [4.646930308096446]
A central challenge in quantum machine learning is the design and training of parameterized quantum circuits (PQCs).
We propose Sequential Hamiltonian Assembly, which iteratively approximates the loss function using local components (a toy sketch of this sequential scheme follows this entry).
Our approach outperforms conventional parameter training by 29.99% and the empirical state of the art, Layerwise Learning, by 5.12% in mean accuracy.
arXiv Detail & Related papers (2023-12-09T11:47:32Z)
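A rough sketch of the sequential-assembly idea referenced above: optimize the parameterized circuit against growing partial sums of the Hamiltonian's local terms rather than against the full objective from the start. The toy Hamiltonian, ansatz, and schedule are assumptions for illustration, not the paper's setup.

```python
import numpy as np
import pennylane as qml
from pennylane import numpy as pnp

n_qubits = 3
dev = qml.device("default.qubit", wires=n_qubits)

# local components of a toy problem Hamiltonian H = sum_k c_k * H_k
coeffs = [1.0, 1.0, 0.5]
terms = [qml.PauliZ(0) @ qml.PauliZ(1), qml.PauliZ(1) @ qml.PauliZ(2), qml.PauliX(0)]

def make_cost(n_terms):
    """Cost function using only the first n_terms local components."""
    H = qml.Hamiltonian(coeffs[:n_terms], terms[:n_terms])

    @qml.qnode(dev)
    def cost(params):
        qml.StronglyEntanglingLayers(params, wires=range(n_qubits))
        return qml.expval(H)

    return cost

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
params = pnp.array(np.random.default_rng(0).normal(size=shape), requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.1)

# assemble the Hamiltonian sequentially: train on 1 term, then 2, then all 3
for k in range(1, len(terms) + 1):
    cost = make_cost(k)
    for _ in range(30):
        params = opt.step(cost, params)
    print(f"{k} term(s): energy = {cost(params):.4f}")
```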
- Rethinking and Simplifying Bootstrapped Graph Latents [48.76934123429186]
Graph contrastive learning (GCL) has emerged as a representative paradigm in graph self-supervised learning.
We present SGCL, a simple yet effective GCL framework that utilizes the outputs from two consecutive iterations as positive pairs.
We show that SGCL can achieve competitive performance with fewer parameters, lower time and space costs, and significant convergence speedup.
arXiv Detail & Related papers (2023-12-05T09:49:50Z)
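One hedged reading of the "outputs from two consecutive iterations as positive pairs" idea in the SGCL entry above, with a toy linear encoder standing in for a GNN; the stabilizing components and exact loss of SGCL are not reproduced.

```python
import torch
import torch.nn.functional as F

encoder = torch.nn.Linear(16, 8)                 # stand-in for a GNN encoder
opt = torch.optim.Adam(encoder.parameters(), lr=1e-2)
x = torch.randn(32, 16)                          # stand-in for node features

prev_z = None
for step in range(5):
    z = F.normalize(encoder(x), dim=-1)
    if prev_z is not None:
        # each node's previous-iteration embedding acts as its stop-gradient positive
        loss = -(z * prev_z).sum(dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    prev_z = z.detach()                          # positives for the next iteration
```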
- Quantum Data Encoding: A Comparative Analysis of Classical-to-Quantum Mapping Techniques and Their Impact on Machine Learning Accuracy [0.0]
This research explores the integration of quantum data embedding techniques into classical machine learning (ML) algorithms.
Our findings reveal that quantum data embedding contributes to improved classification accuracy and F1 scores.
arXiv Detail & Related papers (2023-11-17T08:00:08Z)
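Two common classical-to-quantum embeddings of the kind compared in the entry above, sketched with PennyLane's built-in templates; the specific mappings, models, and datasets studied in the paper are not reproduced here.

```python
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def angle_encoded(x):
    """One feature per qubit, written into single-qubit rotation angles."""
    qml.AngleEmbedding(x, wires=[0, 1], rotation="Y")
    return qml.state()

@qml.qnode(dev)
def amplitude_encoded(x):
    """2^n features packed into the amplitudes of an n-qubit state."""
    qml.AmplitudeEmbedding(x, wires=[0, 1], normalize=True)
    return qml.state()

sample = np.array([0.3, 1.2, -0.5, 0.8])
print(angle_encoded(sample[:2]))   # 2 features -> 2 qubits
print(amplitude_encoded(sample))   # 4 features -> 2 qubits
```

A downstream classifier (classical or variational) would then act on the encoded state; the paper compares how such encoding choices affect accuracy and F1 scores.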
- Coreset selection can accelerate quantum machine learning models with provable generalization [6.733416056422756]
Quantum neural networks (QNNs) and quantum kernels stand as prominent figures in the realm of quantum machine learning.
We present a unified approach: coreset selection, aimed at expediting the training of QNNs and quantum kernels.
arXiv Detail & Related papers (2023-09-19T08:59:46Z)
- Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of quantum classifiers (QCs) on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z)
- QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learn local message passing among nodes through a sequence of crossing-gate quantum operations.
To mitigate the inherent noise of modern quantum devices, we apply a sparsity constraint to sparsify the nodes' connections.
Our QuanGCN is functionally comparable to, or even better than, classical algorithms on several benchmark graph datasets.
arXiv Detail & Related papers (2022-11-09T21:43:16Z)
- Hybrid Quantum-Classical Graph Convolutional Network [7.0132255816377445]
This research provides a hybrid quantum-classical graph convolutional network (QGCNN) for learning HEP data.
The proposed framework demonstrates an advantage over classical multilayer perceptrons and convolutional neural networks in terms of the number of parameters.
In terms of testing accuracy, the QGCNN shows comparable performance to a quantum convolutional neural network on the same HEP dataset.
arXiv Detail & Related papers (2021-01-15T16:02:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.