AiDE-Q: Synthetic Labeled Datasets Can Enhance Learning Models for Quantum Property Estimation
- URL: http://arxiv.org/abs/2509.26109v1
- Date: Tue, 30 Sep 2025 11:29:14 GMT
- Title: AiDE-Q: Synthetic Labeled Datasets Can Enhance Learning Models for Quantum Property Estimation
- Authors: Xinbiao Wang, Yuxuan Du, Zihan Lou, Yang Qian, Kaining Zhang, Yong Luo, Bo Du, Dacheng Tao
- Abstract summary: AiDE-Q iteratively generates high-quality synthetic labeled datasets. We conduct extensive numerical simulations on a diverse set of quantum many-body and molecular systems.
- Score: 83.22330172077308
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum many-body problems are central to various scientific disciplines, yet their ground-state properties are intrinsically challenging to estimate. Recent advances in deep learning (DL) offer potential solutions in this field, complementing prior purely classical and quantum approaches. However, existing DL-based models typically assume access to a large-scale and noiseless labeled dataset collected by infinite sampling. This idealization raises fundamental concerns about their practical utility, especially given the limited availability of quantum hardware in the near term. To unleash the power of these DL-based models, we propose AiDE-Q (\underline{a}utomat\underline{i}c \underline{d}ata \underline{e}ngine for \underline{q}uantum property estimation), an effective framework that addresses this challenge by iteratively generating high-quality synthetic labeled datasets. Specifically, AiDE-Q utilizes a consistency-check method to assess the quality of synthetic labels and continuously improves the employed DL models with the identified high-quality synthetic dataset. To verify the effectiveness of AiDE-Q, we conduct extensive numerical simulations on a diverse set of quantum many-body and molecular systems, with up to 50 qubits. The results show that AiDE-Q enhances prediction performance for various reference learning models, with improvements of up to $14.2\%$. Moreover, we exhibit that a basic supervised learning model integrated with AiDE-Q outperforms advanced reference models, highlighting the importance of a synthetic dataset. Our work paves the way for more efficient and practical applications of DL for quantum property estimation.
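The abstract describes an iterative loop: train a model on labeled data, predict labels for unlabeled quantum states, keep only the synthetic labels that pass a consistency check, and retrain on the enlarged dataset. The sketch below illustrates that loop in miniature. It is not the paper's implementation: the linear least-squares "model", the bootstrap ensemble, and the agreement-threshold consistency check are all simplified stand-ins for the DL models and consistency-check method AiDE-Q actually uses.

```python
import numpy as np

def consistency_check(preds, tol=0.05):
    """Keep samples whose ensemble predictions agree within `tol` (std. dev.)."""
    return preds.std(axis=0) < tol

def aide_q_loop(X_lab, y_lab, X_unlab, n_rounds=3, n_models=5, tol=0.05, seed=0):
    """Toy AiDE-Q-style data engine: an ensemble of least-squares probes
    labels unlabeled inputs; only high-consistency synthetic labels are
    added to the training set before the next round."""
    rng = np.random.default_rng(seed)
    X, y = X_lab.copy(), y_lab.copy()
    for _ in range(n_rounds):
        # Fit a bootstrap ensemble on the current labeled pool.
        preds = []
        for _ in range(n_models):
            idx = rng.integers(0, len(X), len(X))
            w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
            preds.append(X_unlab @ w)
        preds = np.stack(preds)
        # Accept synthetic labels only where the ensemble agrees.
        keep = consistency_check(preds, tol)
        if not keep.any():
            break
        X = np.vstack([X, X_unlab[keep]])
        y = np.concatenate([y, preds.mean(axis=0)[keep]])
        X_unlab = X_unlab[~keep]
    # Final model trained on the augmented dataset.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w
```

The key design point carried over from the abstract is that synthetic labels are filtered, not trusted wholesale: only predictions the ensemble agrees on are promoted to training data, so low-quality labels cannot pollute later rounds.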
Related papers
- Towards Syn-to-Real IQA: A Novel Perspective on Reshaping Synthetic Data Distributions [74.00222571094437]
Blind Image Quality Assessment (BIQA) has advanced significantly through deep learning, but the scarcity of large-scale labeled datasets remains a challenge. We make a key observation that representations learned from synthetic datasets often exhibit a discrete and clustered pattern that hinders regression performance. We introduce a novel framework, SynDR-IQA, which reshapes the synthetic data distribution to enhance BIQA generalization.
arXiv Detail & Related papers (2026-01-01T06:11:16Z) - Enriching Earth Observation labeled data with Quantum Conditioned Diffusion Models [38.62950622229361]
We introduce the Quanvolutional Conditioned U-Net (QCU-Net), a hybrid quantum--classical architecture that applies quantum operations within a conditioned diffusion framework. Experiments on the EuroSAT RGB dataset demonstrate that our QCU-Net achieves superior results. This work represents the first successful adaptation of class-conditioned quantum diffusion modeling in the Earth Observation domain.
arXiv Detail & Related papers (2025-12-23T15:40:31Z) - Quantum-Aware Generative AI for Materials Discovery: A Framework for Robust Exploration Beyond DFT Biases [0.0]
We introduce a quantum-aware generative AI framework for materials discovery. We implement a robust active learning loop that quantifies and targets the divergence between low- and high-fidelity predictions. Our results demonstrate a 3-5x improvement in successfully identifying potentially stable candidates in high-divergence regions.
arXiv Detail & Related papers (2025-12-13T11:17:21Z) - Quantum-Accelerated Neural Imputation with Large Language Models (LLMs) [0.0]
This paper introduces Quantum-UnIMP, a novel framework that integrates shallow quantum circuits into an LLM-based imputation architecture. Our experiments on benchmark mixed-type datasets demonstrate that Quantum-UnIMP reduces imputation error by up to 15.2% for numerical features (RMSE) and improves classification accuracy by 8.7% for categorical features (F1-Score) compared to state-of-the-art classical and LLM-based methods.
arXiv Detail & Related papers (2025-07-11T02:00:06Z) - LAR-IQA: A Lightweight, Accurate, and Robust No-Reference Image Quality Assessment Model [6.074775040047959]
We propose a compact, lightweight NR-IQA model that achieves state-of-the-art (SOTA) performance on ECCV AIM UHD-IQA challenge validation and test datasets.
Our model features a dual-branch architecture, with each branch separately trained on synthetically and authentically distorted images.
Our evaluation considering various open-source datasets highlights the practical, high-accuracy, and robust performance of our proposed lightweight model.
arXiv Detail & Related papers (2024-08-30T07:32:19Z) - SIaM: Self-Improving Code-Assisted Mathematical Reasoning of Large Language Models [54.78329741186446]
We propose a novel paradigm that uses a code-based critic model to guide steps including question-code data construction, quality control, and complementary evaluation.
Experiments across both in-domain and out-of-domain benchmarks in English and Chinese demonstrate the effectiveness of the proposed paradigm.
arXiv Detail & Related papers (2024-08-28T06:33:03Z) - QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights improves the performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z) - Evaluating quantum generative models via imbalanced data classification benchmarks [0.0]
We analyze synthetic data generated from a hybrid quantum-classical neural network adapted from twenty different real-world data sets.
We leverage this approach to elucidate the qualities of a problem that make it more or less likely to be amenable to a hybrid quantum-classical generative model.
arXiv Detail & Related papers (2023-08-21T16:46:36Z) - ClusterQ: Semantic Feature Distribution Alignment for Data-Free Quantization [111.12063632743013]
We propose a new and effective data-free quantization method termed ClusterQ.
To obtain high inter-class separability of semantic features, we cluster and align the feature distribution statistics.
We also incorporate the intra-class variance to solve class-wise mode collapse.
arXiv Detail & Related papers (2022-04-30T06:58:56Z) - Comparing Test Sets with Item Response Theory [53.755064720563]
We evaluate 29 datasets using predictions from 18 pretrained Transformer models on individual test examples.
We find that Quoref, HellaSwag, and MC-TACO are best suited for distinguishing among state-of-the-art models.
We also observe that the span-selection task format, used for QA datasets such as QAMR and SQuAD2.0, is effective in differentiating between strong and weak models.
arXiv Detail & Related papers (2021-06-01T22:33:53Z) - Zero-shot Adversarial Quantization [11.722728148523366]
We propose a zero-shot adversarial quantization (ZAQ) framework, facilitating effective discrepancy estimation and knowledge transfer.
This is achieved by a novel two-level discrepancy modeling to drive a generator to synthesize informative and diverse data examples.
We conduct extensive experiments on three fundamental vision tasks, demonstrating the superiority of ZAQ over the strong zero-shot baselines.
arXiv Detail & Related papers (2021-03-29T01:33:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.