Quantum-Inspired Optimization Process for Data Imputation
- URL: http://arxiv.org/abs/2505.04841v2
- Date: Sun, 11 May 2025 00:20:39 GMT
- Title: Quantum-Inspired Optimization Process for Data Imputation
- Authors: Nishikanta Mohanty, Bikash K. Behera, Badshah Mukherjee, Christopher Ferrie
- Abstract summary: This study introduces a novel quantum-inspired imputation framework evaluated on the UCI Diabetes dataset. The method integrates Principal Component Analysis (PCA) with quantum-assisted rotations, optimized through gradient-free classical optimizers, to reconstruct missing values while preserving statistical fidelity.
- Score: 1.5186937600119894
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Data imputation is a critical step in data pre-processing, particularly for datasets with missing or unreliable values. This study introduces a novel quantum-inspired imputation framework evaluated on the UCI Diabetes dataset, which contains biologically implausible missing values across several clinical features. The method integrates Principal Component Analysis (PCA) with quantum-assisted rotations, optimized through gradient-free classical optimizers (COBYLA, Simulated Annealing, and Differential Evolution), to reconstruct missing values while preserving statistical fidelity. Reconstructed values are constrained within +/-2 standard deviations of original feature distributions, avoiding unrealistic clustering around central tendencies. This approach achieves a substantial and statistically significant improvement, including an average reduction of over 85% in Wasserstein distance and Kolmogorov-Smirnov test p-values between 0.18 and 0.22, compared to p-values > 0.99 in classical methods such as Mean, KNN, and MICE. The method also eliminates zero-value artifacts and enhances the realism and variability of imputed data. By combining quantum-inspired transformations with a scalable classical framework, this methodology provides a robust solution for imputation tasks in domains such as healthcare and AI pipelines, where data quality and integrity are crucial.
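The abstract describes a pipeline of rotation-based imputation, a gradient-free optimizer, a +/-2 standard-deviation constraint, and Wasserstein-distance evaluation. The sketch below is a minimal classical analogue of that loop on synthetic data, not the authors' implementation: the single rotation angle, the latent seed vector, and the one-feature setup are all simplifying assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
observed = rng.normal(120.0, 30.0, size=500)   # stand-in for one clinical feature
missing_mask = rng.random(500) < 0.1           # treat ~10% of entries as missing
data = observed.copy()
data[missing_mask] = np.nan

mu, sigma = np.nanmean(data), np.nanstd(data)
seed = rng.standard_normal(int(missing_mask.sum()))  # fixed latent vector

def rotate_impute(theta):
    # Classical analogue of a "quantum-assisted rotation": mix the latent
    # vector through cos/sin components of the angle theta, then rescale.
    vals = mu + sigma * (np.cos(theta) * seed + np.sin(theta))
    # Constrain imputations to +/-2 standard deviations, as in the abstract.
    return np.clip(vals, mu - 2 * sigma, mu + 2 * sigma)

def objective(params):
    # Distributional fidelity: Wasserstein distance between observed values
    # and the candidate imputations.
    return wasserstein_distance(data[~missing_mask], rotate_impute(params[0]))

# Gradient-free optimization of the rotation angle (COBYLA, as in the paper).
res = minimize(objective, x0=[0.1], method="COBYLA")
imputed = data.copy()
imputed[missing_mask] = rotate_impute(res.x[0])
```

Simulated Annealing or Differential Evolution could be swapped in for COBYLA via `scipy.optimize.dual_annealing` or `differential_evolution`; the objective and the clipping constraint stay the same.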
Related papers
- Quantum Machine Learning for Predicting Anastomotic Leak: A Clinical Study [0.16777183511743468]
Anastomotic leak (AL) is a life-threatening complication following colorectal surgery. This study explores the potential of Quantum Neural Networks (QNNs) for AL prediction.
arXiv Detail & Related papers (2025-06-02T14:13:10Z) - Enhancing Treatment Effect Estimation via Active Learning: A Counterfactual Covering Perspective [61.284843894545475]
Complex algorithms for treatment effect estimation are ineffective when handling insufficiently labeled training sets. We propose FCCM, which transforms the optimization objective into Factual and Counterfactual Coverage Maximization to ensure effective radius reduction during data acquisition. Benchmarking FCCM against other baselines demonstrates its superiority across both fully synthetic and semi-synthetic datasets.
arXiv Detail & Related papers (2025-05-08T13:42:00Z) - CALF: A Conditionally Adaptive Loss Function to Mitigate Class-Imbalanced Segmentation [0.2902243522110345]
Imbalanced datasets pose a challenge in training deep learning (DL) models for medical diagnostics. We propose a novel, statistically driven, conditionally adaptive loss function (CALF) tailored to accommodate the conditions of imbalanced datasets in DL training.
arXiv Detail & Related papers (2025-04-06T12:03:33Z) - Enhanced ECG Arrhythmia Detection Accuracy by Optimizing Divergence-Based Data Fusion [5.575308369829893]
We propose a feature-based fusion algorithm utilizing Kernel Density Estimation (KDE) and Kullback-Leibler (KL) divergence. We verify our method on in-house datasets of ECG signals collected from 2000 healthy and 2000 diseased individuals, as well as on the publicly available PTB-XL dataset. The results demonstrate that the proposed fusion method significantly enhances feature-based classification accuracy for abnormal ECG cases in the merged datasets.
arXiv Detail & Related papers (2025-03-19T12:16:48Z) - Learning Massive-scale Partial Correlation Networks in Clinical Multi-omics Studies with HP-ACCORD [10.459304300065186]
We introduce a novel pseudolikelihood-based graphical model framework. It maintains estimation and selection consistency in various metrics under high-dimensional assumptions. A high-performance computing implementation of our framework was tested on simulated data with up to one million variables.
arXiv Detail & Related papers (2024-12-16T08:38:02Z) - Instance-based Learning with Prototype Reduction for Real-Time Proportional Myocontrol: A Randomized User Study Demonstrating Accuracy-preserving Data Reduction for Prosthetic Embedded Systems [0.0]
This work presents the design, implementation and validation of learning techniques based on the kNN scheme for gesture detection in prosthetic control.
The influence of parameterization and varying proportionality schemes is analyzed, utilizing an eight-channel sEMG armband.
arXiv Detail & Related papers (2023-08-21T20:15:35Z) - Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z) - ClusterQ: Semantic Feature Distribution Alignment for Data-Free Quantization [111.12063632743013]
We propose a new and effective data-free quantization method termed ClusterQ.
To obtain high inter-class separability of semantic features, we cluster and align the feature distribution statistics.
We also incorporate the intra-class variance to solve class-wise mode collapse.
arXiv Detail & Related papers (2022-04-30T06:58:56Z) - Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z) - Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that yields statistical accuracy based on the interplay between the deterministic convergence rate of the algorithm at the population level and its degree of (in)stability when applied to an empirical object based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z)
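Several entries above build on distribution-comparison primitives, most explicitly the ECG fusion paper's pairing of Kernel Density Estimation with KL divergence. A minimal sketch of that primitive follows; the synthetic "healthy"/"diseased" samples and the evaluation grid are illustrative assumptions, not the paper's data or code.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 1000)    # synthetic stand-in for one ECG feature
diseased = rng.normal(0.8, 1.2, 1000)   # shifted, wider stand-in distribution

# Estimate both densities on a shared grid, then compare them with a
# discrete approximation of the KL divergence D(P || Q).
grid = np.linspace(-6.0, 7.0, 400)
p = gaussian_kde(healthy)(grid)
q = gaussian_kde(diseased)(grid)
p, q = p / p.sum(), q / q.sum()         # normalize to discrete distributions
kl_divergence = float(np.sum(p * np.log(p / q)))
```

A larger divergence indicates more separable feature distributions, which is the signal such a fusion scheme would exploit when weighting features.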
This list is automatically generated from the titles and abstracts of the papers in this site.