Feature Ranking in Credit-Risk with Qudit-Based Networks
- URL: http://arxiv.org/abs/2511.19150v1
- Date: Mon, 24 Nov 2025 14:15:57 GMT
- Title: Feature Ranking in Credit-Risk with Qudit-Based Networks
- Authors: Georgios Maragkopoulos, Lazaros Chavatzoglou, Aikaterini Mandilara, Dimitris Syvridis
- Abstract summary: In finance, predictive models must balance accuracy and interpretability. We present a quantum neural network (QNN) based on a single qudit. We benchmark our model on a real-world, imbalanced credit-risk dataset from Taiwan.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In finance, predictive models must balance accuracy and interpretability, particularly in credit risk assessment, where model decisions carry material consequences. We present a quantum neural network (QNN) based on a single qudit, in which both data features and trainable parameters are co-encoded within a unified unitary evolution generated by the full Lie algebra. This design explores the entire Hilbert space while enabling interpretability through the magnitudes of the learned coefficients. We benchmark our model on a real-world, imbalanced credit-risk dataset from Taiwan. The proposed QNN consistently outperforms logistic regression (LR) and matches random forest models in macro-F1 score while preserving a transparent correspondence between learned parameters and input feature importance. To quantify the interpretability of the proposed model, we introduce two complementary metrics: (i) the edit distance between the model's feature ranking and that of LR, and (ii) a feature-poisoning test in which selected features are replaced with noise. Results indicate that the proposed quantum model achieves competitive performance while offering a tractable path toward interpretable quantum learning.
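The two interpretability metrics from the abstract can be sketched in plain Python. This is a minimal illustration under stated assumptions: the function names, the choice of adjacent-swap (inversion-count) edit distance, and the use of standard-normal noise for poisoning are illustrative guesses, not details confirmed by the abstract.

```python
import numpy as np

def rank_edit_distance(ranking_a, ranking_b):
    """Edit distance between two feature rankings, counted as the number
    of adjacent swaps (inversions) needed to turn one into the other."""
    pos = {f: i for i, f in enumerate(ranking_b)}
    seq = [pos[f] for f in ranking_a]
    swaps = 0
    for i in range(len(seq)):          # O(n^2) inversion count
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                swaps += 1
    return swaps

def poison_features(X, feature_idx, seed=0):
    """Feature-poisoning helper: replace the selected columns of X with
    standard-normal noise and return the perturbed copy."""
    rng = np.random.default_rng(seed)
    Xp = X.copy()
    Xp[:, feature_idx] = rng.standard_normal((X.shape[0], len(feature_idx)))
    return Xp
```

Comparing model performance before and after poisoning the top-ranked features then indicates how much the model actually relies on the features it claims are important.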
Related papers
- D-Models and E-Models: Diversity-Stability Trade-offs in the Sampling Behavior of Large Language Models [91.21455683212224]
In large language models (LLMs), the probability of relevance for the next piece of information is linked to the probability of relevance for the next product. But whether fine-grained sampling probabilities faithfully align with task requirements remains an open question. We identify two model types: D-models, whose P_token exhibits large step-to-step variability and poor alignment with P_task; and E-models, whose P_token is more stable and better aligned with P_task.
arXiv Detail & Related papers (2026-01-25T14:59:09Z) - A Novel XAI-Enhanced Quantum Adversarial Networks for Velocity Dispersion Modeling in MaNGA Galaxies [14.016108312641101]
We propose a novel quantum adversarial framework that integrates a hybrid quantum neural network (QNN) with classical deep learning layers. In the proposed model, an adversarial evaluator concurrently guides the QNN by computing feedback loss, thereby optimizing both prediction accuracy and model explainability. Empirical evaluations show that the Vanilla model achieves RMSE = 0.27, MSE = 0.071, MAE = 0.21, and R^2 = 0.59, delivering the most consistent performance across regression metrics compared to adversarial counterparts.
arXiv Detail & Related papers (2025-10-28T16:27:10Z) - IQNN-CS: Interpretable Quantum Neural Network for Credit Scoring [2.2133667529581933]
We present IQNN-CS, an interpretable quantum neural network framework for multiclass credit risk classification. ICAA is a novel metric that quantifies attribution divergence across predicted classes, revealing how the model distinguishes between credit risk categories. Our results highlight a practical path toward transparent and accountable QML models for financial decision-making.
arXiv Detail & Related papers (2025-10-16T18:02:03Z) - Learning Compact Representations of LLM Abilities via Item Response Theory [35.74367665390977]
We explore how to learn compact representations of large language models (LLMs). We frame this problem as estimating the probability that a given model will correctly answer a specific query. To learn these parameters jointly, we introduce a Mixture-of-Experts (MoE) network that couples model- and query-level embeddings.
arXiv Detail & Related papers (2025-10-01T12:55:34Z) - Model Correlation Detection via Random Selection Probing [62.093777777813756]
Existing similarity-based methods require access to model parameters or produce scores without thresholds. We introduce Random Selection Probing (RSP), a hypothesis-testing framework that formulates model correlation detection as a statistical test. RSP produces rigorous p-values that quantify evidence of correlation.
arXiv Detail & Related papers (2025-09-29T01:40:26Z) - A recursive Bayesian neural network for constitutive modeling of sands under monotonic and cyclic loading [0.0]
In engineering, constitutive models are central to capturing soil behavior across diverse drainage conditions, stress paths, and loading histories. This study introduces a recursive Bayesian neural network (rBNN) framework that unifies temporal sequence learning with generalized inference. The framework is validated against four datasets spanning both simulated and experimental triaxial tests.
arXiv Detail & Related papers (2025-01-17T10:15:03Z) - Evaluating Generative Language Models in Information Extraction as Subjective Question Correction [49.729908337372436]
Inspired by the principles in subjective question correction, we propose a new evaluation method, SQC-Score.
Results on three information extraction tasks show that SQC-Score is more preferred by human annotators than the baseline metrics.
arXiv Detail & Related papers (2024-04-04T15:36:53Z) - Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC formulates the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z) - Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
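The ensembling step described above can be sketched with a generic `llm(prompt) -> answer` callable. This is a hypothetical interface: the prompt format, majority-vote ensembling, and agreement score are illustrative assumptions, not the paper's exact method.

```python
from collections import Counter

def clarification_ensemble(llm, query, clarifications):
    """Answer the query once per clarification, then ensemble by majority
    vote; low agreement across clarifications signals input ambiguity."""
    answers = [llm(f"{query}\nClarification: {c}") for c in clarifications]
    votes = Counter(answers)
    answer, count = votes.most_common(1)[0]
    agreement = count / len(answers)  # 1.0 means all clarifications agree
    return answer, agreement
```

The agreement score gives a crude handle on input (aleatoric) uncertainty: if different plausible clarifications of the same query yield different answers, the ambiguity lies in the input rather than the model.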
arXiv Detail & Related papers (2023-11-15T05:58:35Z) - Uncertainty quantification of two-phase flow in porous media via coupled-TgNN surrogate model [6.705438773768439]
Uncertainty quantification (UQ) of subsurface two-phase flow usually requires numerous executions of forward simulations under varying conditions.
In this work, a novel coupled theory-guided neural network (TgNN) based surrogate model is built to facilitate efficiency under the premise of satisfactory accuracy.
arXiv Detail & Related papers (2022-05-28T02:33:46Z) - Regularized Sequential Latent Variable Models with Adversarial Neural Networks [33.74611654607262]
We present different ways of using high-level latent random variables in RNNs to model variability in sequential data. We explore possible ways of using adversarial methods to train a variational RNN model.
arXiv Detail & Related papers (2021-08-10T08:05:14Z) - Explaining and Improving Model Behavior with k Nearest Neighbor Representations [107.24850861390196]
We propose using k nearest neighbor representations to identify training examples responsible for a model's predictions.
We show that kNN representations are effective at uncovering learned spurious associations.
Our results indicate that the kNN approach makes the finetuned model more robust to adversarial inputs.
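The kNN approach above reduces to a nearest-neighbour search over representation vectors. A minimal sketch, assuming the representations have already been extracted from the model; the function name and the choice of Euclidean distance are illustrative, not taken from the paper.

```python
import numpy as np

def knn_attribution(train_reprs, test_repr, k=5):
    """Return indices of the k training examples whose representations lie
    closest (Euclidean distance) to the test representation; these are the
    training points most responsible for the model's prediction."""
    dists = np.linalg.norm(np.asarray(train_reprs) - np.asarray(test_repr), axis=1)
    return np.argsort(dists)[:k]
```

Inspecting the labels and contents of the returned training examples is what surfaces spurious associations: if a prediction's nearest neighbours share an artifact rather than the target concept, the model has likely latched onto that artifact.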
arXiv Detail & Related papers (2020-10-18T16:55:25Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.