CRC-SGAD: Conformal Risk Control for Supervised Graph Anomaly Detection
- URL: http://arxiv.org/abs/2504.02248v1
- Date: Thu, 03 Apr 2025 03:27:49 GMT
- Title: CRC-SGAD: Conformal Risk Control for Supervised Graph Anomaly Detection
- Authors: Songran Bai, Xiaolong Zheng, Daniel Dajun Zeng
- Abstract summary: We propose a framework integrating statistical risk control into Graph Anomaly Detection (GAD). A Dual-Threshold Conformal Risk Control mechanism provides theoretically guaranteed bounds for both the False Negative Rate (FNR) and the False Positive Rate (FPR). Experiments on four datasets and five GAD models demonstrate statistically significant improvements in FNR and FPR control and in prediction set size.
- Score: 2.290229842388034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Anomaly Detection (GAD) is critical in security-sensitive domains, yet it faces reliability challenges: miscalibrated confidence estimation (underconfidence on normal nodes, overconfidence on anomalies), adversarial vulnerability of the derived confidence scores under structural perturbations, and the limited efficacy of conventional calibration methods on sparse anomaly patterns. We therefore propose CRC-SGAD, a framework integrating statistical risk control into GAD via two innovations: (1) a Dual-Threshold Conformal Risk Control mechanism that provides theoretically guaranteed bounds for both the False Negative Rate (FNR) and the False Positive Rate (FPR) by producing prediction sets; (2) a Subgraph-aware Spectral Graph Neural Calibrator (SSGNC) that optimizes node representations through adaptive spectral filtering while reducing prediction set size via hybrid loss optimization. Experiments on four datasets and five GAD models demonstrate statistically significant improvements in FNR and FPR control and in prediction set size. CRC-SGAD establishes a paradigm for statistically rigorous anomaly detection in graph-structured security applications.
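As a rough sketch of how a dual-threshold conformal mechanism of this kind can be calibrated, the snippet below applies the standard split-conformal quantile argument to a held-out set of anomaly scores; the function names and the simple quantile rule are illustrative assumptions, not the paper's exact procedure (which further couples the thresholds with the SSGNC calibrator).

```python
import numpy as np

def dual_threshold_crc(scores, labels, alpha_fnr=0.1, alpha_fpr=0.1):
    """Calibrate two thresholds on held-out anomaly scores via the standard
    split-conformal quantile argument, so that on exchangeable test nodes
    P(a true anomaly's set misses "anomaly") <= alpha_fnr and
    P(a true normal's set misses "normal") <= alpha_fpr."""
    anom = np.sort(scores[labels == 1])   # calibration scores of anomalies
    norm = np.sort(scores[labels == 0])   # calibration scores of normal nodes
    n1, n0 = len(anom), len(norm)

    # "anomaly" enters the set when score >= t_lo; a miss is score < t_lo.
    k1 = int(np.floor(alpha_fnr * (n1 + 1)))
    t_lo = anom[k1 - 1] if k1 >= 1 else -np.inf

    # "normal" enters the set when score <= t_hi; a miss is score > t_hi.
    k0 = int(np.ceil((1 - alpha_fpr) * (n0 + 1)))
    t_hi = norm[k0 - 1] if k0 <= n0 else np.inf
    return t_lo, t_hi

def prediction_set(score, t_lo, t_hi):
    """Set-valued prediction; ambiguous nodes receive both labels."""
    labels = set()
    if score >= t_lo:
        labels.add("anomaly")
    if score <= t_hi:
        labels.add("normal")
    return labels
```

Scores falling between the two thresholds yield the ambiguous set {normal, anomaly}; shrinking that slack is precisely what the paper's SSGNC component is trained to do.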
Related papers
- Data-Driven Calibration of Prediction Sets in Large Vision-Language Models Based on Inductive Conformal Prediction [0.0]
We propose a model-agnostic uncertainty quantification method that integrates dynamic threshold calibration and cross-modal consistency verification.
We show that the framework achieves stable performance across varying calibration-to-test split ratios, underscoring its robustness for real-world deployment in healthcare, autonomous systems, and other safety-sensitive domains.
This work bridges the gap between theoretical reliability and practical applicability in multi-modal AI systems, offering a scalable solution for hallucination detection and uncertainty-aware decision-making.
arXiv Detail & Related papers (2025-04-24T15:39:46Z) - Conditional Conformal Risk Adaptation [9.559062601251464]
We develop a new score function for creating adaptive prediction sets that significantly improve conditional risk control for segmentation tasks.
We introduce a specialized probability calibration framework that enhances the reliability of pixel-wise inclusion estimates.
Our experiments on polyp segmentation demonstrate that all three methods provide valid marginal risk control and deliver more consistent conditional risk control.
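As a point of reference for this entry and the medical-segmentation entry below, here is a minimal sketch of the underlying marginal conformal-risk-control step such methods adapt; the grid search, the per-image false-negative-rate loss, and all names are illustrative assumptions.

```python
import numpy as np

def calibrate_threshold(prob_maps, masks, alpha=0.1):
    """Generic conformal risk control for segmentation: find the largest
    pixel-inclusion threshold lam whose calibration FNR, inflated by the
    usual (n + 1) correction, stays below alpha (loss bounded by B = 1)."""
    n = len(prob_maps)
    for lam in np.linspace(1.0, 0.0, 1001):          # strict -> permissive
        fnrs = [((p < lam) & (m == 1)).sum() / max(m.sum(), 1)
                for p, m in zip(prob_maps, masks)]   # per-image FNR
        if (n * np.mean(fnrs) + 1.0) / (n + 1) <= alpha:
            return lam                               # first hit = largest valid
    return 0.0                                       # fall back: include all pixels
```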
arXiv Detail & Related papers (2025-04-10T10:01:06Z) - FAIR-SIGHT: Fairness Assurance in Image Recognition via Simultaneous Conformal Thresholding and Dynamic Output Repair [4.825037489691159]
We introduce a post-hoc framework designed to ensure fairness in computer vision systems by combining conformal prediction with a dynamic output repair mechanism.
Our approach calculates a fairness-aware non-conformity score that simultaneously assesses prediction errors and fairness violations.
When the non-conformity score for a new image exceeds the threshold, FAIR-SIGHT implements targeted corrective adjustments, such as logit shifts for classification and confidence recalibration for detection.
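The abstract leaves the repair rule unspecified, so the following is only a hypothetical sketch of the control flow it describes for classification (conformal threshold check, then a logit shift); `shift` and `group_idx` are invented parameters.

```python
import numpy as np

def repair_and_predict(logits, nonconformity, threshold, shift, group_idx):
    """Hypothetical post-hoc repair in the spirit described above: if the
    fairness-aware non-conformity score exceeds the conformal threshold,
    shift the logits of the disadvantaged class before predicting."""
    if nonconformity > threshold:
        logits = logits.copy()
        logits[group_idx] += shift   # illustrative corrective adjustment
    return int(np.argmax(logits))
```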
arXiv Detail & Related papers (2025-04-10T02:23:06Z) - Statistical Management of the False Discovery Rate in Medical Instance Segmentation Based on Conformal Risk Control [2.4578723416255754]
Instance segmentation plays a pivotal role in medical image analysis by enabling precise localization and delineation of lesions, tumors, and anatomical structures.
Deep learning models such as Mask R-CNN and BlendMask have achieved remarkable progress, but their application in high-risk medical scenarios remains constrained by confidence calibration issues.
We propose a robust quality control framework based on conformal prediction theory to address this challenge.
arXiv Detail & Related papers (2025-04-06T13:31:19Z) - Risk-Calibrated Affective Speech Recognition via Conformal Coverage Guarantees: A Stochastic Calibrative Framework for Emergent Uncertainty Quantification [0.0]
Traffic safety challenges arising from extreme driver emotions highlight the urgent need for reliable emotion recognition systems. Traditional deep learning approaches in speech emotion recognition suffer from overfitting and poorly calibrated confidence estimates. We propose a framework integrating Conformal Prediction (CP) and Risk Control, using Mel-spectrogram features processed through a pre-trained neural network.
arXiv Detail & Related papers (2025-03-24T12:26:28Z) - A Label-Free Heterophily-Guided Approach for Unsupervised Graph Fraud Detection [60.09453163562244]
We propose a Heterophily-guided Unsupervised Graph fraud dEtection approach (HUGE) for unsupervised GFD. In the estimation module, we design a novel label-free heterophily metric called HALO, which captures the critical graph properties for GFD. In the alignment-based fraud detection module, we develop a joint-GNN architecture with a ranking loss and an asymmetric alignment loss.
arXiv Detail & Related papers (2025-02-18T22:07:36Z) - SCADE: Scalable Framework for Anomaly Detection in High-Performance System [0.0]
Command-line interfaces remain integral to high-performance computing environments. Traditional security solutions struggle to detect anomalies due to their context-specific nature, the lack of labeled data, and the prevalence of sophisticated attacks like Living-off-the-Land (LOL). We introduce the Scalable Command-Line Anomaly Detection Engine (SCADE), a framework that combines global statistical models with local context-specific analysis for unsupervised anomaly detection.
arXiv Detail & Related papers (2024-12-05T15:39:13Z) - ConU: Conformal Uncertainty in Large Language Models with Correctness Coverage Guarantees [68.33498595506941]
We introduce a novel uncertainty measure based on self-consistency theory.
We then develop a conformal uncertainty criterion by integrating the uncertainty condition aligned with correctness into the CP algorithm.
Empirical evaluations indicate that our uncertainty measure outperforms prior state-of-the-art methods.
arXiv Detail & Related papers (2024-06-29T17:33:07Z) - Conditional Shift-Robust Conformal Prediction for Graph Neural Network [0.0]
Graph Neural Networks (GNNs) have emerged as potent tools for predicting outcomes in graph-structured data. Despite their efficacy, GNNs have limited ability to provide robust uncertainty estimates. We propose Conditional Shift Robust (CondSR) conformal prediction for GNNs.
arXiv Detail & Related papers (2024-05-20T11:47:31Z) - Forking Uncertainties: Reliable Prediction and Model Predictive Control with Sequence Models via Conformal Risk Control [40.918012779935246]
We introduce PTS-CRC, a novel post-hoc calibration procedure that operates on the predictions produced by any pre-designed probabilistic forecaster to yield reliable error bars.
Unlike the state of the art, PTS-CRC can satisfy reliability definitions beyond coverage.
We experimentally validate the performance of PTS-CRC prediction and control by studying a number of use cases in the context of wireless networking.
arXiv Detail & Related papers (2023-10-16T11:35:41Z) - Conservative Prediction via Data-Driven Confidence Minimization [70.93946578046003]
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes confidence on an uncertainty dataset.
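One plausible reading of that objective, sketched here under the assumption that confidence is penalized through the negative predictive entropy on the uncertainty dataset (the paper's exact confidence measure may differ):

```python
import torch
import torch.nn.functional as F

def dcm_style_loss(model, x_labeled, y_labeled, x_uncertain, weight=0.5):
    """Sketch: cross-entropy on labeled data plus a confidence penalty
    (negative entropy of the softmax) on a separate uncertainty dataset,
    so minimizing the total flattens predictions where the model
    should be unsure."""
    ce = F.cross_entropy(model(x_labeled), y_labeled)
    probs = F.softmax(model(x_uncertain), dim=1)
    neg_entropy = (probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
    return ce + weight * neg_entropy
```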
arXiv Detail & Related papers (2023-06-08T07:05:36Z) - Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Evaluating probabilistic classifiers: Reliability diagrams and score decompositions revisited [68.8204255655161]
We introduce the CORP approach, which generates provably statistically Consistent, Optimally binned, and Reproducible reliability diagrams in an automated way.
CORP is based on non-parametric isotonic regression and is implemented via the pool-adjacent-violators (PAV) algorithm.
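A minimal sketch of that isotonic-regression step, with scikit-learn's `IsotonicRegression` (a PAV implementation) standing in for the paper's own code:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def corp_reliability_curve(probs, outcomes):
    """CORP-style reliability diagram: isotonically regress the binary
    outcomes on the forecast probabilities (scikit-learn solves this with
    the pool-adjacent-violators algorithm) and return the fitted,
    optimally binned calibration curve against the forecasts."""
    order = np.argsort(probs)
    x, y = probs[order], outcomes[order]
    iso = IsotonicRegression(y_min=0.0, y_max=1.0)
    return x, iso.fit_transform(x, y)   # (forecast, recalibrated) pairs
```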
arXiv Detail & Related papers (2020-08-07T08:22:26Z) - Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed, utilizing the Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
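As a generic illustration of this recipe rather than the paper's exact setup, the sketch below lets a Gaussian-process Bayesian-optimization loop (scikit-optimize, an assumed dependency) tune an off-the-shelf detector against validation F1; `IsolationForest` is a stand-in for the classifiers the paper actually evaluates.

```python
from skopt import gp_minimize
from sklearn.ensemble import IsolationForest
from sklearn.metrics import f1_score

def tune_detector(X_train, X_val, y_val):
    """Let Bayesian optimization pick detector hyperparameters that
    maximize F1 on a labeled validation split."""
    def objective(params):
        n_estimators, contamination = params
        det = IsolationForest(n_estimators=int(n_estimators),
                              contamination=contamination,
                              random_state=0).fit(X_train)
        preds = (det.predict(X_val) == -1).astype(int)  # -1 marks anomalies
        return -f1_score(y_val, preds)                  # gp_minimize minimizes

    result = gp_minimize(objective, [(50, 400), (0.01, 0.3)],
                         n_calls=20, random_state=0)
    return result.x  # best (n_estimators, contamination)
```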