Reliable Hierarchical Operating System Fingerprinting via Conformal Prediction
- URL: http://arxiv.org/abs/2602.12825v1
- Date: Fri, 13 Feb 2026 11:20:48 GMT
- Title: Reliable Hierarchical Operating System Fingerprinting via Conformal Prediction
- Authors: Rubén Pérez-Jove, Osvaldo Simeone, Alejandro Pazos, Jose Vázquez-Naya
- Abstract summary: Conformal Prediction (CP) could be wrapped around existing methods to obtain prediction sets with guaranteed coverage. This work addresses these limitations by introducing and evaluating two distinct structured CP strategies. While both methods satisfy validity guarantees, they expose a fundamental trade-off between level-wise efficiency and structural consistency.
- Score: 62.40452053128524
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Operating System (OS) fingerprinting is critical for network security, but conventional methods do not provide formal uncertainty quantification mechanisms. Conformal Prediction (CP) could be directly wrapped around existing methods to obtain prediction sets with guaranteed coverage. However, a direct application of CP would treat OS identification as a flat classification problem, ignoring the natural taxonomic structure of OSs and providing brittle point predictions. This work addresses these limitations by introducing and evaluating two distinct structured CP strategies: level-wise CP (L-CP), which calibrates each hierarchy level independently, and projection-based CP (P-CP), which ensures structural consistency by projecting leaf-level sets upwards. Our results demonstrate that, while both methods satisfy validity guarantees, they expose a fundamental trade-off between level-wise efficiency and structural consistency. L-CP yields tighter prediction sets suitable for human forensic analysis but suffers from taxonomic inconsistencies. Conversely, P-CP guarantees hierarchically consistent, nested sets ideal for automated policy enforcement, albeit at the cost of reduced efficiency at coarser levels.
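To make the two strategies concrete, the following is a minimal split-conformal sketch over a toy two-level OS taxonomy. The label names, the dict-based data layout, and the nonconformity score 1 - p(y|x) are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Minimal sketch of the two structured CP strategies described in the abstract:
# level-wise CP (L-CP) and projection-based CP (P-CP), using split conformal
# prediction. The toy taxonomy and score choice are assumptions, not the paper's code.
import numpy as np

# Hypothetical two-level taxonomy: leaf OS versions mapped to coarser OS families.
LEAF_TO_FAMILY = {
    "win10": "windows", "win11": "windows",
    "ubuntu22": "linux", "debian12": "linux",
    "macos14": "macos",
}
LEAVES = list(LEAF_TO_FAMILY)                      # column order of leaf probabilities
FAMILIES = sorted(set(LEAF_TO_FAMILY.values()))    # column order of family probabilities


def conformal_quantile(scores, alpha):
    """k-th smallest calibration score with k = ceil((n + 1) * (1 - alpha))."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    if k > n:                # too few calibration points: the set must cover everything
        return np.inf
    return np.sort(scores)[k - 1]


def prediction_set(probs, qhat, labels):
    """Split-CP set: keep every label whose nonconformity 1 - p(y|x) is <= qhat."""
    return {lab for lab, p in zip(labels, probs) if 1.0 - p <= qhat}


def lcp_sets(cal_probs, cal_labels, test_probs, alpha):
    """L-CP: calibrate each hierarchy level independently.

    cal_probs, cal_labels, test_probs are dicts keyed by level ("leaf", "family");
    probabilities are arrays whose columns follow LEAVES / FAMILIES order.
    """
    sets = {}
    for level, labels in (("leaf", LEAVES), ("family", FAMILIES)):
        idx = [labels.index(y) for y in cal_labels[level]]
        scores = 1.0 - cal_probs[level][np.arange(len(idx)), idx]
        qhat = conformal_quantile(scores, alpha)
        sets[level] = prediction_set(test_probs[level], qhat, labels)
    return sets  # valid per level, but the two sets need not be taxonomically consistent


def pcp_sets(cal_leaf_probs, cal_leaf_labels, test_leaf_probs, alpha):
    """P-CP: calibrate only at the leaf level, then project the set upwards."""
    idx = [LEAVES.index(y) for y in cal_leaf_labels]
    scores = 1.0 - cal_leaf_probs[np.arange(len(idx)), idx]
    qhat = conformal_quantile(scores, alpha)
    leaf_set = prediction_set(test_leaf_probs, qhat, LEAVES)
    family_set = {LEAF_TO_FAMILY[leaf] for leaf in leaf_set}  # nested by construction
    return {"leaf": leaf_set, "family": family_set}
```

Under this sketch, L-CP calibrates a separate quantile per level, so its family-level set can fail to contain the family of every leaf in its leaf-level set, whereas P-CP defines the family set as exactly the families of its leaf set, so nesting holds by construction, typically at the cost of larger sets at the coarser level. This mirrors the trade-off reported in the abstract.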
Related papers
- Hierarchical Conformal Classification [5.964388602612373]
Conformal prediction (CP) is a powerful framework for quantifying uncertainty in machine learning models. Standard CP treats classes as flat and unstructured, ignoring relationships such as semantic or hierarchical structure among class labels. This paper presents hierarchical conformal classification (HCC), an extension of CP that incorporates class hierarchies into both the structure and semantics of prediction sets.
arXiv Detail & Related papers (2025-08-18T18:05:55Z) - Conformal Prediction for Privacy-Preserving Machine Learning [83.88591755871734]
Using AES-encrypted variants of the MNIST dataset, we demonstrate that Conformal Prediction methods remain effective even when applied directly in the encrypted domain. Our work sets a foundation for principled uncertainty quantification in secure, privacy-aware learning systems.
arXiv Detail & Related papers (2025-07-13T15:29:14Z) - Efficient Robust Conformal Prediction via Lipschitz-Bounded Networks [7.428082880875367]
Conformal Prediction (CP) has proven to be an effective method for improving the trustworthiness of neural networks. We propose a new method that leverages Lipschitz-bounded networks to precisely and efficiently estimate robust CP sets. Our lip-rcp method makes this approach as efficient as vanilla CP while also providing robustness guarantees.
arXiv Detail & Related papers (2025-06-05T09:38:14Z) - Evidential Uncertainty Sets in Deep Classifiers Using Conformal Prediction [1.2430809884830318]
We propose the Evidential Conformal Prediction (ECP) method for image classifiers to generate conformal prediction sets.
Our method is based on a non-conformity score function that has its roots in Evidential Deep Learning (EDL).
arXiv Detail & Related papers (2024-06-16T03:00:16Z) - The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain high adversarial robustness to protect against potential adversarial attacks.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
arXiv Detail & Related papers (2024-05-14T18:05:19Z) - Efficient Conformal Prediction under Data Heterogeneity [79.35418041861327]
Conformal Prediction (CP) stands out as a robust framework for uncertainty quantification.
Existing approaches for tackling non-exchangeability lead to methods that are not computable beyond the simplest examples.
This work introduces a new efficient approach to CP that produces provably valid confidence sets for fairly general non-exchangeable data distributions.
arXiv Detail & Related papers (2023-12-25T20:02:51Z) - Probabilistically robust conformal prediction [9.401004747930974]
Conformal prediction (CP) is a framework to quantify uncertainty of machine learning classifiers including deep neural networks.
Almost all existing work on CP assumes clean testing data, and little is known about the robustness of CP algorithms.
This paper studies the problem of probabilistically robust conformal prediction (PRCP) which ensures robustness to most perturbations.
arXiv Detail & Related papers (2023-07-31T01:32:06Z) - ProTeCt: Prompt Tuning for Taxonomic Open Set Classification [59.59442518849203]
Few-shot adaptation methods do not fare well in the taxonomic open set (TOS) setting.
We propose a prompt tuning technique that calibrates the hierarchical consistency of model predictions.
A new Prompt Tuning for Hierarchical Consistency (ProTeCt) technique is then proposed to calibrate classification across label set granularities.
arXiv Detail & Related papers (2023-06-04T02:55:25Z) - Learning Optimal Conformal Classifiers [32.68483191509137]
Conformal prediction (CP) is used to predict confidence sets containing the true class with a user-specified probability.
This paper explores strategies to differentiate through CP during training, with the goal of training the model with the conformal wrapper end-to-end.
We show that conformal training (ConfTr) outperforms state-of-the-art CP methods for classification by reducing the average confidence set size.
arXiv Detail & Related papers (2021-10-18T11:25:33Z) - Selective Classification via One-Sided Prediction [54.05407231648068]
A one-sided prediction (OSP) based relaxation yields a selective classification (SC) scheme that attains near-optimal coverage in the practically relevant high target accuracy regime.
We theoretically derive generalization bounds for SC and OSP, and empirically show that our scheme strongly outperforms state-of-the-art methods in coverage at small error levels.
arXiv Detail & Related papers (2020-10-15T16:14:27Z)