INTACT: Intent-Aware Representation Learning for Cryptographic Traffic Violation Detection
- URL: http://arxiv.org/abs/2602.21252v1
- Date: Sun, 22 Feb 2026 07:38:46 GMT
- Title: INTACT: Intent-Aware Representation Learning for Cryptographic Traffic Violation Detection
- Authors: Rahul D Ray
- Abstract summary: INTACT (INTent-Aware Cryptographic Traffic) is a policy-conditioned framework that reformulates violation detection as conditional constraint learning. We evaluate the framework on a real-world network flow dataset and a 210,000-trace synthetic multi-intent cryptographic dataset.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Security monitoring systems typically treat anomaly detection as identifying statistical deviations from observed data distributions. In cryptographic traffic analysis, however, violations are defined not by rarity but by explicit policy constraints, including key reuse prohibition, downgrade prevention, and bounded key lifetimes. This fundamental mismatch limits the interpretability and adaptability of conventional anomaly detection methods. We introduce INTACT (INTent-Aware Cryptographic Traffic), a policy-conditioned framework that reformulates violation detection as conditional constraint learning. Instead of learning a static decision boundary over behavioral features, INTACT models the probability of violation conditioned on both observed behavior and declared security intent. The architecture factorizes representation learning into behavioral and intent encoders whose fused embeddings produce a violation score, yielding a policy-parameterized family of decision boundaries. We evaluate the framework on a real-world network flow dataset and a 210,000-trace synthetic multi-intent cryptographic dataset. INTACT matches or exceeds strong unsupervised and supervised baselines, achieving near-perfect discrimination (AUROC up to 1.0000) in the real dataset and consistent superiority in detecting relational and composite violations in the synthetic setting. These results demonstrate that explicit intent conditioning improves discrimination, interpretability, and robustness in cryptographic monitoring.
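The abstract's factorized design (a behavioral encoder and an intent encoder whose fused embeddings produce a violation score) can be sketched in a few lines. The paper does not publish code, so the layer sizes, concatenation fusion, sigmoid head, and feature dimensions below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # two-layer perceptron with ReLU; used for both encoders
    h = np.maximum(0.0, x @ w1 + b1)
    return h @ w2 + b2

# hypothetical dimensions: 16 behavioral flow features,
# 8-dim declared-intent (policy) vector, 32-dim embeddings
d_beh, d_int, d_emb = 16, 8, 32

# randomly initialised weights stand in for trained parameters
Wb1, bb1 = rng.normal(size=(d_beh, 64)), np.zeros(64)
Wb2, bb2 = rng.normal(size=(64, d_emb)), np.zeros(d_emb)
Wi1, bi1 = rng.normal(size=(d_int, 64)), np.zeros(64)
Wi2, bi2 = rng.normal(size=(64, d_emb)), np.zeros(d_emb)
Wf, bf = rng.normal(size=(2 * d_emb,)), 0.0

def violation_score(behavior, intent):
    """Model P(violation | observed behavior, declared intent)."""
    zb = mlp(behavior, Wb1, bb1, Wb2, bb2)  # behavioral embedding
    zi = mlp(intent, Wi1, bi1, Wi2, bi2)    # intent embedding
    fused = np.concatenate([zb, zi])        # simple concatenation fusion
    logit = fused @ Wf + bf
    return 1.0 / (1.0 + np.exp(-logit))     # sigmoid -> score in (0, 1)

flow = rng.normal(size=d_beh)    # observed flow features
policy = rng.normal(size=d_int)  # declared security intent

s = violation_score(flow, policy)
assert 0.0 < s < 1.0
```

Because the intent vector enters the score as a conditioning input, varying the declared policy while holding the behavior fixed traces out the "policy-parameterized family of decision boundaries" the abstract describes: the same flow can be compliant under one policy and a violation under another.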
Related papers
- A Real-Time Privacy-Preserving Behavior Recognition System via Edge-Cloud Collaboration [45.24567063896216]
Traditional RGB surveillance raises significant concerns regarding visual recording and storage. Existing privacy-preserving methods compromise semantic understanding capabilities or fail to guarantee mathematical irreversibility against reconstruction attacks. This study presents a novel privacy-preserving perception technology based on the AI Flow theoretical framework and an edge-cloud collaborative architecture.
arXiv Detail & Related papers (2026-01-30T12:55:36Z) - On the Adversarial Robustness of Learning-based Conformal Novelty Detection [10.58528988397402]
We study the adversarial robustness of conformal novelty detection using AdaDetect. Our results show that adversarial perturbations can significantly increase the FDR while maintaining high detection power.
arXiv Detail & Related papers (2025-10-01T03:29:11Z) - Generative Modeling and Decision Fusion for Unknown Event Detection and Classification Using Synchrophasor Data [9.871276314615447]
This paper proposes a novel framework that integrates generative modeling, sliding-window temporal processing, and decision fusion to achieve robust event detection and classification. Experimental results demonstrate state-of-the-art accuracy, surpassing machine learning, deep learning, and envelope-based baselines.
arXiv Detail & Related papers (2025-09-26T18:04:03Z) - DATABench: Evaluating Dataset Auditing in Deep Learning from an Adversarial Perspective [70.77570343385928]
We introduce a novel taxonomy, classifying existing methods based on their reliance on internal features (IF) (inherent to the data) versus external features (EF) (artificially introduced for auditing). We formulate two primary attack types: evasion attacks, designed to conceal the use of a dataset, and forgery attacks, intending to falsely implicate an unused dataset. Building on the understanding of existing methods and attack objectives, we further propose systematic attack strategies: decoupling, removal, and detection for evasion; adversarial example-based methods for forgery. Our benchmark, DATABench, comprises 17 evasion attacks, 5 forgery attacks, and 9
arXiv Detail & Related papers (2025-07-08T03:07:15Z) - Verification-Guided Falsification for Safe RL via Explainable Abstraction and Risk-Aware Exploration [8.246285288584625]
We propose a hybrid framework that integrates explainability, model checking, and risk-guided falsification to achieve both rigor and coverage. Our approach begins by constructing a human-interpretable abstraction of the RL policy using Comprehensible Abstract Policy Summarization (CAPS). If no violation is detected, we cannot conclude satisfaction due to potential limitations in the abstraction and coverage of the offline dataset.
arXiv Detail & Related papers (2025-06-04T00:54:01Z) - Rethinking Contrastive Learning in Graph Anomaly Detection: A Clean-View Perspective [54.605073936695575]
Graph anomaly detection aims to identify unusual patterns in graph-based data, with wide applications in fields such as web security and financial fraud detection. Existing methods rely on contrastive learning, assuming that a lower similarity between a node and its local subgraph indicates abnormality. The presence of interfering edges invalidates this assumption, since it introduces disruptive noise that compromises the contrastive learning process. We propose a Clean-View Enhanced Graph Anomaly Detection framework (CVGAD), which includes a multi-scale anomaly awareness module to identify key sources of interference in the contrastive learning process.
arXiv Detail & Related papers (2025-05-23T15:05:56Z) - Hierarchical Entropy Disruption for Ransomware Detection: A Computationally-Driven Framework [0.0]
Monitoring entropy variations offers an alternative approach to identifying unauthorized data modifications. A framework leveraging hierarchical entropy disruption was introduced to analyze deviations in entropy distributions. Evaluating the framework across multiple ransomware variants demonstrated its capability to achieve high detection accuracy.
arXiv Detail & Related papers (2025-02-12T23:29:06Z) - Semantic Entanglement-Based Ransomware Detection via Probabilistic Latent Encryption Mapping [0.0]
Probabilistic Latent Encryption Mapping models encryption behaviors through statistical representations of entropy deviations and probabilistic dependencies in execution traces. Evaluations demonstrate that entropy-driven classification reduces false positive rates while maintaining high detection accuracy across diverse ransomware families and encryption methodologies. The ability to systematically infer encryption-induced deviations without requiring static attack signatures strengthens detection against adversarial evasion techniques.
arXiv Detail & Related papers (2025-02-04T21:27:58Z) - Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z) - Interactive System-wise Anomaly Detection [66.3766756452743]
Anomaly detection plays a fundamental role in various applications.
It is challenging for existing methods to handle the scenarios where the instances are systems whose characteristics are not readily observed as data.
We develop an end-to-end approach which includes an encoder-decoder module that learns system embeddings.
arXiv Detail & Related papers (2023-04-21T02:20:24Z) - Leveraging Unlabeled Data for Entity-Relation Extraction through Probabilistic Constraint Satisfaction [54.06292969184476]
We study the problem of entity-relation extraction in the presence of symbolic domain knowledge.
Our approach employs semantic loss which captures the precise meaning of a logical sentence.
With a focus on low-data regimes, we show that semantic loss outperforms the baselines by a wide margin.
arXiv Detail & Related papers (2021-03-20T00:16:29Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning [70.01650994156797]
Off-policy evaluation of sequential decision policies from observational data is necessary in batch reinforcement learning settings such as education and healthcare.
We develop an approach that estimates the bounds of a given policy.
We prove convergence to the sharp bounds as we collect more confounded data.
arXiv Detail & Related papers (2020-02-11T16:18:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.