Adversarial Robustness of Traffic Classification under Resource Constraints: Input Structure Matters
- URL: http://arxiv.org/abs/2512.02276v1
- Date: Mon, 01 Dec 2025 23:47:22 GMT
- Title: Adversarial Robustness of Traffic Classification under Resource Constraints: Input Structure Matters
- Authors: Adel Chehade, Edoardo Ragusa, Paolo Gastaldo, Rodolfo Zunino
- Abstract summary: Traffic classification (TC) plays a critical role in cybersecurity, particularly in IoT and embedded contexts. We use hardware-aware neural architecture search (HW-NAS) to derive lightweight TC models that are accurate, efficient, and deployable on edge platforms.
- Score: 2.4779082385578337
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Traffic classification (TC) plays a critical role in cybersecurity, particularly in IoT and embedded contexts, where inspection must often occur locally under tight hardware constraints. We use hardware-aware neural architecture search (HW-NAS) to derive lightweight TC models that are accurate, efficient, and deployable on edge platforms. Two input formats are considered: a flattened byte sequence and a 2D packet-wise time series; we examine how input structure affects adversarial vulnerability when using resource-constrained models. Robustness is assessed against white-box attacks, specifically the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). On USTC-TFC2016, both HW-NAS models achieve over 99% clean-data accuracy while remaining within 65k parameters and 2M FLOPs. Yet under perturbations of strength 0.1, their robustness diverges: the flat model retains over 85% accuracy, while the time-series variant drops below 35%. Adversarial fine-tuning delivers substantial robustness gains, with flat-input accuracy exceeding 96% and the time-series variant recovering over 60 percentage points in robustness, all without compromising efficiency. The results underscore how input structure influences adversarial vulnerability, and show that even compact, resource-efficient models can attain strong robustness, supporting their practical deployment in secure edge-based TC.
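The white-box evaluation described in the abstract (FGSM and PGD at perturbation strength 0.1) follows the standard L-infinity formulation. The sketch below is a generic PyTorch illustration of those two attacks under assumed conventions: inputs scaled to [0, 1], a cross-entropy loss, and a placeholder `model` returning logits. It is not the authors' HW-NAS pipeline or preprocessing.

```python
# Minimal sketch of L-infinity FGSM and PGD in PyTorch.
# Assumptions (not from the paper): inputs are scaled to [0, 1] and
# `model(x)` returns class logits for either input format.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    # Single-step FGSM: move each input by eps along the sign of the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()

def pgd(model, x, y, eps=0.1, alpha=0.01, steps=40):
    # Multi-step PGD: repeated small signed-gradient steps, projected back
    # into the eps-ball around the original input after every step.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # L-infinity projection
        x_adv = x_adv.clamp(0.0, 1.0)              # keep features in valid range
    return x_adv.detach()
```

Adversarial fine-tuning, as reported in the abstract, would then mix such perturbed batches back into further training; the exact recipe (attack mix, schedule, hyperparameters) is not specified here.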
Related papers
- The Drill-Down and Fabricate Test (DDFT): A Protocol for Measuring Epistemic Robustness in Language Models [0.0]
Current language model evaluations measure what models know under ideal conditions but not how robustly they know it under realistic stress. We introduce the Drill-Down and Fabricate Test (DDFT), a protocol that measures epistemic robustness. We find that flagship models exhibit brittleness despite their scale, while smaller models can achieve robust performance.
arXiv Detail & Related papers (2025-12-29T20:29:09Z) - On the Robustness Tradeoff in Fine-Tuning [5.95580243805785]
We evaluate the robustness and accuracy of fine-tuned models over 6 benchmark datasets. We observe a consistent trade-off between adversarial robustness and accuracy. These insights emphasize the need for robustness-aware fine-tuning to ensure reliable real-world deployments.
arXiv Detail & Related papers (2025-03-19T02:35:01Z) - Hybrid-Segmentor: A Hybrid Approach to Automated Fine-Grained Crack Segmentation in Civil Infrastructure [52.2025114590481]
We introduce Hybrid-Segmentor, an encoder-decoder based approach that is capable of extracting both fine-grained local and global crack features.
This allows the model to improve its generalization capability in distinguishing cracks of various shapes, surface types, and sizes.
The proposed model outperforms existing benchmark models across 5 quantitative metrics (accuracy 0.971, precision 0.804, recall 0.744, F1-score 0.770, and IoU score 0.630), achieving state-of-the-art performance.
arXiv Detail & Related papers (2024-09-04T16:47:16Z) - Structural damage detection via hierarchical damage information with volumetric assessment [1.4470320778878742]
Structural health monitoring (SHM) is essential for ensuring the safety and longevity of infrastructure. This study introduces the Guided Detection Network (Guided-DetNet), a framework designed to address these challenges. Guided-DetNet is characterized by a Generative Attention Module (GAM), a Hierarchical Elimination Algorithm (HEA), and a Volumetric Contour Visual Assessment (VCVA).
arXiv Detail & Related papers (2024-07-29T04:33:04Z) - PatchCURE: Improving Certifiable Robustness, Model Utility, and Computation Efficiency of Adversarial Patch Defenses [46.098482151215556]
State-of-the-art defenses against adversarial patch attacks can now achieve strong certifiable robustness with a marginal drop in model utility.
This impressive performance typically comes at the cost of 10-100x more inference-time computation compared to undefended models.
We propose a defense framework named PatchCURE to approach this trade-off problem.
arXiv Detail & Related papers (2023-10-19T18:14:33Z) - Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly robust instance-reweighted adversarial training framework.
Our importance weights are obtained by optimizing a KL-divergence-regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z) - Certified Interpretability Robustness for Class Activation Mapping [77.58769591550225]
We present CORGI, short for Certifiably prOvable Robustness Guarantees for Interpretability mapping.
CORGI is an algorithm that takes in an input image and gives a certifiable lower bound for the robustness of its CAM interpretability map.
We show the effectiveness of CORGI via a case study on traffic sign data, certifying lower bounds on the minimum adversarial perturbation.
arXiv Detail & Related papers (2023-01-26T18:58:11Z) - VeriCompress: A Tool to Streamline the Synthesis of Verified Robust Compressed Neural Networks from Scratch [10.061078548888567]
The widespread integration of AI has led to the deployment of neural networks (NNs) on edge and other resource-limited platforms in safety-critical scenarios.
This study introduces VeriCompress, a tool that automates the search and training of compressed models with robustness guarantees.
The method trains models 2-3 times faster than the state-of-the-art approaches, surpassing relevant baseline approaches by average accuracy and robustness gains of 15.1 and 9.8 percentage points, respectively.
arXiv Detail & Related papers (2022-11-17T23:42:10Z) - Efficient Certified Defenses Against Patch Attacks on Image Classifiers [13.858624044986815]
BagCert is a novel combination of model architecture and certification procedure that allows efficient certification.
On CIFAR10, BagCert certifies examples in 43 seconds on a single GPU and obtains 86% clean and 60% certified accuracy against 5x5 patches.
arXiv Detail & Related papers (2021-02-08T12:11:41Z) - To be Robust or to be Fair: Towards Fairness in Adversarial Training [83.42241071662897]
We find that adversarial training algorithms tend to introduce severe disparity of accuracy and robustness between different groups of data.
We propose a Fair-Robust-Learning (FRL) framework to mitigate this unfairness problem when performing adversarial defense.
arXiv Detail & Related papers (2020-10-13T02:21:54Z) - Second-Order Provable Defenses against Adversarial Attacks [63.34032156196848]
We show that if the eigenvalues of the Hessian of the network are bounded, we can compute a robustness certificate in the $l_2$ norm efficiently using convex optimization.
We achieve certified accuracies of 5.78%, 44.96%, and 43.19%, respectively, in comparison with IBP-based methods.
arXiv Detail & Related papers (2020-06-01T05:55:18Z) - Robust Encodings: A Framework for Combating Adversarial Typos [85.70270979772388]
NLP systems are easily fooled by small perturbations of inputs.
Existing procedures to defend against such perturbations provide guaranteed robustness to worst-case attacks.
We introduce robust encodings (RobEn) that confer guaranteed robustness without making compromises on model architecture.
arXiv Detail & Related papers (2020-05-04T01:28:18Z)