Efficient Adversarial Malware Defense via Trust-Based Raw Override and Confidence-Adaptive Bit-Depth Reduction
- URL: http://arxiv.org/abs/2511.12827v1
- Date: Sun, 16 Nov 2025 23:21:44 GMT
- Title: Efficient Adversarial Malware Defense via Trust-Based Raw Override and Confidence-Adaptive Bit-Depth Reduction
- Authors: Ayush Chaudhary, Sisir Doppalpudi
- Abstract summary: Recent advances in adversarial defenses have demonstrated strong robustness improvements.
However, the computational overhead they introduce, ranging from 4x to 22x, presents significant challenges for production systems processing millions of samples daily.
We propose a novel framework that combines Trust-Raw Override (TRO) with Confidence-Adaptive Bit-Depth Reduction (CABDR).
Our approach achieves 1.76x computational overhead, a 2.3x improvement over state-of-the-art smoothing defenses.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The deployment of robust malware detection systems in big data environments requires careful consideration of both security effectiveness and computational efficiency. While recent advances in adversarial defenses have demonstrated strong robustness improvements, they often introduce computational overhead ranging from 4x to 22x, which presents significant challenges for production systems processing millions of samples daily. In this work, we propose a novel framework that combines Trust-Raw Override (TRO) with Confidence-Adaptive Bit-Depth Reduction (CABDR) to explicitly optimize the trade-off between adversarial robustness and computational efficiency. Our approach leverages adaptive confidence-based mechanisms to selectively apply defensive measures, achieving 1.76x computational overhead, a 2.3x improvement over state-of-the-art smoothing defenses. Through comprehensive evaluation on the EMBER v2 dataset comprising 800K samples, we demonstrate that our framework maintains 91% clean accuracy while reducing attack success rates to 31-37% across multiple attack types, with particularly strong performance against optimization-based attacks such as C&W (Carlini-Wagner; 48.8% reduction). The framework achieves throughput of up to 1.26 million samples per second (measured on pre-extracted EMBER features with no runtime feature extraction), validated across 72 production configurations with statistical significance (5 independent runs, 95% confidence intervals, p < 0.01). Our results suggest that practical adversarial robustness in production environments requires explicit optimization of the efficiency-robustness trade-off, providing a viable path for organizations to deploy robust defenses without prohibitive infrastructure costs.
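The abstract describes the TRO + CABDR decision flow, but no reference code accompanies this listing; the sketch below is a minimal, hypothetical rendering of that flow, assuming a scikit-learn-style `predict_proba` classifier over pre-extracted EMBER feature vectors. The trust threshold, bit schedule, and confidence-to-bits mapping are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def reduce_bit_depth(x: np.ndarray, bits: int) -> np.ndarray:
    """Quantize a feature vector to 2**bits levels (feature-squeezing style)."""
    levels = 2 ** bits - 1
    x01 = (x - x.min()) / (x.max() - x.min() + 1e-12)  # rescale to [0, 1]
    return np.round(x01 * levels) / levels

def classify(model, x: np.ndarray,
             trust_threshold: float = 0.95,  # assumed cutoff for the raw path
             bit_schedule=(8, 4, 2)):        # assumed: lower conf -> fewer bits
    """Sketch of Trust-Raw Override + Confidence-Adaptive Bit-Depth Reduction."""
    raw = model.predict_proba(x[None, :])[0]
    conf = float(raw.max())
    if conf >= trust_threshold:              # trusted raw override:
        return int(raw.argmax()), conf      # skip the defense entirely
    # Confidence-adaptive path: the less confident the raw model, the more
    # aggressive the bit-depth reduction applied before re-scoring.
    idx = min(int((trust_threshold - conf) / trust_threshold * len(bit_schedule)),
              len(bit_schedule) - 1)
    squeezed = reduce_bit_depth(x, bit_schedule[idx])
    prob = model.predict_proba(squeezed[None, :])[0]
    return int(prob.argmax()), float(prob.max())
```

Because the override path returns immediately for confident samples, the average per-sample cost approaches the raw model's cost as the confident fraction grows, which is consistent with the low average overhead the abstract reports.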
Related papers
- Efficient Jailbreak Mitigation Using Semantic Linear Classification in a Multi-Staged Pipeline [1.2802720336459552]
Prompt injection and jailbreaking attacks pose persistent security challenges to large language model (LLM)-based systems.
We present an efficient and systematically evaluated defense architecture that mitigates these threats through a lightweight, multi-stage pipeline.
arXiv Detail & Related papers (2025-12-22T04:00:35Z) - MicroProbe: Efficient Reliability Assessment for Foundation Models with Minimal Data [0.0]
MicroProbe achieves comprehensive reliability assessment using only 100 strategically selected probe examples.
We demonstrate that MicroProbe achieves 23.5% higher composite reliability scores compared to random sampling baselines.
MicroProbe completes reliability assessment with 99.9% statistical power while representing a 90% reduction in assessment cost and maintaining 95% of traditional method coverage.
arXiv Detail & Related papers (2025-11-30T13:01:57Z) - Detecting and Preventing Latent Risk Accumulation in High-Performance Software Systems [0.0]
Caches achieving high hit rates can obscure fragility, hiding bottlenecks until cache failures trigger 100x load amplification and 99% cascading collapse.
Current reliability engineering focuses on reactive incident response rather than proactive detection of optimization-induced vulnerabilities.
This paper presents the first comprehensive framework for systematic latent risk detection, prevention, and optimization.
arXiv Detail & Related papers (2025-10-04T07:22:39Z) - Efficient Private Inference Based on Helper-Assisted Malicious Security Dishonest Majority MPC [5.797285315996385]
We propose a novel, three-layer private inference framework based on the Helper-Assisted MSDM model.
The framework achieves up to a 2.4-25.7x speedup in LAN and a 1.3-9.5x acceleration in WAN over the state-of-the-art MSDM frameworks.
arXiv Detail & Related papers (2025-07-13T12:24:02Z) - Sampling-aware Adversarial Attacks Against Large Language Models [52.30089653615172]
Existing adversarial attacks typically target harmful responses in single-point greedy generations.
We show that, for the goal of eliciting harmful responses, it is beneficial to account for repeated sampling of model outputs during attack prompt optimization.
Integrating sampling into existing attacks boosts success rates by up to 37% and improves efficiency by up to two orders of magnitude.
arXiv Detail & Related papers (2025-07-06T16:13:33Z) - T2V-OptJail: Discrete Prompt Optimization for Text-to-Video Jailbreak Attacks [67.91652526657599]
We formalize the T2V jailbreak attack as a discrete optimization problem and propose a joint objective-based optimization framework, called T2V-OptJail.
We conduct large-scale experiments on several T2V models, covering both open-source models and real commercial closed-source models.
The proposed method improves attack success rate by 11.4% and 10.0% over the existing state-of-the-art method.
arXiv Detail & Related papers (2025-05-10T16:04:52Z) - AegisLLM: Scaling Agentic Systems for Self-Reflective Defense in LLM Security [74.22452069013289]
AegisLLM is a cooperative multi-agent defense against adversarial attacks and information leakage.
We show that scaling agentic reasoning systems at test time substantially enhances robustness without compromising model utility.
Comprehensive evaluations across key threat scenarios, including unlearning and jailbreaking, demonstrate the effectiveness of AegisLLM.
arXiv Detail & Related papers (2025-04-29T17:36:05Z) - Making Every Step Effective: Jailbreaking Large Vision-Language Models Through Hierarchical KV Equalization [74.78433600288776]
HKVE (Hierarchical Key-Value Equalization) is an innovative jailbreaking framework that selectively accepts gradient optimization results.
We show that HKVE substantially outperforms existing methods, by margins of 20.43%, 21.01% and 26.43% respectively.
arXiv Detail & Related papers (2025-03-14T17:57:42Z) - Beyond Confidence: Adaptive Abstention in Dual-Threshold Conformal Prediction for Autonomous System Perception [0.4124847249415279]
Safety-critical perception systems require reliable uncertainty quantification and principled abstention mechanisms to maintain safety.
We present a novel dual-threshold conformalization framework that provides statistically-guaranteed uncertainty estimates while enabling selective prediction in high-risk scenarios; a minimal sketch of such a dual-threshold abstention rule follows this list.
arXiv Detail & Related papers (2025-02-11T04:45:31Z) - G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering [116.4277292854053]
Federated Learning (FL) offers collaborative model training without data sharing.
FL is vulnerable to backdoor attacks, where poisoned model weights lead to compromised system integrity.
We present G$^2$uardFL, a protective framework that reinterprets the identification of malicious clients as an attributed graph clustering problem.
arXiv Detail & Related papers (2023-06-08T07:15:04Z) - Conservative Prediction via Data-Driven Confidence Minimization [70.93946578046003]
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes confidence on an uncertainty dataset.
arXiv Detail & Related papers (2023-06-08T07:05:36Z) - Robust Trajectory Prediction against Adversarial Attacks [84.10405251683713]
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving systems.
These methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions.
In this work, we identify two key ingredients to defend trajectory prediction models against adversarial attacks.
arXiv Detail & Related papers (2022-07-29T22:35:05Z) - A Data Augmentation-based Defense Method Against Adversarial Attacks in Neural Networks [7.943024117353317]
We develop a lightweight defense method that can efficiently invalidate full whitebox adversarial attacks while remaining compatible with real-life constraints.
Our model can withstand an advanced adaptive attack, namely BPDA with 50 rounds, and still helps the target model maintain an accuracy around 80%, while constraining the attack success rate to almost zero.
arXiv Detail & Related papers (2020-07-30T08:06:53Z)
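As referenced in the dual-threshold conformal prediction entry above, here is a minimal sketch of a dual-threshold abstention rule. The quantile-based calibration stands in for that paper's conformalization procedure and is an assumption for illustration only.

```python
import numpy as np

def calibrate_thresholds(cal_conf: np.ndarray,
                         alpha_hi: float = 0.1,
                         alpha_lo: float = 0.5) -> tuple[float, float]:
    """Pick accept/abstain cutoffs from held-out confidence scores.
    Plain quantiles stand in for the paper's conformal calibration."""
    accept_t = float(np.quantile(cal_conf, 1 - alpha_hi))
    abstain_t = float(np.quantile(cal_conf, alpha_lo))
    return accept_t, abstain_t

def dual_threshold_decision(probs: np.ndarray, accept_t: float, abstain_t: float):
    """Predict above the high threshold, abstain below the low one,
    and defer the uncertain band in between for human review."""
    conf = float(probs.max())
    if conf >= accept_t:
        return int(probs.argmax())  # confident: emit the label
    if conf <= abstain_t:
        return None                 # abstain outright
    return "defer"                  # uncertain band: selective prediction
```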