BIDO: A Unified Approach to Address Obfuscation and Concept Drift Challenges in Image-based Malware Detection
- URL: http://arxiv.org/abs/2509.03807v1
- Date: Thu, 04 Sep 2025 01:48:03 GMT
- Title: BIDO: A Unified Approach to Address Obfuscation and Concept Drift Challenges in Image-based Malware Detection
- Authors: Junhui Li, Chengbin Feng, Zhiwei Yang, Qi Mo, Wei Wang
- Abstract summary: BIDO is a hybrid image-based malware detector designed to enhance robustness against both obfuscation and concept drift simultaneously. First, to improve the discriminative power of image features, we introduce a local feature selection module. Second, to enhance feature robustness, we model pairwise cross-modal dependencies in an outer product space. Third, to ensure feature compactness, we design a learnable metric that pulls samples with identical labels closer.
- Score: 15.388728305777908
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: To identify malicious Android applications, various malware detection techniques have been proposed. Among them, image-based approaches are considered promising alternatives due to their efficiency and scalability. Recent studies have reported that these approaches suffer significant performance declines when confronted with obfuscation or concept drift. However, existing solutions often treat these two challenges as separate problems and offer independent solutions. These techniques overlook the fact that both challenges share a common statistical root, out-of-distribution data, and research from this perspective remains limited. In response, we propose BIDO, a hybrid image-based malware detector designed to enhance robustness against both obfuscation and concept drift simultaneously. First, to improve the discriminative power of image features, we introduce a local feature selection module that identifies informative subregions within malware images. Second, to enhance feature robustness, we model pairwise cross-modal dependencies in an outer product space, enabling the extraction of stable co-occurrence patterns. Third, to ensure feature compactness, we design a learnable metric that pulls samples with identical labels closer while pushing apart those with different labels, regardless of obfuscation or concept drift. Extensive experiments on real-world datasets demonstrate that BIDO significantly outperforms existing baselines, achieving higher robustness against both concept drift and obfuscation. The source code is available at: https://github.com/whatishope/BIDO/.
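The second and third modules in the abstract can be loosely sketched in NumPy. This is a hypothetical illustration, not the authors' implementation: the function names, the toy feature dimensions, and the contrastive form chosen for the learnable metric are all assumptions made here for clarity.

```python
import numpy as np

def outer_product_fusion(f_img, f_aux):
    """Model pairwise cross-modal dependencies as an outer product of
    two modality feature vectors, flattened into a co-occurrence vector."""
    return np.outer(f_img, f_aux).ravel()

def pairwise_metric_loss(z1, z2, same_label, margin=1.0):
    """Contrastive-style stand-in for the learnable metric: pull
    same-label embeddings together, push different-label embeddings
    at least `margin` apart."""
    d = np.linalg.norm(z1 - z2)
    if same_label:
        return d ** 2
    return max(0.0, margin - d) ** 2

# Toy features for two modalities of one sample.
f_img = np.array([0.2, 0.8])
f_aux = np.array([1.0, 0.5, 0.1])
fused = outer_product_fusion(f_img, f_aux)
print(fused.shape)  # (6,)

# A same-label pair that is already close yields a small loss;
# a different-label pair that is too close yields a hinge penalty.
print(round(pairwise_metric_loss(np.zeros(4), 0.1 * np.ones(4), True), 4))
print(round(pairwise_metric_loss(np.zeros(4), 0.1 * np.ones(4), False), 4))
```

In practice the metric would be parameterized (e.g., a learned projection before the distance) and trained jointly with the feature extractor; the hinge form above only conveys the pull-together/push-apart objective the abstract describes.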
Related papers
- Diversity over Uniformity: Rethinking Representation in Generated Image Detection [22.020742109848317]
We argue that reliable generated-image detection should not depend on a single decision path but should preserve multiple judgment perspectives. We propose an anti-feature-collapse learning framework that filters task-irrelevant components and suppresses excessive overlap among different forgery cues in the representation space. This design maintains diverse and complementary evidence within the model, reduces reliance on a small set of salient cues, and enhances robustness under unseen generative settings.
arXiv Detail & Related papers (2026-02-28T15:42:12Z) - DMLDroid: Deep Multimodal Fusion Framework for Android Malware Detection with Resilience to Code Obfuscation and Adversarial Perturbations [1.3792857294744785]
We propose DMLDroid, an Android malware detection framework based on multimodal fusion. We conduct exhaustive experiments independently on each feature, as well as in combination, using different fusion strategies. Our findings highlight the benefits of multimodal fusion in improving both detection accuracy and robustness against evolving Android malware threats.
arXiv Detail & Related papers (2025-09-14T09:32:27Z) - Image Can Bring Your Memory Back: A Novel Multi-Modal Guided Attack against Image Generation Model Unlearning [28.15997901023315]
Recall is a novel adversarial framework designed to compromise the robustness of unlearned IGMs. It consistently outperforms existing baselines in terms of adversarial effectiveness, computational efficiency, and semantic fidelity with the original prompt. These findings reveal critical vulnerabilities in current unlearning mechanisms and underscore the need for more robust solutions.
arXiv Detail & Related papers (2025-07-09T02:59:01Z) - UGG-ReID: Uncertainty-Guided Graph Model for Multi-Modal Object Re-Identification [26.770271366177603]
We propose a robust approach named Uncertainty-Guided Graph model for multi-modal object re-identification (UGG-ReID). UGG-ReID is designed to mitigate noise interference and facilitate effective multi-modal fusion. Experimental results show that the proposed method achieves excellent performance on all datasets.
arXiv Detail & Related papers (2025-07-07T03:41:08Z) - Deep Learning Fusion For Effective Malware Detection: Leveraging Visual Features [12.431734971186673]
We investigate the power of fusing Convolutional Neural Network models trained on different modalities of a malware executable.
We propose a novel multimodal fusion algorithm, leveraging three different visual malware features.
The proposed strategy has a detection rate of 1.00 (on a scale of 0-1) in identifying malware in the given dataset.
arXiv Detail & Related papers (2024-05-23T08:32:40Z) - Latent Diffusion Models for Attribute-Preserving Image Anonymization [4.080920304681247]
This paper presents the first approach to image anonymization based on Latent Diffusion Models (LDMs).
We propose two LDMs for this purpose: CAFLaGE-Base exploits a combination of pre-trained ControlNets, and a new controlling mechanism designed to increase the distance between the real and anonymized images.
arXiv Detail & Related papers (2024-03-21T19:09:21Z) - Cross-Modality Perturbation Synergy Attack for Person Re-identification [66.48494594909123]
Cross-modality person re-identification (ReID) systems are based on RGB images. The main challenge in cross-modality ReID lies in effectively dealing with visual differences between modalities. Existing attack methods have primarily focused on the characteristics of the visible image modality. This study proposes a universal perturbation attack specifically designed for cross-modality ReID.
arXiv Detail & Related papers (2024-01-18T15:56:23Z) - PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant
Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances the robustness, with gains of 15.3% mIOU, compared with advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z) - Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection(VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z) - Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth
Uncertainty Learning [54.15303628138665]
Face anti-spoofing (FAS) plays a vital role in preventing face recognition systems from presentation attacks.
Existing face anti-spoofing datasets lack diversity due to the insufficient identity and insignificant variance.
We propose the Dual Spoof Disentanglement Generation framework to tackle this challenge by "anti-spoofing via generation".
arXiv Detail & Related papers (2021-12-01T15:36:59Z) - Robust Data Hiding Using Inverse Gradient Attention [82.73143630466629]
In the data hiding task, each pixel of cover images should be treated differently since they have divergent tolerabilities.
We propose a novel deep data hiding scheme with Inverse Gradient Attention (IGA), combining the ideas of adversarial learning and attention mechanisms.
Empirically, extensive experiments show that the proposed model outperforms the state-of-the-art methods on two prevalent datasets.
arXiv Detail & Related papers (2020-11-21T19:08:23Z) - Attentive WaveBlock: Complementarity-enhanced Mutual Networks for
Unsupervised Domain Adaptation in Person Re-identification and Beyond [97.25179345878443]
This paper proposes a novel lightweight module, the Attentive WaveBlock (AWB).
AWB can be integrated into the dual networks of mutual learning to enhance the complementarity and further depress noise in the pseudo-labels.
Experiments demonstrate that the proposed method achieves state-of-the-art performance with significant improvements on multiple UDA person re-identification tasks.
arXiv Detail & Related papers (2020-06-11T15:40:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.