MIRAGE: Multi-Binary Image Risk Assessment with Attack Graph Employment
- URL: http://arxiv.org/abs/2311.03565v1
- Date: Mon, 6 Nov 2023 22:07:04 GMT
- Title: MIRAGE: Multi-Binary Image Risk Assessment with Attack Graph Employment
- Authors: David Tayouri, Telem Nachum, Asaf Shabtai
- Abstract summary: An attack graph (AG) can be used to assess and visually display firmware's risks.
We propose MIRAGE, a framework for identifying potential attack vectors and vulnerable interactions between firmware binaries.
- Score: 10.363703258465407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Attackers can exploit known vulnerabilities to infiltrate a device's firmware and the communication between firmware binaries in order to move between them. To improve cybersecurity, organizations must identify and mitigate the risks of the firmware they use. An attack graph (AG) can be used to assess and visually display firmware's risks by organizing the identified vulnerabilities into attack paths composed of sequences of actions attackers may perform to compromise firmware images. In this paper, we utilize AGs for firmware risk assessment. We propose MIRAGE (Multi-binary Image Risk Assessment with Attack Graph Employment), a framework for identifying potential attack vectors and vulnerable interactions between firmware binaries; MIRAGE accomplishes this by generating AGs for firmware inter-binary communication. The use cases of the proposed firmware AG generation framework include the identification of risky external interactions, supply chain risk assessment, and security analysis with digital twins. To evaluate the MIRAGE framework, we collected a dataset of 703 firmware images. We also propose a model for examining the risks of firmware binaries, demonstrate the model's implementation on the dataset of firmware images, and list the riskiest binaries.
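The abstract describes the approach without implementation details; purely as an illustration of how inter-binary attack paths can be organized into an attack graph, here is a minimal Python sketch using networkx. The binaries, communication channels, and CVE identifiers are hypothetical placeholders, not data from MIRAGE.

```python
# Minimal sketch of a firmware attack graph in the spirit of MIRAGE.
# Binaries, channels, and CVE identifiers are hypothetical examples.
import networkx as nx

ag = nx.DiGraph()

# Nodes: firmware binaries; edges: exploitable inter-binary communication.
binaries = ["httpd", "upnpd", "busybox", "update_agent"]
ag.add_nodes_from(binaries)

# (source, target, vulnerability enabling the hop) -- illustrative only.
edges = [
    ("httpd", "upnpd", {"cve": "CVE-XXXX-0001", "channel": "unix socket"}),
    ("upnpd", "busybox", {"cve": "CVE-XXXX-0002", "channel": "shell exec"}),
    ("busybox", "update_agent", {"cve": "CVE-XXXX-0003", "channel": "IPC"}),
]
ag.add_edges_from(edges)

# An attack path: a chain of exploitable hops from an entry point to a target.
for path in nx.all_simple_paths(ag, source="httpd", target="update_agent"):
    hops = [ag.edges[a, b]["cve"] for a, b in zip(path, path[1:])]
    print(" -> ".join(path), "via", hops)
```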
Related papers
- VII: Visual Instruction Injection for Jailbreaking Image-to-Video Generation Models [57.128876964730644]
Image-to-Video (I2V) generation models, which condition video generation on reference images, have shown emerging visual instruction-following capability.
We propose Visual Instruction Injection (VII), a training-free and transferable jailbreaking framework that disguises the malicious intent of unsafe text prompts as benign visual instructions in the safe reference image.
VII achieves Attack Success Rates of up to 83.5% while reducing Refusal Rates to near zero, significantly outperforming existing baselines.
arXiv Detail & Related papers (2026-02-24T15:20:01Z)
- ORCA -- An Automated Threat Analysis Pipeline for O-RAN Continuous Development [57.61878484176942]
Open-Radio Access Network (O-RAN) integrates numerous software components in a cloud-like deployment, opening the radio access network to previously unconsidered security threats.
Current vulnerability assessment practices often rely on manual, labor-intensive, and subjective investigations, leading to inconsistencies in the threat analysis.
We propose an automated pipeline that leverages Natural Language Processing (NLP) to minimize human intervention and associated biases.
arXiv Detail & Related papers (2026-01-20T07:31:59Z)
- Automated SBOM-Driven Vulnerability Triage for IoT Firmware: A Lightweight Pipeline for Risk Prioritization [0.0]
This paper presents a lightweight, automated pipeline designed to extract file systems from Linux-based IoT firmware.
It generates a comprehensive Software Bill of Materials, maps identified components to known vulnerabilities, and applies a multi-factor triage scoring model.
We describe the architecture, the normalization challenges of embedded Linux, and a scoring methodology intended to reduce alert fatigue.
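The summary does not specify the paper's scoring factors or weights; the toy Python sketch below only shows what a multi-factor triage score of this general kind might look like. All factors, weights, and CVE identifiers are illustrative assumptions.

```python
# Toy multi-factor triage score for SBOM-derived findings. The factors and
# weights are illustrative assumptions, not the paper's actual model.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float           # base severity, 0-10
    exploit_public: bool  # public exploit code exists
    network_facing: bool  # component reachable from the network
    fix_available: bool   # a patched version exists

def triage_score(f: Finding) -> float:
    """Combine base severity with exploitability/exposure context (0-10)."""
    score = f.cvss
    score *= 1.3 if f.exploit_public else 1.0
    score *= 1.2 if f.network_facing else 0.8
    score *= 1.1 if f.fix_available else 1.0  # rank actionable findings first
    return min(score, 10.0)

findings = [
    Finding("CVE-XXXX-1111", 9.8, True, True, True),
    Finding("CVE-XXXX-2222", 5.3, False, False, True),
]
for f in sorted(findings, key=triage_score, reverse=True):
    print(f.cve_id, round(triage_score(f), 2))
```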
arXiv Detail & Related papers (2026-01-04T00:09:01Z)
- AI Bill of Materials and Beyond: Systematizing Security Assurance through the AI Risk Scanning (AIRS) Framework [31.261980405052938]
Assurance for artificial intelligence (AI) systems remains fragmented across software supply-chain security, adversarial machine learning, and governance documentation.
This paper introduces the AI Risk Scanning (AIRS) Framework, a threat-model-based, evidence-generating framework designed to operationalize AI assurance.
arXiv Detail & Related papers (2025-11-16T16:10:38Z)
- Breaking SafetyCore: Exploring the Risks of On-Device AI Deployment [0.0]
An increasing number of AI models are deployed on-device.
This shift enhances privacy and reduces latency, but also introduces security risks distinct from traditional software.
In this article, we examine these risks through the real-world case study of SafetyCore, an Android system service incorporating sensitive image content detection.
arXiv Detail & Related papers (2025-09-08T06:53:13Z)
- AGENTSAFE: Benchmarking the Safety of Embodied Agents on Hazardous Instructions [76.74726258534142]
We propose AGENTSAFE, the first benchmark for evaluating the safety of embodied VLM agents under hazardous instructions.
AGENTSAFE simulates realistic agent-environment interactions within a simulation sandbox.
The benchmark includes 45 adversarial scenarios, 1,350 hazardous tasks, and 8,100 hazardous instructions.
arXiv Detail & Related papers (2025-06-17T16:37:35Z)
- Enhanced Consistency Bi-directional GAN (CBiGAN) for Malware Anomaly Detection [0.25163931116642785]
This paper introduces the application of the CBiGAN to the domain of malware anomaly detection.
We utilize several datasets, including both portable executable (PE) and Object Linking and Embedding (OLE) files.
We then evaluated our model against a diverse set of both PE and OLE files, including self-collected malicious executables from 214 malware families.
arXiv Detail & Related papers (2025-06-09T02:43:25Z)
- A Rusty Link in the AI Supply Chain: Detecting Evil Configurations in Model Repositories [9.095642871258455]
We present the first comprehensive study of malicious configurations on Hugging Face.
In particular, configuration files originally intended to set up models can be exploited to execute unauthorized code.
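As a hedged illustration of the threat class (not the paper's detection method), a naive scanner might walk a configuration file looking for code-execution indicators. The marker strings below are assumptions loosely inspired by common model-config fields.

```python
# Naive scanner for code-execution indicators in a model configuration.
# The flagged markers are illustrative assumptions; the paper's actual
# detection method is not reproduced here.
import json

SUSPICIOUS_MARKERS = ("auto_map", "trust_remote_code", "__import__", "os.system")

def flag_config(raw: str) -> list[str]:
    hits = []

    def walk(node, path="$"):
        if isinstance(node, dict):
            for k, v in node.items():
                if any(m in k for m in SUSPICIOUS_MARKERS):
                    hits.append(f"{path}.{k}")
                walk(v, f"{path}.{k}")
        elif isinstance(node, list):
            for i, v in enumerate(node):
                walk(v, f"{path}[{i}]")
        elif isinstance(node, str):
            if any(m in node for m in SUSPICIOUS_MARKERS):
                hits.append(f"{path} = {node!r}")

    walk(json.loads(raw))
    return hits

sample = '{"auto_map": {"AutoModel": "mod.Loader"}, "note": "os.system(...)"}'
print(flag_config(sample))  # flags the auto_map key and the os.system string
```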
arXiv Detail & Related papers (2025-05-02T07:16:20Z)
- Malware Detection in Docker Containers: An Image is Worth a Thousand Logs [10.132362061193954]
We introduce a method to identify compromised containers through machine learning analysis of their file systems.
We cast entire software containers into large RGB images via their tarball representations and propose to use established Convolutional Neural Network architectures in a streaming, patch-based manner.
Our method detects more malware and achieves higher F1 and Recall scores than individual VirusTotal engines and their ensembles, demonstrating its effectiveness and setting a new standard for identifying malware-compromised software containers.
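The byte-to-image step can be sketched in a few lines of Python. The image width, patch size, and stand-in random data below are arbitrary choices, and the CNN itself is left as a stub.

```python
# Sketch of the byte-to-image idea: render a container tarball's bytes as an
# RGB image and stream fixed-size patches to a classifier.
import numpy as np

def tarball_to_image(data: bytes, width: int = 1024) -> np.ndarray:
    """Pack raw bytes into an (H, width, 3) uint8 image, zero-padded."""
    buf = np.frombuffer(data, dtype=np.uint8)
    pad = (-len(buf)) % (width * 3)
    buf = np.concatenate([buf, np.zeros(pad, dtype=np.uint8)])
    return buf.reshape(-1, width, 3)

def iter_patches(img: np.ndarray, size: int = 224):
    """Yield non-overlapping size x size patches, row-major."""
    h, w, _ = img.shape
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield img[y:y + size, x:x + size]

data = np.random.default_rng(0).bytes(3 * 1024 * 512)  # stand-in for a tarball
image = tarball_to_image(data)
patches = list(iter_patches(image))
print(image.shape, len(patches))
# scores = [cnn(patch) for patch in patches]  # classifier left as a stub
```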
arXiv Detail & Related papers (2025-04-04T07:38:16Z)
- An Approach to Technical AGI Safety and Security [72.83728459135101]
We develop an approach to address the risk of harms severe enough to significantly harm humanity.
We focus on technical approaches to misuse and misalignment.
We briefly outline how these ingredients could be combined to produce safety cases for AGI systems.
arXiv Detail & Related papers (2025-04-02T15:59:31Z)
- How Robust Are Router-LLMs? Analysis of the Fragility of LLM Routing Capabilities [62.474732677086855]
Large language model (LLM) routing has emerged as a crucial strategy for balancing computational costs with performance.
We propose the DSC benchmark: Diverse, Simple, and Categorized, an evaluation framework that categorizes router performance across a broad spectrum of query types.
arXiv Detail & Related papers (2025-03-20T19:52:30Z)
- VMGuard: Reputation-Based Incentive Mechanism for Poisoning Attack Detection in Vehicular Metaverse [52.57251742991769]
Vehicular Metaverse guard (VMGuard) protects vehicular Metaverse systems from data poisoning attacks.
VMGuard implements a reputation-based incentive mechanism to assess the trustworthiness of participating SIoT devices.
Our system ensures that reliable SIoT devices, previously misclassified, are not barred from participating in future rounds of the market.
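The summary gives no formulas, but a reputation-based mechanism of this kind typically updates a per-device trust score and gates participation on a threshold. The update rule and values below are toy assumptions, not VMGuard's actual mechanism.

```python
# Toy reputation update in the spirit of a reputation-based incentive
# mechanism. The rule and thresholds are assumptions, not the paper's model.
def update_reputation(rep: float, report_correct: bool, lr: float = 0.1) -> float:
    """Move reputation toward 1 on correct reports, toward 0 otherwise."""
    target = 1.0 if report_correct else 0.0
    return rep + lr * (target - rep)

def can_participate(rep: float, threshold: float = 0.3) -> bool:
    # A soft threshold lets previously misclassified but reliable devices
    # recover and rejoin future market rounds.
    return rep >= threshold

rep = 0.5
for outcome in [False, True, True, True]:  # one bad report, then recovery
    rep = update_reputation(rep, outcome)
print(round(rep, 3), can_participate(rep))
```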
arXiv Detail & Related papers (2024-12-05T17:08:20Z)
- In-Context Experience Replay Facilitates Safety Red-Teaming of Text-to-Image Diffusion Models [97.82118821263825]
Text-to-image (T2I) models have shown remarkable progress, but their potential to generate harmful content remains a critical concern in the ML community.
We propose ICER, a novel red-teaming framework that generates interpretable and semantically meaningful problematic prompts.
Our work provides crucial insights for developing more robust safety mechanisms in T2I systems.
arXiv Detail & Related papers (2024-11-25T04:17:24Z)
- FirmRCA: Towards Post-Fuzzing Analysis on ARM Embedded Firmware with Efficient Event-based Fault Localization [37.29599884531106]
FirmRCA is a practical fault localization framework tailored specifically for embedded firmware.
We show that FirmRCA can effectively identify the root cause of crashing test cases within the top 10 instructions.
arXiv Detail & Related papers (2024-10-24T07:12:08Z)
- SecCodePLT: A Unified Platform for Evaluating the Security of Code GenAI [47.11178028457252]
We develop SecCodePLT, a unified and comprehensive evaluation platform for code GenAIs' risks.
For insecure code, we introduce a new methodology for data creation that combines experts with automatic generation.
For cyberattack helpfulness, we construct samples to prompt a model to generate actual attacks, along with dynamic metrics in our environment.
arXiv Detail & Related papers (2024-10-14T21:17:22Z)
- The Impact of SBOM Generators on Vulnerability Assessment in Python: A Comparison and a Novel Approach [56.4040698609393]
Software Bill of Materials (SBOM) has been promoted as a tool to increase transparency and verifiability in software composition.
Current SBOM generation tools often suffer from inaccuracies in identifying components and dependencies.
We propose PIP-sbom, a novel pip-inspired solution that addresses their shortcomings.
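PIP-sbom's design is not detailed in this summary; as a generic sketch of pip-environment SBOM generation, one can enumerate installed distributions with Python's importlib.metadata and emit a minimal component list. The output format below is only CycloneDX-like, not a compliant document.

```python
# Minimal pip-environment SBOM sketch using importlib.metadata. This shows
# the general idea only and is not PIP-sbom's actual implementation.
import json
from importlib.metadata import distributions

def build_sbom() -> dict:
    components = []
    for dist in distributions():
        components.append({
            "name": dist.metadata["Name"],
            "version": dist.version,
            "type": "library",
        })
    return {"bomFormat": "CycloneDX-like", "components": components}

print(json.dumps(build_sbom(), indent=2)[:500])  # preview the first entries
```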
arXiv Detail & Related papers (2024-09-10T10:12:37Z)
- "Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models [74.05368440735468]
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) by integrating external knowledge bases.
In this paper, we demonstrate a security threat where adversaries can exploit the openness of these knowledge bases.
arXiv Detail & Related papers (2024-06-26T05:36:23Z)
- Automating SBOM Generation with Zero-Shot Semantic Similarity [2.169562514302842]
A Software-Bill-of-Materials (SBOM) is a comprehensive inventory detailing a software application's components and dependencies.
We propose an automated method for generating SBOMs to prevent disastrous supply-chain attacks.
Our test results are compelling, demonstrating the model's strong performance in the zero-shot classification task.
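The paper's zero-shot model is not named in this summary; the sketch below substitutes a toy character-n-gram embedding to show the matching-by-similarity idea. A real system would use a pretrained sentence-embedding model instead.

```python
# Sketch of matching free-text component descriptions to catalog entries by
# embedding similarity. char_ngram_embed is a toy stand-in for a real
# sentence-embedding model.
import numpy as np

def char_ngram_embed(text: str, dim: int = 256, n: int = 3) -> np.ndarray:
    """Hash character n-grams into a fixed-size L2-normalized vector."""
    vec = np.zeros(dim)
    t = text.lower()
    for i in range(len(t) - n + 1):
        vec[hash(t[i:i + n]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def best_match(query: str, catalog: list[str]) -> tuple[str, float]:
    """Return the catalog entry with the highest cosine similarity."""
    q = char_ngram_embed(query)
    scores = [(c, float(q @ char_ngram_embed(c))) for c in catalog]
    return max(scores, key=lambda kv: kv[1])

catalog = ["openssl 1.1.1", "zlib 1.2.11", "busybox 1.31"]  # hypothetical
print(best_match("OpenSSL cryptography library", catalog))
```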
arXiv Detail & Related papers (2024-02-03T18:14:13Z)
- Eroding Trust In Aerial Imagery: Comprehensive Analysis and Evaluation Of Adversarial Attacks In Geospatial Systems [24.953306643091484]
We show how adversarial attacks can degrade confidence in geospatial systems.
We empirically show their threat to remote sensing systems using high-quality SpaceNet datasets.
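The paper's specific attacks are not described in this summary; as a generic example of the gradient-based perturbations studied in this literature, here is a textbook FGSM step on a toy logistic model, not the paper's method.

```python
# Minimal FGSM sketch on a toy logistic model, illustrating gradient-based
# adversarial perturbation of input features. Generic textbook attack only.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)  # toy linear classifier weights
x = rng.normal(size=64)  # "image" feature vector
y = 1.0                  # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the logistic loss w.r.t. the input is (p - y) * w.
p = sigmoid(w @ x)
grad_x = (p - y) * w

eps = 0.1
x_adv = x + eps * np.sign(grad_x)  # FGSM step: move along the loss gradient
print("clean prob:", sigmoid(w @ x), "adversarial prob:", sigmoid(w @ x_adv))
```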
arXiv Detail & Related papers (2023-12-12T16:05:12Z)
- Robust Semi-supervised Federated Learning for Images Automatic Recognition in Internet of Drones [57.468730437381076]
We present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition.
There are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules.
We propose an aggregation rule based on the frequency of the client's participation in training, namely the FedFreq aggregation rule.
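The exact FedFreq weighting is not given in this summary; a plausible sketch weights each client's update by its participation frequency, as below. The rule is an illustrative assumption.

```python
# Sketch of a frequency-weighted aggregation rule in the spirit of FedFreq:
# clients that participate more often receive higher aggregation weight.
import numpy as np

def fedfreq_aggregate(updates: list[np.ndarray], freqs: list[int]) -> np.ndarray:
    """Weighted average of client model updates by participation frequency."""
    weights = np.asarray(freqs, dtype=float)
    weights /= weights.sum()
    return np.average(np.stack(updates), axis=0, weights=weights)

# Three UAV clients with different participation counts (hypothetical).
updates = [np.full(4, v) for v in (1.0, 2.0, 3.0)]
global_update = fedfreq_aggregate(updates, freqs=[1, 2, 5])
print(global_update)  # leans toward the frequently participating client
```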
arXiv Detail & Related papers (2022-01-03T16:49:33Z)
- A Survey of Machine Learning Algorithms for Detecting Malware in IoT Firmware [0.0]
This paper employs a number of machine learning algorithms to classify IoT firmware, and the best-performing models are reported.
Deep learning approaches including Convolutional and Fully Connected Neural Networks are also explored.
arXiv Detail & Related papers (2021-11-03T17:55:51Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.