Toward Automated Potential Primary Asset Identification in Verilog Designs
- URL: http://arxiv.org/abs/2502.04648v1
- Date: Fri, 07 Feb 2025 04:17:25 GMT
- Title: Toward Automated Potential Primary Asset Identification in Verilog Designs
- Authors: Subroto Kumer Deb Nath, Benjamin Tan
- Abstract summary: Knowing the security assets in a design is fundamental to downstream security analyses. This paper proposes an automated approach for the initial identification of potential security assets in a Verilog design.
- Score: 4.526103806673449
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With greater design complexity, designers bear more responsibility for anticipating and mitigating security issues. As hardware provides the foundation of a secure system, we need tools and techniques that help engineers improve trust and address security concerns. Knowing the security assets in a design is fundamental to downstream security analyses, such as threat modeling, weakness identification, and verification. This paper proposes an automated approach for the initial identification of potential security assets in a Verilog design. Taking inspiration from manual asset identification methodologies, we analyze open-source hardware designs in three IP families and identify patterns and commonalities likely to indicate structural assets. Through iterative refinement, we provide a potential set of primary security assets and thus help to reduce the manual search space.
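The abstract does not spell out the pattern-matching mechanics, but the described approach lends itself to a simple lexical pre-screen over Verilog source. Below is a minimal Python sketch, assuming a regex-based scan of declarations against asset-indicative keywords; the keyword list, regexes, and function names are illustrative assumptions, not the authors' implementation.

```python
import re
import sys

# Hypothetical asset-indicative keywords, loosely modeled on the naming
# conventions that manual asset-identification guides look for; the
# paper's actual, iteratively refined pattern set is not reproduced here.
ASSET_KEYWORDS = ("key", "nonce", "seed", "iv", "secret",
                  "password", "rand", "digest", "state")

# Capture the body of simple Verilog declarations, e.g. "reg [127:0] round_key;"
DECL_RE = re.compile(r"\b(?:input|output|inout|reg|wire|logic)\b([^;]*);")
IDENT_RE = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")
SKIP = {"signed", "unsigned", "reg", "wire", "logic", "input", "output", "inout"}

def find_potential_assets(source: str):
    """Yield (identifier, keyword) pairs for declared signals whose
    names match an asset-indicative keyword."""
    source = re.sub(r"//[^\n]*", "", source)   # drop // line comments
    source = re.sub(r"\[[^\]]*\]", "", source)  # drop bit-range selects
    for decl in DECL_RE.finditer(source):
        for ident in IDENT_RE.findall(decl.group(1)):
            if ident in SKIP:
                continue
            for kw in ASSET_KEYWORDS:
                if kw in ident.lower():
                    yield ident, kw
                    break

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        for ident, kw in find_potential_assets(f.read()):
            print(f"potential primary asset: {ident} (matched '{kw}')")
```

A lexical pass like this can only seed the candidate set: substring matching produces false positives (e.g., `div` matching `iv`), which is where the paper's iterative refinement across IP families would come in to prune the list.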
Related papers
- Towards Trustworthy GUI Agents: A Survey [64.6445117343499]
This survey examines the trustworthiness of GUI agents in five critical dimensions.
We identify major challenges such as vulnerability to adversarial attacks and cascading failure modes in sequential decision-making.
As GUI agents become more widespread, establishing robust safety standards and responsible development practices is essential.
arXiv Detail & Related papers (2025-03-30T13:26:00Z) - Reasoning Under Threat: Symbolic and Neural Techniques for Cybersecurity Verification [0.0]
This survey presents a comprehensive overview of the role of automated reasoning in cybersecurity.
We examine state-of-the-art (SOTA) tools and frameworks, explore integrations with AI for neural-symbolic reasoning, and highlight critical research gaps.
The paper concludes with a set of well-grounded future research directions, aiming to foster the development of secure systems.
arXiv Detail & Related papers (2025-03-27T11:41:53Z) - Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts associated with jailbreak attempts.
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z) - Autonomous Identity-Based Threat Segmentation in Zero Trust Architectures [4.169915659794567]
Zero Trust Architectures (ZTA) fundamentally redefine network security by adopting a "trust nothing, verify everything" approach.
This research applies the proposed AI-driven, autonomous, identity-based threat segmentation in ZTA.
arXiv Detail & Related papers (2025-01-10T15:35:02Z) - EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models serving as the "brain" of EAI agents for high-level task planning have shown promising results.
However, deploying these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - Model-Driven Security Analysis of Self-Sovereign Identity Systems [2.5475486924467075]
We propose a model-driven security analysis framework for analyzing architectural patterns of SSI systems.
Our framework mechanizes a modeling language to formalize patterns and threats with security properties in temporal logic.
We present typical vulnerable patterns verified by SecureSSI.
arXiv Detail & Related papers (2024-06-02T05:44:32Z) - SoK: A Defense-Oriented Evaluation of Software Supply Chain Security [3.165193382160046]
We argue that the next stage of software supply chain security research and development will benefit greatly from a defense-oriented approach.
This paper introduces the AStRA model, a framework for representing fundamental software supply chain elements and their causal relationships.
arXiv Detail & Related papers (2024-05-23T18:53:48Z) - Evolutionary Large Language Models for Hardware Security: A Comparative Survey [0.4642370358223669]
This study explores the early-stage integration of Large Language Models (LLMs) into register transfer level (RTL) design.
LLMs can be harnessed to automatically rectify security-relevant vulnerabilities in hardware designs.
arXiv Detail & Related papers (2024-04-25T14:42:12Z) - Performance-lossless Black-box Model Watermarking [69.22653003059031]
We propose a branch backdoor-based model watermarking protocol to protect model intellectual property.
In addition, we analyze the potential threats to the protocol and provide a secure and feasible watermarking instance for language models.
arXiv Detail & Related papers (2023-12-11T16:14:04Z) - Leveraging Traceability to Integrate Safety Analysis Artifacts into the
Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)