LLM for SoC Security: A Paradigm Shift
- URL: http://arxiv.org/abs/2310.06046v1
- Date: Mon, 9 Oct 2023 18:02:38 GMT
- Title: LLM for SoC Security: A Paradigm Shift
- Authors: Dipayan Saha, Shams Tarek, Katayoon Yahyaei, Sujan Kumar Saha, Jingbo
Zhou, Mark Tehranipoor, Farimah Farahmandi
- Abstract summary: Large Language Models (LLMs) are celebrated for their remarkable success in natural language understanding, advanced reasoning, and program synthesis tasks.
This paper offers an in-depth analysis of existing works, showcases practical case studies, demonstrates comprehensive experiments, and provides useful prompting guidelines.
- Score: 10.538841854672786
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the ubiquity and complexity of system-on-chip (SoC) designs increase
across electronic devices, the task of incorporating security into an SoC
design flow poses significant challenges. Existing security solutions are
inadequate to provide effective verification of modern SoC designs due to their
limitations in scalability, comprehensiveness, and adaptability. On the other
hand, Large Language Models (LLMs) are celebrated for their remarkable success
in natural language understanding, advanced reasoning, and program synthesis
tasks. Recognizing an opportunity, our research delves into leveraging the
emergent capabilities of Generative Pre-trained Transformers (GPTs) to address
the existing gaps in SoC security, aiming for a more efficient, scalable, and
adaptable methodology. By integrating LLMs into the SoC security verification
paradigm, we open a new frontier of possibilities and challenges to ensure the
security of increasingly complex SoCs. This paper offers an in-depth analysis
of existing works, showcases practical case studies, demonstrates comprehensive
experiments, and provides useful prompting guidelines. We also present the
achievements, prospects, and challenges of employing LLM in different SoC
security verification tasks.
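To make the proposed direction concrete, below is a minimal sketch of prompting an LLM to review RTL for security weaknesses. The `query_llm` stub, the prompt wording, and the Verilog snippet are illustrative assumptions, not the paper's actual methodology or prompting guidelines.

```python
# Minimal sketch: asking an LLM to review a Verilog design for security
# weaknesses. `query_llm` is a hypothetical stand-in for any chat API.

RTL_SNIPPET = """
module lock(input clk, input [7:0] key, output reg unlocked);
  always @(posedge clk)
    if (key == 8'hA5)          // hard-coded unlock key (cf. CWE-798)
      unlocked <= 1'b1;
endmodule
"""

PROMPT_TEMPLATE = (
    "You are a hardware security reviewer. Examine the Verilog below, "
    "list any security weaknesses (e.g., hard-coded credentials, debug "
    "backdoors, improper access control), and cite the relevant CWE.\n\n"
    "{rtl}"
)

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's chat API here."""
    return "(model response would appear here)"

if __name__ == "__main__":
    print(query_llm(PROMPT_TEMPLATE.format(rtl=RTL_SNIPPET)))
```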
Related papers
- Enhancing Enterprise Security with Zero Trust Architecture [0.0]
Zero Trust Architecture (ZTA) represents a transformative approach to modern cybersecurity.
ZTA shifts the security paradigm by assuming that no user, device, or system can be trusted by default.
This paper explores the key components of ZTA, such as identity and access management (IAM), micro-segmentation, continuous monitoring, and behavioral analytics.
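As a toy illustration of the deny-by-default principle (a minimal sketch with assumed signals and thresholds, not a construct from the paper), an access decision passes only if every zero-trust signal explicitly checks out:

```python
# Toy zero-trust "deny by default" gate: every request must pass explicit
# identity, device, segmentation, and behavior checks; nothing is trusted
# implicitly. All fields and the threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Request:
    user_verified: bool      # IAM: MFA-backed identity assertion
    device_compliant: bool   # device posture: patched, managed
    segment_allowed: bool    # micro-segmentation policy for the target
    risk_score: float        # behavioral analytics, 0 (benign) to 1

def authorize(req: Request, risk_threshold: float = 0.3) -> bool:
    # Deny unless *every* signal explicitly passes.
    return (req.user_verified
            and req.device_compliant
            and req.segment_allowed
            and req.risk_score < risk_threshold)

print(authorize(Request(True, True, True, 0.1)))   # True: all checks pass
print(authorize(Request(True, True, False, 0.1)))  # False: wrong segment
```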
arXiv Detail & Related papers (2024-10-23T21:53:16Z)
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- Safety through Permissibility: Shield Construction for Fast and Safe Reinforcement Learning [57.84059344739159]
"Shielding" is a popular technique to enforce safety inReinforcement Learning (RL)
We propose a new permissibility-based framework to deal with safety and shield construction.
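The general idea of shielding is simple to sketch (a generic version, not the paper's permissibility-based construction): intercept each proposed action and substitute a safe fallback whenever the action is not permissible in the current state.

```python
# Generic shielding sketch: the shield overrides any proposed action that
# the permissibility predicate rejects. `is_permissible` and the fallback
# policy are placeholders, not the paper's permissibility construction.

def shielded_policy(policy, is_permissible, fallback):
    def act(state):
        action = policy(state)
        return action if is_permissible(state, action) else fallback(state)
    return act

# Toy example: a 1-D agent must never step below position 0.
policy = lambda s: -1                      # base policy: always move left
is_permissible = lambda s, a: s + a >= 0   # next state must stay safe
fallback = lambda s: 0                     # safe no-op action

act = shielded_policy(policy, is_permissible, fallback)
print(act(5))  # -1: the step is permissible, so it passes through
print(act(0))  # 0: the shield overrides the unsafe step
```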
arXiv Detail & Related papers (2024-05-29T18:00:21Z)
- SoK: Comprehensive Security Overview, Challenges, and Future Directions of Voice-Controlled Systems [10.86045604075024]
The integration of Voice Control Systems (VCS) into smart devices accentuates the importance of their security.
Current research has uncovered numerous vulnerabilities in VCS, presenting significant risks to user privacy and security.
This study introduces a hierarchical model structure for VCS, providing a novel lens for categorizing and analyzing existing literature in a systematic manner.
We classify attacks based on their technical principles and thoroughly evaluate various attributes, such as their methods, targets, vectors, and behaviors.
arXiv Detail & Related papers (2024-05-27T12:18:46Z)
- Evolutionary Large Language Models for Hardware Security: A Comparative Survey [0.4642370358223669]
This study explores the seeds of integrating Large Language Models (LLMs) into register transfer level (RTL) designs.
LLMs can be harnessed to automatically rectify security-relevant vulnerabilities inherent in hardware (HW) designs.
arXiv Detail & Related papers (2024-04-25T14:42:12Z)
- SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models [107.82336341926134]
SALAD-Bench is a safety benchmark specifically designed for evaluating Large Language Models (LLMs).
It transcends conventional benchmarks through its large scale, rich diversity, intricate taxonomy spanning three levels, and versatile functionalities.
arXiv Detail & Related papers (2024-02-07T17:33:54Z)
- Building Guardrails for Large Language Models [19.96292920696796]
Guardrails, which filter the inputs or outputs of LLMs, have emerged as a core safeguarding technology.
This position paper takes a deep look at current open-source solutions (Llama Guard, Nvidia NeMo, Guardrails AI) and discusses the challenges and the road towards building more complete solutions.
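At its core a guardrail is a filter wrapped around the model call; the keyword rules below are crude illustrative placeholders, whereas real systems such as Llama Guard use learned safety classifiers.

```python
# Minimal guardrail sketch: screen the prompt before the model call and
# the completion after it. The regex rules are illustrative placeholders.
import re

BLOCKED_PATTERNS = [r"\bmake a bomb\b", r"\bcredit card numbers\b"]

def violates_policy(text: str) -> bool:
    return any(re.search(p, text, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def guarded_generate(model, prompt: str) -> str:
    if violates_policy(prompt):                 # input guardrail
        return "Request declined by input policy."
    completion = model(prompt)
    if violates_policy(completion):             # output guardrail
        return "Response withheld by output policy."
    return completion

echo_model = lambda p: f"echo: {p}"  # stand-in model for the sketch
print(guarded_generate(echo_model, "How do I make a bomb?"))
```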
arXiv Detail & Related papers (2024-02-02T16:35:00Z)
- The Art of Defending: A Systematic Evaluation and Analysis of LLM Defense Strategies on Safety and Over-Defensiveness [56.174255970895466]
Large Language Models (LLMs) play an increasingly pivotal role in natural language processing applications.
This paper presents the Safety and Over-Defensiveness Evaluation (SODE) benchmark.
arXiv Detail & Related papers (2023-12-30T17:37:06Z)
- Cybersecurity in Motion: A Survey of Challenges and Requirements for Future Test Facilities of CAVs [11.853500347907826]
Cooperative Intelligent Transportation Systems (C-ITSs) are at the forefront of the evolution toward connected and automated vehicles (CAVs).
This paper presents an envisaged Cybersecurity Centre of Excellence (CSCE) designed to bolster research, testing, and evaluation of the cybersecurity of C-ITSs.
arXiv Detail & Related papers (2023-12-22T13:42:53Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
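One way to picture the safety-projection half of such a method (a toy sketch with a made-up constraint, not USL itself) is to nudge a proposed action until the state-wise safety cost becomes non-positive:

```python
# Toy safety-projection sketch: adjust a proposed action until a
# state-wise safety cost c(s, a) <= 0 holds. Plain gradient steps stand
# in for the projection; the constraint is made up for illustration and
# is not the paper's Unrolling Safety Layer.

def safety_cost(state, action):
    # Safe iff |action| <= 1 - |state| (an illustrative constraint).
    return abs(action) - (1.0 - abs(state))

def project_action(state, action, lr=0.25, iters=100):
    a = float(action)
    for _ in range(iters):
        if safety_cost(state, a) <= 0:   # already safe: stop
            break
        a -= lr if a > 0 else -lr        # gradient step on |a|
    return a

print(project_action(state=0.5, action=2.0))  # shrinks the action to 0.5
```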
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
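The pointwise condition behind CBF-based controllers such as the one in the entry above is compact enough to check numerically: for a barrier h with h(x) >= 0 on the safe set and a class-K function alpha, a control u is admissible at x when h_dot(x, u) + alpha(h(x)) >= 0. The single-integrator dynamics and linear alpha below are assumptions for illustration, not the paper's setup.

```python
# Toy CBF feasibility check for a single integrator x_dot = u with safe
# set {x : h(x) = 1 - x^2 >= 0}. A control u is pointwise admissible if
# h_dot(x, u) + alpha * h(x) >= 0. The system and the linear class-K
# term alpha * h are illustrative assumptions.

def h(x):
    return 1.0 - x * x           # barrier: safe iff |x| <= 1

def h_dot(x, u):
    return -2.0 * x * u          # dh/dt = (dh/dx) * x_dot = -2x * u

def cbf_admissible(x, u, alpha=1.0):
    return h_dot(x, u) + alpha * h(x) >= 0.0

print(cbf_admissible(x=0.9, u=1.0))   # False: heads to the boundary too fast
print(cbf_admissible(x=0.9, u=0.05))  # True: slow enough to stay certifiable
```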