SecureLLM: Using Compositionality to Build Provably Secure Language Models for Private, Sensitive, and Secret Data
- URL: http://arxiv.org/abs/2405.09805v2
- Date: Thu, 13 Jun 2024 16:54:51 GMT
- Title: SecureLLM: Using Compositionality to Build Provably Secure Language Models for Private, Sensitive, and Secret Data
- Authors: Abdulrahman Alabdulkareem, Christian M Arnold, Yerim Lee, Pieter M Feenstra, Boris Katz, Andrei Barbu
- Abstract summary: Traditional security mechanisms isolate resources from users who should not access them.
We reflect the compositional nature of such security mechanisms back into the structure of LLMs to build a provably secure LLM.
SecureLLM blends access security with fine-tuning methods.
We contribute both a difficult new compositional natural-language-to-SQL translation task and a new perspective on LLM security that allows models to be deployed to secure environments today.
- Score: 12.873248205613827
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional security mechanisms isolate resources from users who should not access them. We reflect the compositional nature of such security mechanisms back into the structure of LLMs to build a provably secure LLM that we term SecureLLM. Other approaches to LLM safety attempt to protect against bad actors or bad outcomes, but can do so only to a limited extent, making them inappropriate for sensitive data. SecureLLM blends access security with fine-tuning methods. Each data silo has a separate fine-tuning associated with it, and a user has access only to the collection of fine-tunings that they have permission for. The model must then perform compositional tasks at the intersection of those data silos using the combination of those individual fine-tunings. While applicable to any task, such as document QA or making API calls, in this work we concern ourselves with models that learn the layouts of new SQL databases to provide natural-language-to-SQL translation capabilities. Existing fine-tuning composition methods fail in this challenging environment, as they are not well-equipped to handle compositional tasks. Compositionality remains a challenge for LLMs. We contribute both a difficult new compositional natural-language-to-SQL translation task and a new perspective on LLM security that allows models to be deployed to secure environments today.
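The access model described in the abstract can be sketched in a few lines. This is a minimal illustration only, assuming LoRA-like additive weight deltas per silo; the names (`Silo`, `compose_for_user`) and the use of raw NumPy arrays are hypothetical and not taken from the paper:

```python
# Hypothetical sketch of SecureLLM-style access control: each data silo
# has its own fine-tuning (modeled here as an additive weight delta), and
# a user's model is assembled only from the deltas they are cleared for.
import numpy as np

class Silo:
    def __init__(self, name, delta):
        self.name = name    # e.g. "logistics_db"
        self.delta = delta  # fine-tuned weight update for this silo

def compose_for_user(base_weights, silos, permissions):
    """Return model weights containing only permitted silo fine-tunings."""
    weights = base_weights.copy()
    for silo in silos:
        if silo.name in permissions:   # the access check lives in ordinary
            weights += silo.delta      # security machinery, not in the model
    return weights

base = np.zeros((4, 4))
silos = [Silo("logistics_db", np.full((4, 4), 1.0)),
         Silo("personnel_db", np.full((4, 4), 2.0))]

# A user cleared only for "logistics_db" never receives the personnel
# fine-tuning, so that silo's data cannot leak out of their model.
w = compose_for_user(base, silos, permissions={"logistics_db"})
```

The point of the construction is that security reduces to which deltas are loaded, a conventional access-control decision, while the hard ML problem becomes making the combined deltas perform well on tasks spanning several silos at once.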
Related papers
- LEGO: Language Model Building Blocks [1.6124402884077915]
Large language models (LLMs) are essential in natural language processing (NLP) but are costly in data collection, pre-training, fine-tuning, and inference.
This paper proposes LEGO, a novel technique to extract SLMs from an LLM and recombine them.
Using state-of-the-art LLM pruning strategies, we can create task- and user-specific SLM building blocks that are efficient for fine-tuning and inference while also preserving user data privacy.
arXiv Detail & Related papers (2024-10-23T21:31:42Z)
- CoCA: Regaining Safety-awareness of Multimodal Large Language Models with Constitutional Calibration [90.36429361299807]
Multimodal large language models (MLLMs) have demonstrated remarkable success in engaging in conversations involving visual inputs.
The integration of visual modality has introduced a unique vulnerability: the MLLM becomes susceptible to malicious visual inputs.
We introduce a technique termed CoCA, which amplifies the safety-awareness of the MLLM by calibrating its output distribution.
arXiv Detail & Related papers (2024-09-17T17:14:41Z)
- Relational Database Augmented Large Language Model [59.38841050766026]
Large language models (LLMs) excel in many natural language processing (NLP) tasks.
They can only incorporate new knowledge through training or supervised fine-tuning processes.
Precise, up-to-date, and private information, however, is typically stored in relational databases.
arXiv Detail & Related papers (2024-07-21T06:19:10Z)
- Eyes Closed, Safety On: Protecting Multimodal LLMs via Image-to-Text Transformation [98.02846901473697]
We propose ECSO (Eyes Closed, Safety On), a training-free protecting approach that exploits the inherent safety awareness of MLLMs.
ECSO generates safer responses via adaptively transforming unsafe images into texts to activate the intrinsic safety mechanism of pre-aligned LLMs.
arXiv Detail & Related papers (2024-03-14T17:03:04Z)
- AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting [54.931241667414184]
We propose Adaptive Shield Prompting, which prepends inputs with defense prompts to defend MLLMs against structure-based jailbreak attacks.
Our methods can consistently improve MLLMs' robustness against structure-based jailbreak attacks.
arXiv Detail & Related papers (2024-03-14T15:57:13Z)
- IsolateGPT: An Execution Isolation Architecture for LLM-Based Agentic Systems [34.92643238651264]
Large language models (LLMs) have begun supporting third-party applications.
These apps leverage the de facto natural language-based automated execution paradigm of LLMs.
Because third-party apps may not be trustworthy, and because natural language interfaces are imprecise, current designs pose security and privacy risks for users.
arXiv Detail & Related papers (2024-03-08T00:02:30Z)
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [90.73444232283371]
ShieldLM is a safety detector for Large Language Models (LLMs) that aligns with common safety standards.
We show that ShieldLM surpasses strong baselines across four test sets, showcasing remarkable customizability and explainability.
arXiv Detail & Related papers (2024-02-26T09:43:02Z)
- LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks [18.068035947969044]
There is considerable confusion about the role of Large Language Models (LLMs) in planning and reasoning tasks.
We argue that auto-regressive LLMs cannot, by themselves, do planning or self-verification.
We present a vision of LLM-Modulo Frameworks that combine the strengths of LLMs with external model-based verifiers.
arXiv Detail & Related papers (2024-02-02T14:43:18Z)
- MLLM-Protector: Ensuring MLLM's Safety without Hurting Performance [36.03512474289962]
This paper investigates the novel challenge of defending MLLMs against malicious attacks through visual inputs.
Images act as a "foreign language" that is not considered during safety alignment, making MLLMs more prone to producing harmful responses.
We introduce MLLM-Protector, a plug-and-play strategy that solves two subtasks: 1) identifying harmful responses via a lightweight harm detector, and 2) transforming harmful responses into harmless ones via a detoxifier.
arXiv Detail & Related papers (2024-01-05T17:05:42Z)
- Mutual Enhancement of Large and Small Language Models with Cross-Silo Knowledge Transfer [27.63746419563747]
Large language models (LLMs) are empowered with broad knowledge, but their task-specific performance is often suboptimal.
It necessitates fine-tuning LLMs with task-specific data, but such data may be inaccessible due to privacy concerns.
We propose a novel approach to enhance LLMs with smaller language models (SLMs) that are trained on clients using their private task-specific data.
arXiv Detail & Related papers (2023-12-10T09:52:32Z)
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs [59.596335292426105]
This paper collects the first open-source dataset to evaluate safeguards in large language models.
We train several BERT-like classifiers to achieve results comparable with GPT-4 on automatic safety evaluation.
arXiv Detail & Related papers (2023-08-25T14:02:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.