Datenschutzkonformer LLM-Einsatz: Eine Open-Source-Referenzarchitektur (Privacy-Compliant LLM Deployment: An Open-Source Reference Architecture)
- URL: http://arxiv.org/abs/2503.01915v1
- Date: Sat, 01 Mar 2025 14:51:07 GMT
- Title: Datenschutzkonformer LLM-Einsatz: Eine Open-Source-Referenzarchitektur
- Authors: Marian Lambert, Thomas Schuster, Nico Döring, Robin Krüger,
- Abstract summary: We present a reference architecture for developing closed, LLM-based systems using open-source technologies. The architecture provides a flexible and transparent solution that meets strict data privacy and security requirements.
- Score: 0.10713888959520207
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The development of Large Language Models (LLMs) has led to significant advancements in natural language processing and enabled numerous applications across various industries. However, many LLM-based solutions operate as open systems relying on cloud services, which pose risks to data confidentiality and security. To address these challenges, organizations require closed LLM systems that comply with data protection regulations while maintaining high performance. In this paper, we present a reference architecture for developing closed, LLM-based systems using open-source technologies. The architecture provides a flexible and transparent solution that meets strict data privacy and security requirements. We analyze the key challenges in implementing such systems, including computing resources, data management, scalability, and security risks. Additionally, we introduce an evaluation pipeline that enables a systematic assessment of system performance and compliance.
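To make the closed-system idea concrete, the sketch below shows a minimal client that sends every inference request to an open-source model hosted inside the organization's own infrastructure, so no confidential data reaches an external cloud service. The host name, endpoint path, and model identifier are illustrative assumptions (an OpenAI-compatible API such as the one exposed by self-hosted inference servers like vLLM), not the concrete stack described in the paper.
```python
"""Minimal sketch of a client in a closed, self-hosted LLM deployment.

Assumptions (illustrative, not from the paper): an open-source model is
served on-premises by an inference server exposing an OpenAI-compatible
HTTP API (e.g. vLLM), so prompts and documents never leave the local
network.
"""
import requests

# Hypothetical internal endpoint, reachable only from the organization's network.
LOCAL_LLM_URL = "http://llm.internal:8000/v1/chat/completions"
MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"  # example open-source model


def ask_local_llm(prompt: str, timeout: float = 60.0) -> str:
    """Send a prompt to the self-hosted model and return its answer text."""
    payload = {
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    response = requests.post(LOCAL_LLM_URL, json=payload, timeout=timeout)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Confidential input stays on-premises end to end.
    print(ask_local_llm("Summarize our data retention policy in one sentence."))
```
A full deployment along the lines of the reference architecture would wrap this call with the access control, data management, and evaluation pipeline discussed in the abstract.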
Related papers
- Exploring the Roles of Large Language Models in Reshaping Transportation Systems: A Survey, Framework, and Roadmap [51.198001060683296]
Large Language Models (LLMs) offer transformative potential to address transportation challenges.
This survey first presents LLM4TR, a novel conceptual framework that systematically categorizes the roles of LLMs in transportation.
For each role, our review spans diverse applications, from traffic prediction and autonomous driving to safety analytics and urban mobility optimization.
arXiv Detail & Related papers (2025-03-27T11:56:27Z)
- The Next Frontier of LLM Applications: Open Ecosystems and Hardware Synergy [5.667013605202579]
Large Language Model (LLM) applications are shaping the future of AI ecosystems.
This paper envisions the future of LLM applications and proposes a three-layer decoupled architecture.
We highlight key security and privacy challenges for safe, scalable AI deployment.
arXiv Detail & Related papers (2025-03-06T16:38:23Z)
- System-Level Defense against Indirect Prompt Injection Attacks: An Information Flow Control Perspective [24.583984374370342]
Large Language Model-based systems (LLM systems) are information and query processing systems.
We present a system-level defense based on the principles of information flow control that we call an f-secure LLM system.
arXiv Detail & Related papers (2024-09-27T18:41:58Z)
- LLM-PBE: Assessing Data Privacy in Large Language Models [111.58198436835036]
Large Language Models (LLMs) have become integral to numerous domains, significantly advancing applications in data management, mining, and analysis.
Despite the critical nature of this issue, no existing literature offers a comprehensive assessment of data privacy risks in LLMs.
Our paper introduces LLM-PBE, a toolkit crafted specifically for the systematic evaluation of data privacy risks in LLMs.
arXiv Detail & Related papers (2024-08-23T01:37:29Z)
- A Survey of AIOps for Failure Management in the Era of Large Language Models [60.59720351854515]
This paper presents a comprehensive survey of AIOps technology for failure management in the LLM era.
It includes a detailed definition of AIOps tasks for failure management, the data sources for AIOps, and the LLM-based approaches adopted for AIOps.
arXiv Detail & Related papers (2024-06-17T05:13:24Z)
- Efficient Prompting for LLM-based Generative Internet of Things [88.84327500311464]
Large language models (LLMs) have demonstrated remarkable capabilities on various tasks, and integrating these capabilities into Internet of Things (IoT) applications has drawn much research attention recently.
Due to security concerns, many institutions avoid accessing state-of-the-art commercial LLM services, requiring the deployment and utilization of open-source LLMs in a local network setting.
In this study, we propose an LLM-based Generative IoT (GIoT) system deployed in a local network setting.
arXiv Detail & Related papers (2024-06-14T19:24:00Z)
- A New Era in LLM Security: Exploring Security Concerns in Real-World LLM-based Systems [47.18371401090435]
We analyze the security of Large Language Model (LLM) systems, instead of focusing on the individual LLMs.
We propose a multi-layer and multi-step approach and apply it to the state-of-the-art OpenAI GPT-4.
We find that although OpenAI has built numerous safety constraints into GPT-4 to improve its safety, these constraints remain vulnerable to attackers.
arXiv Detail & Related papers (2024-02-28T19:00:12Z)
- LLM Inference Unveiled: Survey and Roofline Model Insights [62.92811060490876]
Large Language Model (LLM) inference is rapidly evolving, presenting a unique blend of opportunities and challenges.
Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also introducing a framework based on the roofline model.
This framework identifies the bottlenecks when deploying LLMs on hardware devices and provides a clear understanding of practical problems.
arXiv Detail & Related papers (2024-02-26T07:33:05Z)
- Building Guardrails for Large Language Models [19.96292920696796]
Guardrails, which filter the inputs or outputs of LLMs, have emerged as a core safeguarding technology.
This position paper takes a deep look at current open-source solutions (Llama Guard, Nvidia NeMo, Guardrails AI) and discusses the challenges and the road towards building more complete solutions.
arXiv Detail & Related papers (2024-02-02T16:35:00Z)
- Risk Taxonomy, Mitigation, and Assessment Benchmarks of Large Language Model Systems [29.828997665535336]
Large language models (LLMs) have strong capabilities in solving diverse natural language processing tasks.
However, the safety and security issues of LLM systems have become the major obstacle to their widespread application.
This paper proposes a comprehensive taxonomy, which systematically analyzes potential risks associated with each module of an LLM system.
arXiv Detail & Related papers (2024-01-11T09:29:56Z)
- ChatSOS: LLM-based knowledge Q&A system for safety engineering [0.0]
This study introduces an LLM-based Q&A system for safety engineering, enhancing the comprehension and response accuracy of the model.
We employ prompt engineering to incorporate external knowledge databases, thus enriching the LLM with up-to-date and reliable information.
Our findings indicate that the integration of external knowledge significantly augments the capabilities of LLM for in-depth problem analysis and autonomous task assignment.
arXiv Detail & Related papers (2023-12-14T03:25:23Z)
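The ChatSOS entry above enriches the LLM with external knowledge through prompt engineering; the sketch below shows one minimal way such grounding can be assembled. The in-memory document list, keyword retrieval, and prompt template are hypothetical stand-ins, not the paper's actual knowledge base or pipeline.
```python
"""Minimal sketch of prompt engineering with an external knowledge base,
in the spirit of the ChatSOS entry above.

The in-memory document list and keyword matching are hypothetical
stand-ins for a real document database and retrieval backend.
"""
from typing import List

KNOWLEDGE_BASE = [
    "Pressure vessels must be inspected at least once every two years.",
    "Hot-work permits are required before welding in confined spaces.",
]


def retrieve(query: str, top_k: int = 2) -> List[str]:
    """Naive keyword-overlap retrieval; a real system would query a search index."""
    terms = set(query.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]


def build_prompt(question: str) -> str:
    """Assemble a prompt that grounds the model in the retrieved documents."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer the safety-engineering question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


if __name__ == "__main__":
    print(build_prompt("How often must pressure vessels be inspected?"))
```
In a closed deployment, the assembled prompt would be sent to a locally hosted model such as the one sketched after the abstract, so both the question and the retrieved documents stay inside the organization's network.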