FhGenie: A Custom, Confidentiality-preserving Chat AI for Corporate and
Scientific Use
- URL: http://arxiv.org/abs/2403.00039v1
- Date: Thu, 29 Feb 2024 09:43:50 GMT
- Title: FhGenie: A Custom, Confidentiality-preserving Chat AI for Corporate and
Scientific Use
- Authors: Ingo Weber, Hendrik Linka, Daniel Mertens, Tamara Muryshkin, Heinrich
Opgenoorth, Stefan Langer
- Abstract summary: We have designed and developed a customized chat AI called FhGenie.
Within a few days of its release, thousands of Fraunhofer employees started using this service.
We discuss challenges, observations, and the core lessons learned from its productive usage.
- Score: 2.927166196773183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since OpenAI's release of ChatGPT, generative AI has received significant
attention across various domains. These AI-based chat systems have the
potential to enhance the productivity of knowledge workers in diverse tasks.
However, the use of free public services poses a risk of data leakage, as
service providers may exploit user input for additional training and
optimization without clear boundaries. Even subscription-based alternatives
sometimes lack transparency in handling user data. To address these concerns
and enable Fraunhofer staff to leverage this technology while ensuring
confidentiality, we have designed and developed a customized chat AI called
FhGenie (genie being a reference to a helpful spirit). Within a few days of its
release, thousands of Fraunhofer employees started using this service. As
pioneers in implementing such a system, we have since seen many other
organizations follow suit. Our solution builds upon commercial large language models (LLMs), which
we have carefully integrated into our system to meet our specific requirements
and compliance constraints, including confidentiality and GDPR. In this paper,
we share detailed insights into the architectural considerations, design,
implementation, and subsequent updates of FhGenie. Additionally, we discuss
challenges, observations, and the core lessons learned from its productive
usage.
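The abstract describes integrating commercial LLMs under confidentiality and GDPR constraints. One common building block in such setups is a gatekeeping layer that redacts obvious personal identifiers before a prompt leaves the organization. The sketch below is purely illustrative: it is not FhGenie's actual pipeline, and the regex patterns and placeholder tokens are assumptions chosen for demonstration.

```python
import re

# Illustrative only: a generic pre-filter that redacts obvious personal
# identifiers from a user prompt before it would be forwarded to an
# external LLM API. This is NOT FhGenie's actual implementation; the
# patterns and placeholders below are assumptions for demonstration.

# Matches common e-mail addresses; the final \w keeps a trailing
# sentence period out of the match.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w.-]+\w")
# Matches phone-number-like digit runs with optional +, spaces, /, -.
PHONE_RE = re.compile(r"\+?\d[\d /-]{7,}\d")

def redact(prompt: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders."""
    prompt = EMAIL_RE.sub("[EMAIL]", prompt)
    prompt = PHONE_RE.sub("[PHONE]", prompt)
    return prompt

if __name__ == "__main__":
    print(redact("Contact jane.doe@fraunhofer.de or +49 89 1234567."))
```

A real deployment would pair such filtering with contractual guarantees from the LLM provider (e.g. no retention of inputs for training), since pattern-based redaction alone cannot catch all confidential content.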
Related papers
- A Practical and Privacy-Preserving Framework for Real-World Large Language Model Services [8.309281698695381]
Large language models (LLMs) have demonstrated exceptional capabilities in text understanding and generation.
Individuals often rely on online AI as a Service (AIaaS) offerings provided by LLM companies.
This business model poses significant privacy risks, as service providers may exploit users' trace patterns and behavioral data.
We propose a practical and privacy-preserving framework that ensures user anonymity by preventing service providers from linking requests to the individuals who submit them.
arXiv Detail & Related papers (2024-11-03T07:40:28Z) - KnowledgeSG: Privacy-Preserving Synthetic Text Generation with Knowledge Distillation from Server [48.04903443425111]
Large language models (LLMs) enable many parties to fine-tune them on their own private data.
Existing solutions, such as utilizing synthetic data for substitution, struggle to simultaneously improve performance and preserve privacy.
We propose KnowledgeSG, a novel client-server framework which enhances synthetic data quality and improves model performance while ensuring privacy.
arXiv Detail & Related papers (2024-10-08T06:42:28Z) - A Qualitative Study on Using ChatGPT for Software Security: Perception vs. Practicality [1.7624347338410744]
ChatGPT is a Large Language Model (LLM) that can perform a variety of tasks with remarkable semantic understanding and accuracy.
This study aims to gain an understanding of the potential of ChatGPT as an emerging technology for supporting software security.
It was determined that security practitioners view ChatGPT as beneficial for various software security tasks, including vulnerability detection, information retrieval, and penetration testing.
arXiv Detail & Related papers (2024-08-01T10:14:05Z) - Large Language Models Meet User Interfaces: The Case of Provisioning Feedback [6.626949691937476]
We present a framework for incorporating GenAI into educational tools and demonstrate its application in our tool, Feedback Copilot.
This work charts a course for the future of GenAI in education.
arXiv Detail & Related papers (2024-04-17T05:05:05Z) - CollaFuse: Navigating Limited Resources and Privacy in Collaborative Generative AI [5.331052581441263]
CollaFuse is a novel framework inspired by split learning.
It enables shared server training and inference, alleviating client computational burdens.
It has the potential to impact various application areas, such as the design of edge computing solutions, healthcare research, or autonomous driving.
arXiv Detail & Related papers (2024-02-29T12:36:10Z) - The Security and Privacy of Mobile Edge Computing: An Artificial Intelligence Perspective [64.36680481458868]
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge.
This paper provides a survey of security and privacy in MEC from the perspective of Artificial Intelligence (AI).
We focus on new security and privacy issues, as well as potential solutions from the viewpoints of AI.
arXiv Detail & Related papers (2024-01-03T07:47:22Z) - Federated Learning-Empowered AI-Generated Content in Wireless Networks [58.48381827268331]
Federated learning (FL) can be leveraged to improve learning efficiency and achieve privacy protection for AI-generated content (AIGC).
We present FL-based techniques for empowering AIGC, and aim to enable users to generate diverse, personalized, and high-quality content.
arXiv Detail & Related papers (2023-07-14T04:13:11Z) - Guiding AI-Generated Digital Content with Wireless Perception [69.51950037942518]
We introduce an integration of wireless perception with AI-generated content (AIGC) to improve the quality of digital content production.
The framework employs a novel multi-scale perception technology to read the user's posture, which is difficult to describe accurately in words, and transmits it to the AIGC model as skeleton images.
Since the production process imposes the user's posture as a constraint on the AIGC model, it makes the generated content more aligned with the user's requirements.
arXiv Detail & Related papers (2023-03-26T04:39:03Z) - Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z) - More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
arXiv Detail & Related papers (2020-08-05T03:07:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.