Enhancing Productivity with AI During the Development of an ISMS: Case Kempower
- URL: http://arxiv.org/abs/2409.19029v1
- Date: Thu, 26 Sep 2024 20:37:31 GMT
- Title: Enhancing Productivity with AI During the Development of an ISMS: Case Kempower
- Authors: Atro Niemeläinen, Muhammad Waseem, Tommi Mikkonen
- Abstract summary: This paper discusses how Kempower, a Finnish company, has effectively used generative AI to create and implement an ISMS.
This research studies how the use of generative AI can enhance the process of creating an ISMS.
- Score: 3.94000837747249
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Investing in an Information Security Management System (ISMS) enhances organizational competitiveness and protects information assets. However, introducing an ISMS consumes significant resources; for instance, implementing an ISMS according to the ISO27001 standard involves documenting 116 different controls. This paper discusses how Kempower, a Finnish company, has effectively used generative AI to create and implement an ISMS, significantly reducing the resources required. This research studies how the use of generative AI can enhance the process of creating an ISMS. We conducted seven semi-structured interviews with various stakeholders of the ISMS project, who had varying levels of experience in cyber security and AI.
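The paper itself does not include code, but as an illustration of the kind of generative-AI assistance it describes, the following is a minimal sketch of using a large language model to produce a first draft of ISO 27001 control documentation. It assumes the openai Python client; the model name, prompt wording, control titles, and company context are illustrative placeholders and are not taken from the Kempower project.

```python
# Minimal sketch: drafting ISO 27001 control documentation with a generative AI.
# Assumptions (not from the paper): the openai Python client is available, and
# the model name, control titles, and company context below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONTROLS = [
    "A.5.1 Policies for information security",
    "A.8.7 Protection against malware",
]

def draft_control_document(control: str, company_context: str) -> str:
    """Ask the model for a first-draft ISMS document covering one control."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "You are an information security consultant drafting "
                           "ISO 27001 ISMS documentation.",
            },
            {
                "role": "user",
                "content": f"Company context: {company_context}\n"
                           f"Draft a policy document for control: {control}. "
                           "Include purpose, scope, responsibilities, and procedures.",
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    context = "Manufacturer of EV charging solutions, roughly 1000 employees."
    for control in CONTROLS:
        print(f"=== {control} ===")
        print(draft_control_document(control, context))
```

Drafts produced this way would still need review by security professionals before inclusion in an audited ISMS; the interviews reported in the paper concern this kind of human-in-the-loop use of generative AI rather than fully automated documentation.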
Related papers
- Intelligent Mobile AI-Generated Content Services via Interactive Prompt Engineering and Dynamic Service Provisioning [55.641299901038316]
Collaborative Mobile AIGC Service Providers (MASPs) can be organized at network edges to provide ubiquitous and customized AI-generated content for resource-constrained users.
Such a paradigm faces two significant challenges: 1) raw prompts often lead to poor generation quality due to users' lack of experience with specific AIGC models, and 2) static service provisioning fails to efficiently utilize computational and communication resources.
We develop an interactive prompt engineering mechanism that leverages a Large Language Model (LLM) to generate customized prompt corpora and employs Inverse Reinforcement Learning (IRL) for policy imitation.
arXiv Detail & Related papers (2025-02-17T03:05:20Z) - The AI Agent Index [8.48525754659057]
Agentic AI systems can plan and execute complex tasks with limited human involvement.
There is currently no structured framework for documenting the technical components, intended uses, and safety features of agentic systems.
The AI Agent Index is the first public database to document information about currently deployed agentic AI systems.
arXiv Detail & Related papers (2025-02-03T18:59:13Z) - Interplay of ISMS and AIMS in context of the EU AI Act [0.0]
The EU AI Act (AIA) mandates the implementation of a risk management system (RMS) and a quality management system (QMS) for high-risk AI systems.
This paper examines the interfaces between an information security management system (ISMS) and an AI management system (AIMS).
Four new AI modules are introduced, proposed for inclusion in the BSI Grundschutz framework to comprehensively ensure the security of AI systems.
arXiv Detail & Related papers (2024-12-24T20:13:19Z) - Large Model Based Agents: State-of-the-Art, Cooperation Paradigms, Security and Privacy, and Future Trends [64.57762280003618]
It is foreseeable that in the near future, LM-driven general AI agents will serve as essential tools in production tasks.
This paper investigates scenarios involving the autonomous collaboration of future LM agents.
arXiv Detail & Related papers (2024-09-22T14:09:49Z) - ComfyBench: Benchmarking LLM-based Agents in ComfyUI for Autonomously Designing Collaborative AI Systems [80.69865295743149]
This work attempts to study using LLM-based agents to design collaborative AI systems autonomously.
Based on ComfyBench, we develop ComfyAgent, a framework that empowers agents to autonomously design collaborative AI systems by generating workflows.
While ComfyAgent achieves a resolve rate comparable to o1-preview and significantly surpasses other agents on ComfyBench, it has resolved only 15% of creative tasks.
arXiv Detail & Related papers (2024-09-02T17:44:10Z) - EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - Design of a Quality Management System based on the EU Artificial Intelligence Act [0.0]
The EU AI Act mandates that providers and deployers of high-risk AI systems establish a quality management system (QMS).
This paper introduces a new design concept and prototype for a QMS as a microservice Software as a Service web application.
arXiv Detail & Related papers (2024-08-08T12:14:02Z) - Navigating the EU AI Act: A Methodological Approach to Compliance for Safety-critical Products [0.0]
This paper presents a methodology for interpreting the EU AI Act requirements for high-risk AI systems.
We first propose an extended product quality model for AI systems, incorporating attributes relevant to the Act not covered by current quality models.
We then propose a contract-based approach to derive technical requirements at the stakeholder level.
arXiv Detail & Related papers (2024-03-25T14:32:18Z) - APPRAISE: a governance framework for innovation with AI systems [0.0]
The EU Artificial Intelligence Act (AIA) is the first serious legislative attempt to contain the harmful effects of AI systems.
This paper proposes a governance framework for AI innovation.
The framework bridges the gap between strategic variables and responsible value creation.
arXiv Detail & Related papers (2023-09-26T12:20:07Z) - ThreatKG: An AI-Powered System for Automated Open-Source Cyber Threat Intelligence Gathering and Management [65.0114141380651]
ThreatKG is an automated system for OSCTI gathering and management.
It efficiently collects a large number of OSCTI reports from multiple sources.
It uses specialized AI-based techniques to extract high-quality knowledge about various threat entities.
arXiv Detail & Related papers (2022-12-20T16:13:59Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)