Governance of Generative Artificial Intelligence for Companies
- URL: http://arxiv.org/abs/2403.08802v2
- Date: Sun, 9 Jun 2024 19:48:05 GMT
- Title: Governance of Generative Artificial Intelligence for Companies
- Authors: Johannes Schneider, Rene Abraham, Christian Meske,
- Abstract summary: We develop a framework for GenAI governance within companies.
This framework outlines the scope, objectives, and governance mechanisms tailored to harness business opportunities.
- Score: 1.4003044924094596
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Artificial Intelligence (GenAI), specifically large language models like ChatGPT, has swiftly entered organizations without adequate governance, posing both opportunities and risks. Despite extensive debates on GenAI's transformative nature and regulatory measures, limited research addresses organizational governance, encompassing technical and business perspectives. Our review paper fills this gap by surveying recent works with the purpose of developing a framework for GenAI governance within companies. This framework outlines the scope, objectives, and governance mechanisms tailored to harness business opportunities as well as mitigate risks associated with GenAI integration. Our research contributes a focused approach to GenAI governance, offering practical insights for companies navigating the challenges of GenAI adoption and highlighting research gaps.
Related papers
- Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- Model-based Maintenance and Evolution with GenAI: A Look into the Future [47.93555901495955]
We argue that Generative Artificial Intelligence (GenAI) can be used as a means to address the limitations of Model-Based Maintenance and Evolution (MBM&E).
We propose that GenAI can be used in MBM&E to reduce engineers' learning curve, maximize efficiency through recommendations, and serve as a reasoning tool for understanding domain problems.
arXiv Detail & Related papers (2024-07-09T23:13:26Z)
- SecGenAI: Enhancing Security of Cloud-based Generative AI Applications within Australian Critical Technologies of National Interest [0.0]
SecGenAI is a comprehensive security framework for cloud-based GenAI applications.
The framework is aligned with the Australian Privacy Principles, the AI Ethics Principles, and guidelines from the Australian Cyber Security Centre and the Digital Transformation Agency.
arXiv Detail & Related papers (2024-07-01T09:19:50Z) - Securing the Future of GenAI: Policy and Technology [50.586585729683776]
Governments globally are grappling with the challenge of regulating GenAI, balancing innovation against safety.
A workshop co-organized by Google, the University of Wisconsin-Madison, and Stanford University aimed to bridge this gap between GenAI policy and technology.
This paper summarizes the workshop discussions, which addressed questions such as: How can regulation be designed without hindering technological progress?
arXiv Detail & Related papers (2024-05-21T20:30:01Z) - Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of areas, ranging from science and medicine to education.
The potential for these seismic changes has triggered a lively debate about the technology's risks and has resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z) - The AI Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment [0.0]
We outline a practical, simple, and sufficiently comprehensive tool to allow for the integration of GenAI tools into educational assessment.
The AI Assessment Scale (AIAS) empowers educators to select the appropriate level of GenAI usage in assessments.
By adopting a practical, flexible approach, the AIAS can form a much-needed starting point to address the current uncertainty and anxiety regarding GenAI in education.
arXiv Detail & Related papers (2023-12-12T09:08:36Z) - Generative Artificial Intelligence in Healthcare: Ethical Considerations
and Assessment Checklist [10.980912140648648]
We conduct a scoping review of ethical discussions on generative artificial intelligence (GenAI) in healthcare.
We propose to reduce the gaps by developing a checklist for comprehensive assessment and transparent documentation of ethical discussions in GenAI research.
arXiv Detail & Related papers (2023-11-02T11:55:07Z) - From Generative AI to Generative Internet of Things: Fundamentals,
Framework, and Outlooks [82.964958051535]
Generative Artificial Intelligence (GAI) possesses the capabilities of generating realistic data and facilitating advanced decision-making.
By integrating GAI into modern Internet of Things (IoT), Generative Internet of Things (GIoT) is emerging and holds immense potential to revolutionize various aspects of society.
arXiv Detail & Related papers (2023-10-27T02:58:11Z) - Generative AI in the Construction Industry: Opportunities & Challenges [2.562895371316868]
The current surge of interest has not been matched by a study investigating the opportunities and challenges of implementing Generative AI (GenAI) in the construction sector.
This study examines the perceptions reflected in the literature and analyzes industry perception using programming-based word-cloud and frequency analysis.
The paper recommends a conceptual GenAI implementation framework, provides practical recommendations, summarizes future research questions, and builds a foundational literature base to foster subsequent GenAI research.
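The word-frequency analysis mentioned in this entry can be sketched in a few lines of Python. This is a generic illustration, not the paper's actual code; the sample abstracts and the stopword list are invented for the example:

```python
from collections import Counter
import re


def word_frequencies(texts, stopwords=None):
    """Count word occurrences across a collection of texts,
    skipping any words in the optional stopword set."""
    stopwords = stopwords or set()
    counts = Counter()
    for text in texts:
        # Extract lowercase alphabetic tokens only.
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in stopwords:
                counts[word] += 1
    return counts


# Invented sample abstracts for demonstration.
abstracts = [
    "Generative AI in construction offers opportunities",
    "Challenges of generative AI adoption in construction",
]
freq = word_frequencies(abstracts, stopwords={"in", "of"})
print(freq.most_common(3))
```

The resulting counts are what a word-cloud library would typically consume to size each term.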
arXiv Detail & Related papers (2023-09-19T18:20:49Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.