A Brief Overview of AI Governance for Responsible Machine Learning Systems
- URL: http://arxiv.org/abs/2211.13130v1
- Date: Mon, 21 Nov 2022 23:48:51 GMT
- Title: A Brief Overview of AI Governance for Responsible Machine Learning Systems
- Authors: Navdeep Gill and Abhishek Mathur and Marcos V. Conde
- Abstract summary: This position paper seeks to present a brief introduction to AI governance, which is a framework designed to oversee the responsible use of AI.
Due to the probabilistic nature of AI, the risks associated with it are far greater than those of traditional technologies.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Organizations of all sizes, across all industries and domains are leveraging
artificial intelligence (AI) technologies to solve some of their biggest
challenges around operations, customer experience, and much more. However, due
to the probabilistic nature of AI, the risks associated with it are far greater
than those of traditional technologies. Research has shown that these risks can
range from regulatory, compliance, reputational, and user-trust risks to
financial and even societal ones. Depending on the nature and size of the
organization, AI technologies can pose a significant risk, if not used in a
responsible way. This position paper seeks to present a brief introduction to
AI governance, which is a framework designed to oversee the responsible use of
AI with the goal of preventing and mitigating risks. Such a framework will not
only help manage risks but also help the organization extract maximum value
from AI projects and develop consistency for organization-wide adoption of AI.
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The prospect of these seismic changes has triggered a lively debate about the potential risks of the technology, and has resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating the core components of GS AI systems, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Hazard Management: A framework for the systematic management of root causes for AI risks [0.0]
This paper introduces the AI Hazard Management (AIHM) framework.
It provides a structured process to systematically identify, assess, and treat AI hazards.
It builds upon an AI hazard list from a comprehensive state-of-the-art analysis.
arXiv Detail & Related papers (2023-10-25T15:55:50Z)
- International Institutions for Advanced AI [47.449762587672986]
International institutions may have an important role to play in ensuring advanced AI systems benefit humanity.
This paper identifies a set of governance functions that could be performed at an international level to address these challenges.
It groups these functions into four institutional models that exhibit internal synergies and have precedents in existing organizations.
arXiv Detail & Related papers (2023-07-10T16:55:55Z)
- An Overview of Catastrophic AI Risks [38.84933208563934]
This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories.
Malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans.
arXiv Detail & Related papers (2023-06-21T03:35:06Z)
- Three lines of defense against risks from AI [0.0]
It is not always clear who is responsible for AI risk management.
The Three Lines of Defense (3LoD) model is considered best practice in many industries.
I suggest ways in which AI companies could implement the model.
arXiv Detail & Related papers (2022-12-16T09:33:00Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Artificial Tikkun Olam: AI Can Be Our Best Friend in Building an Open Human-Computer Society [0.0]
We review the long-standing wisdom that widespread practical deployment of any technology may produce adverse side effects when its know-how is misused.
We describe some of the common and AI specific risks in health industries and other sectors.
We propose a simple intelligent-system quotient that may correspond to a system's adverse societal impact.
arXiv Detail & Related papers (2020-10-20T23:29:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.