QB4AIRA: A Question Bank for AI Risk Assessment
- URL: http://arxiv.org/abs/2305.09300v2
- Date: Tue, 11 Jul 2023 01:57:28 GMT
- Title: QB4AIRA: A Question Bank for AI Risk Assessment
- Authors: Sung Une Lee, Harsha Perera, Boming Xia, Yue Liu, Qinghua Lu, Liming Zhu, Olivier Salvado, Jon Whittle
- Abstract summary: QB4AIRA comprises 293 prioritized questions covering a wide range of AI risk areas.
It serves as a valuable resource for stakeholders in assessing and managing AI risks.
- Score: 19.783485414942284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid advancement of Artificial Intelligence (AI), exemplified by
ChatGPT, has raised concerns about responsible AI development and use.
Existing frameworks lack a comprehensive synthesis of AI risk assessment
questions. To address this, we introduce QB4AIRA, a novel question bank
developed by refining questions from five globally recognized AI risk
frameworks, categorized according to Australia's AI ethics principles. QB4AIRA
comprises 293 prioritized questions covering a wide range of AI risk areas,
facilitating effective risk assessment. It serves as a valuable resource for
stakeholders in assessing and managing AI risks, while paving the way for new
risk frameworks and guidelines. By promoting responsible AI practices, QB4AIRA
supports responsible AI deployment and helps mitigate potential risks and harms.
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging AI trustworthiness standards.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence [35.77247656798871]
The risks posed by Artificial Intelligence (AI) are of considerable concern to academics, auditors, policymakers, AI companies, and the public.
A lack of shared understanding of AI risks can impede our ability to comprehensively discuss, research, and react to them.
This paper addresses this gap by creating an AI Risk Repository to serve as a common frame of reference.
arXiv Detail & Related papers (2024-08-14T10:32:06Z)
- Responsible AI Question Bank: A Comprehensive Tool for AI Risk Assessment [17.026921603767722]
The study introduces our Responsible AI (RAI) Question Bank, a comprehensive framework and tool designed to support diverse AI initiatives.
By integrating AI ethics principles such as fairness, transparency, and accountability into a structured question format, the RAI Question Bank aids in identifying potential risks.
arXiv Detail & Related papers (2024-08-02T22:40:20Z)
- AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies [88.32153122712478]
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
arXiv Detail & Related papers (2024-06-25T18:13:05Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- AI and the Iterable Epistopics of Risk [1.26404863283601]
The risks AI presents to society are broadly understood to be manageable through a general calculus of risk.
This paper elaborates on how risk is apprehended and managed by a regulator, a developer, and a cyber-security expert.
arXiv Detail & Related papers (2024-04-29T13:33:22Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Hazard Management: A framework for the systematic management of root causes for AI risks [0.0]
This paper introduces the AI Hazard Management (AIHM) framework.
It provides a structured process to systematically identify, assess, and treat AI hazards.
It builds upon an AI hazard list from a comprehensive state-of-the-art analysis.
arXiv Detail & Related papers (2023-10-25T15:55:50Z)
- An International Consortium for Evaluations of Societal-Scale Risks from Advanced AI [10.550015825854837]
A regulatory gap has permitted AI labs to conduct research, development, and deployment activities with minimal oversight.
Frontier AI system evaluations have been proposed as a way of assessing risks from the development and deployment of such systems.
This paper proposes a solution in the form of an international consortium for AI risk evaluations, comprising both AI developers and third-party AI risk evaluators.
arXiv Detail & Related papers (2023-10-22T23:37:48Z)