AI Risk Profiles: A Standards Proposal for Pre-Deployment AI Risk Disclosures
- URL: http://arxiv.org/abs/2309.13176v1
- Date: Fri, 22 Sep 2023 20:45:15 GMT
- Title: AI Risk Profiles: A Standards Proposal for Pre-Deployment AI Risk Disclosures
- Authors: Eli Sherman, Ian W. Eisenberg
- Abstract summary: We propose a risk profiling standard which can guide downstream decision-making.
The standard is built on our proposed taxonomy of AI risks, which reflects a high-level categorization of the wide variety of risks proposed in the literature.
We apply this methodology to a number of prominent AI systems using publicly available information.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As AI systems' sophistication and proliferation have increased, awareness of
the risks has grown proportionally (Sorkin et al. 2023). In response, calls
have grown for stronger emphasis on disclosure and transparency in the AI
industry (NTIA 2023; OpenAI 2023b), with proposals ranging from standardizing
use of technical disclosures, like model cards (Mitchell et al. 2019), to
yet-unspecified licensing regimes (Sindhu 2023). Since the AI value chain is
complicated, with actors representing various expertise, perspectives, and
values, it is crucial that consumers of a transparency disclosure be able to
understand the risks of the AI system the disclosure concerns. In this paper we
propose a risk profiling standard which can guide downstream decision-making,
including triaging further risk assessment, informing procurement and
deployment, and directing regulatory frameworks. The standard is built on our
proposed taxonomy of AI risks, which reflects a high-level categorization of
the wide variety of risks proposed in the literature. We outline the myriad
data sources needed to construct informative Risk Profiles and propose a
template-based methodology for collating risk information into a standard, yet
flexible, structure. We apply this methodology to a number of prominent AI
systems using publicly available information. To conclude, we discuss design
decisions for the profiles and future work.
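To make the template-based methodology concrete: the paper does not publish a schema here, so the following Python sketch is an illustrative assumption; the class names, fields, and flat category list are placeholders standing in for the authors' richer taxonomy and template design.

```python
# Hypothetical sketch of a template-based Risk Profile; this schema is an
# illustrative assumption, not the structure defined in the paper.
from dataclasses import dataclass, field

# Placeholder taxonomy; the paper's actual risk taxonomy differs.
RISK_CATEGORIES = ("abuse_misuse", "privacy", "fairness_bias", "societal_impact")

@dataclass
class RiskEntry:
    category: str        # one of the placeholder categories above
    assessment: str      # free-text summary of the identified risk
    evidence: list[str]  # publicly available sources backing the assessment

@dataclass
class RiskProfile:
    system_name: str
    developer: str
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, category: str, assessment: str, evidence: list[str]) -> None:
        # Enforce the standard structure while leaving the content free-form.
        if category not in RISK_CATEGORIES:
            raise ValueError(f"unknown risk category: {category}")
        self.entries.append(RiskEntry(category, assessment, evidence))

    def summary(self) -> str:
        """Collate entries into a standard, human-readable disclosure."""
        lines = [f"Risk Profile: {self.system_name} ({self.developer})"]
        for e in self.entries:
            lines.append(f"- [{e.category}] {e.assessment} ({len(e.evidence)} sources)")
        return "\n".join(lines)
```

A profile assembled from public documentation could then be rendered with `profile.summary()`; the fixed category tuple mirrors the "standard, yet flexible" intent, since the top-level structure is constrained while the per-entry text stays free-form.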
Related papers
- Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems [2.3266896180922187]
We compile an extensive catalog of risk sources and risk management measures for general-purpose AI systems.
This work involves identifying technical, operational, and societal risks across model development, training, and deployment stages.
The catalog is released under a public domain license for ease of direct use by stakeholders in AI governance and standards.
arXiv Detail & Related papers (2024-10-30T21:32:56Z)
- Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215]
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z)
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z)
- AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies [88.32153122712478]
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks (a sketch of this top tier appears after this list).
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
arXiv Detail & Related papers (2024-06-25T18:13:05Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The prospect of these seismic changes has triggered a lively debate about the technology's risks and has resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Data-Adaptive Tradeoffs among Multiple Risks in Distribution-Free Prediction [55.77015419028725]
We develop methods that permit valid control of risk when threshold and tradeoff parameters are chosen adaptively.
Our methodology supports monotone and nearly-monotone risks, but otherwise makes no distributional assumptions.
arXiv Detail & Related papers (2024-03-28T17:28:06Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Normative Challenges of Risk Regulation of Artificial Intelligence and Automated Decision-Making [0.0]
Recent proposals aim to regulate artificial intelligence (AI) and automated decision-making (ADM).
The most salient example is the Artificial Intelligence Act (AIA) proposed by the European Commission.
This article addresses challenges for adequate risk regulation that arise primarily from the specific type of risks involved.
arXiv Detail & Related papers (2022-11-11T13:57:38Z)
- Quantitative AI Risk Assessments: Opportunities and Challenges [9.262092738841979]
AI-based systems are increasingly being leveraged to provide value to organizations, individuals, and society.
The risks these systems pose have led to proposed regulations, litigation, and general societal concerns.
This paper explores the concept of a quantitative AI Risk Assessment.
arXiv Detail & Related papers (2022-09-13T21:47:25Z)
- Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks [12.927021288925099]
Artificial intelligence (AI) systems can present risks of events with very high or catastrophic consequences at societal scale.
NIST is developing the NIST Artificial Intelligence Risk Management Framework (AI RMF) as voluntary guidance on AI risk assessment and management.
We provide detailed actionable-guidance recommendations focused on identifying and managing risks of events with very high or catastrophic consequences.
arXiv Detail & Related papers (2022-06-17T18:40:41Z)
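As a small illustration of the AIR 2024 taxonomy referenced above, its four top-level categories could be encoded as the first tier of a machine-readable taxonomy; only the four category names come from the abstract, while the enum layout and the elided lower tiers are assumptions for illustration.

```python
# Top tier of the AIR 2024 risk taxonomy, using the four category names
# reported in the abstract above. The lower tiers (down to 314 unique risk
# categories) are elided; this layout is illustrative, not the paper's schema.
from enum import Enum

class TopLevelRisk(Enum):
    SYSTEM_OPERATIONAL = "System & Operational Risks"
    CONTENT_SAFETY = "Content Safety Risks"
    SOCIETAL = "Societal Risks"
    LEGAL_RIGHTS = "Legal & Rights Risks"

# Tiers 2 through 4 would fan out beneath each top-level category.
subcategories: dict[TopLevelRisk, list[str]] = {risk: [] for risk in TopLevelRisk}
```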
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.