AI Risk Profiles: A Standards Proposal for Pre-Deployment AI Risk Disclosures
- URL: http://arxiv.org/abs/2309.13176v1
- Date: Fri, 22 Sep 2023 20:45:15 GMT
- Title: AI Risk Profiles: A Standards Proposal for Pre-Deployment AI Risk Disclosures
- Authors: Eli Sherman, Ian W. Eisenberg
- Abstract summary: We propose a risk profiling standard which can guide downstream decision-making.
The standard is built on our proposed taxonomy of AI risks, which reflects a high-level categorization of the wide variety of risks proposed in the literature.
We apply this methodology to a number of prominent AI systems using publicly available information.
- Score: 0.8702432681310399
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As AI systems' sophistication and proliferation have increased, awareness of
the risks has grown proportionally (Sorkin et al. 2023). In response, calls
have grown for stronger emphasis on disclosure and transparency in the AI
industry (NTIA 2023; OpenAI 2023b), with proposals ranging from standardizing
use of technical disclosures, like model cards (Mitchell et al. 2019), to
yet-unspecified licensing regimes (Sindhu 2023). Since the AI value chain is
complicated, with actors representing various expertise, perspectives, and
values, it is crucial that consumers of a transparency disclosure be able to
understand the risks of the AI system the disclosure concerns. In this paper we
propose a risk profiling standard which can guide downstream decision-making,
including triaging further risk assessment, informing procurement and
deployment, and directing regulatory frameworks. The standard is built on our
proposed taxonomy of AI risks, which reflects a high-level categorization of
the wide variety of risks proposed in the literature. We outline the myriad
data sources needed to construct informative Risk Profiles and propose a
template-based methodology for collating risk information into a standard, yet
flexible, structure. We apply this methodology to a number of prominent AI
systems using publicly available information. To conclude, we discuss design
decisions for the profiles and future work.
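As a rough illustration of the template-based methodology described in the abstract, the sketch below shows one way a machine-readable Risk Profile could be structured in Python. The category names, fields, and example values are illustrative assumptions, not the paper's actual taxonomy or template.

```python
# A minimal sketch of a machine-readable Risk Profile; the taxonomy,
# field names, and example values are assumptions, not the paper's.
from dataclasses import dataclass, field
from enum import Enum


class RiskCategory(Enum):
    # Illustrative high-level categories, not the paper's exact taxonomy.
    ABUSE_AND_MISUSE = "abuse_and_misuse"
    FAIRNESS_AND_BIAS = "fairness_and_bias"
    PRIVACY = "privacy"
    SECURITY = "security"
    PERFORMANCE_AND_ROBUSTNESS = "performance_and_robustness"


@dataclass
class RiskEntry:
    category: RiskCategory
    summary: str                                        # plain-language description
    evidence: list[str] = field(default_factory=list)   # public sources consulted


@dataclass
class RiskProfile:
    system_name: str
    developer: str
    entries: list[RiskEntry] = field(default_factory=list)

    def categories_covered(self) -> set[RiskCategory]:
        return {e.category for e in self.entries}


profile = RiskProfile(
    system_name="ExampleLLM",
    developer="Example Corp",
    entries=[
        RiskEntry(
            category=RiskCategory.PRIVACY,
            summary="Training data may contain personal information.",
            evidence=["model card", "developer blog post"],
        )
    ],
)
print(profile.categories_covered())
```

Keeping categories in a shared enum while leaving each entry's text free-form is one way to realize the "standard, yet flexible" structure the abstract describes: profiles stay comparable across systems without constraining what evidence they cite.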
Related papers
- SAIF: A Comprehensive Framework for Evaluating the Risks of Generative AI in the Public Sector [4.710921988115686]
We propose a Systematic dAta generatIon Framework for evaluating the risks of generative AI (SAIF).
SAIF involves four key stages: breaking down risks, designing scenarios, applying jailbreak methods, and exploring prompt types; a toy pipeline sketch follows this entry.
We believe that this study can play a crucial role in fostering the safe and responsible integration of generative AI into the public sector.
arXiv Detail & Related papers (2025-01-15T14:12:38Z)
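The SAIF entry above names four stages but gives no implementation detail. The sketch below shows how such a stage pipeline might compose into concrete test cases; every function, risk, scenario, and jailbreak name here is invented for illustration and is not drawn from the paper.

```python
# Illustrative-only sketch of SAIF's four stages as a test-case
# pipeline; all names and stage contents are assumptions.

def break_down_risks() -> list[str]:
    # Stage 1: decompose "generative AI risk" into concrete sub-risks.
    return ["disinformation", "privacy_leakage", "biased_outputs"]


def design_scenarios(risk: str) -> list[str]:
    # Stage 2: draft public-sector usage scenarios for each sub-risk.
    return [f"{risk}/citizen_service_chatbot", f"{risk}/document_drafting"]


def apply_jailbreaks(scenario: str) -> list[str]:
    # Stage 3: wrap each scenario in candidate jailbreak methods.
    return [f"{scenario}+role_play", f"{scenario}+prefix_injection"]


def explore_prompt_types(attack: str) -> list[str]:
    # Stage 4: vary the surface form of the final prompt.
    return [f"{attack}|question", f"{attack}|instruction"]


test_cases = [
    prompt
    for risk in break_down_risks()
    for scenario in design_scenarios(risk)
    for attack in apply_jailbreaks(scenario)
    for prompt in explore_prompt_types(attack)
]
print(len(test_cases), "generated test cases")  # 3 * 2 * 2 * 2 = 24
```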
- Supervision policies can shape long-term risk management in general-purpose AI models [0.0]
We develop a simulation framework parameterized by features extracted from the diverse landscape of risk, incident, or hazard reporting ecosystems.
We evaluate four supervision policies: non-prioritized (first-come, first-served), random selection, priority-based (addressing the highest-priority risks first), and diversity-prioritized (balancing high-priority risks with comprehensive coverage across risk types); a toy sketch of each policy follows this entry.
Our results indicate that while priority-based and diversity-prioritized policies are more effective at mitigating high-impact risks, they may inadvertently neglect systemic issues reported by the broader community.
arXiv Detail & Related papers (2025-01-10T17:52:34Z)
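For the supervision-policies entry above, here is a minimal sketch of the four selection rules applied to a queue of incident reports. The report schema, the scoring, and the diversity bonus are assumptions, not the paper's simulation framework.

```python
# Toy versions of the four supervision policies; each maps a queue of
# pending reports to the next report to review. Schema is assumed.
import random
from dataclasses import dataclass


@dataclass
class Report:
    arrival: int      # order of arrival
    priority: float   # estimated severity
    risk_type: str    # e.g. "bias", "privacy", "misuse"


def non_prioritized(queue: list[Report]) -> Report:
    # First-come, first-served.
    return min(queue, key=lambda r: r.arrival)


def random_selection(queue: list[Report]) -> Report:
    return random.choice(queue)


def priority_based(queue: list[Report]) -> Report:
    # Always address the highest-priority report first.
    return max(queue, key=lambda r: r.priority)


def diversity_prioritized(queue: list[Report], seen: set[str]) -> Report:
    # Prefer high priority, but boost risk types not yet covered.
    def score(r: Report) -> float:
        return r.priority + (1.0 if r.risk_type not in seen else 0.0)
    return max(queue, key=score)


# Tiny simulation loop under one policy.
queue = [
    Report(0, 0.9, "misuse"),
    Report(1, 0.4, "privacy"),
    Report(2, 0.8, "misuse"),
]
seen: set[str] = set()
while queue:
    chosen = diversity_prioritized(queue, seen)
    seen.add(chosen.risk_type)
    queue.remove(chosen)
    print("reviewing", chosen.risk_type, chosen.priority)
```

The diversity bonus makes the tension in the abstract concrete: a pure `priority_based` rule would review both "misuse" reports before the lower-priority "privacy" one, leaving that risk type uncovered for longer.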
- Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215]
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z)
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of this survey literature to provide a concise and usable overview of privacy risks in general-purpose AI systems (GPAIS).
arXiv Detail & Related papers (2024-07-02T07:49:48Z)
- AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies [88.32153122712478]
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks; a sketch of this nesting follows the entry.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
arXiv Detail & Related papers (2024-06-25T18:13:05Z)
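The four top-level tiers in the sketch below are the ones quoted in the entry above; everything beneath them is a made-up placeholder meant only to show how a four-tiered taxonomy can nest and be traversed, not the paper's actual 314 categories.

```python
# Four-tiered nesting sketch: top tier -> mid tiers -> leaf list.
# Only the top-tier names come from the abstract; the rest is invented.
taxonomy: dict[str, dict[str, dict[str, list[str]]]] = {
    "System & Operational Risks": {
        "Security Risks": {
            "Confidentiality": ["unauthorized data access"],
        },
    },
    "Content Safety Risks": {
        "Violence & Extremism": {
            "Incitement": ["calls to violence"],
        },
    },
    "Societal Risks": {
        "Economic Harm": {
            "Labor Displacement": ["automation of licensed work"],
        },
    },
    "Legal & Rights Risks": {
        "Privacy": {
            "Data Protection": ["processing without consent"],
        },
    },
}


def count_leaves(node) -> int:
    # Recursively count lowest-tier categories in the nested mapping.
    if isinstance(node, list):
        return len(node)
    return sum(count_leaves(child) for child in node.values())


print(count_leaves(taxonomy), "leaf categories in this toy excerpt")
```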
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the risks of the technology and has resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Data-Adaptive Tradeoffs among Multiple Risks in Distribution-Free Prediction [55.77015419028725]
We develop methods that permit valid control of risk when threshold and tradeoff parameters are chosen adaptively.
Our methodology supports monotone and nearly-monotone risks, but otherwise makes no distributional assumptions; a toy threshold-selection sketch follows this entry.
arXiv Detail & Related papers (2024-03-28T17:28:06Z)
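For the distribution-free prediction entry above, the toy below picks a threshold that controls a monotone risk on calibration data, in the spirit of conformal risk control. The synthetic data, the loss, and the finite-sample correction are generic assumptions and do not reproduce the paper's data-adaptive tradeoff procedure.

```python
# Toy threshold selection for a monotone risk; generic recipe, not
# the paper's method. Loss at threshold lam: the true label's score
# falls below lam (i.e., it is excluded from {labels: score >= lam}).
# This loss is monotone nondecreasing in lam.
import numpy as np

rng = np.random.default_rng(0)
n = 500
true_label_scores = rng.beta(5, 2, size=n)  # calibration scores, higher = better

alpha = 0.1  # target risk level
lam_hat = 0.0
for lam in np.linspace(0, 1, 1001):
    risk = (true_label_scores < lam).mean()
    # Conservative finite-sample correction, as in conformal risk control.
    if (n * risk + 1) / (n + 1) <= alpha:
        lam_hat = lam  # largest threshold still meeting the target
    else:
        break  # monotonicity: larger thresholds only increase risk
print("selected threshold:", lam_hat)
```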
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seem to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Quantitative AI Risk Assessments: Opportunities and Challenges [7.35411010153049]
The best way to reduce risks is to implement comprehensive AI lifecycle governance.
Risks can be quantified using metrics from the technical community; a toy aggregation sketch follows this entry.
This paper explores these issues, focusing on the opportunities, challenges, and potential impacts of such an approach.
arXiv Detail & Related papers (2022-09-13T21:47:25Z)
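For the quantitative-assessment entry above, a hedged sketch of rolling technical metrics up into a single risk summary; the metric names, measured values, thresholds, and aggregation rule are all invented for illustration.

```python
# Invented metrics and thresholds; the aggregation rule (fraction of
# metrics out of bounds) is one simple choice among many.
metrics = {
    # metric name: (measured value, acceptable threshold); lower is better
    "demographic_parity_gap": (0.08, 0.10),
    "adversarial_error_rate": (0.22, 0.15),
    "membership_inference_auc": (0.56, 0.60),
}

flags = {name: value > limit for name, (value, limit) in metrics.items()}
risk_score = sum(flags.values()) / len(flags)

for name, flagged in flags.items():
    print(f"{name}: {'EXCEEDS threshold' if flagged else 'within threshold'}")
print(f"fraction of metrics out of bounds: {risk_score:.2f}")
```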
- Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks [12.927021288925099]
Artificial intelligence (AI) systems can present risks of events with very high or catastrophic consequences at societal scale.
NIST is developing the NIST Artificial Intelligence Risk Management Framework (AI RMF) as voluntary guidance on AI risk assessment and management.
We provide detailed actionable-guidance recommendations focused on identifying and managing risks of events with very high or catastrophic consequences.
arXiv Detail & Related papers (2022-06-17T18:40:41Z)