Assessing the Case for Africa-Centric AI Safety Evaluations
- URL: http://arxiv.org/abs/2602.13757v1
- Date: Sat, 14 Feb 2026 13:04:52 GMT
- Title: Assessing the Case for Africa-Centric AI Safety Evaluations
- Authors: Gathoni Ireri, Cecil Abungu, Jean Cheptumo, Sienka Dounia, Mark Gitau, Stephanie Kasaon, Michael Michie, Chinasa Okolo, Jonathan Shock,
- Abstract summary: We develop a taxonomy for identifying Africa-centric severe AI risks. We propose threat modelling strategies for African contexts. We offer practical guidance for running evaluations under resource constraints.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Frontier AI systems are being adopted across Africa, yet most AI safety evaluations are designed and validated in Western environments. In this paper, we argue that this portability gap can leave Africa-centric pathways to severe harm untested when frontier AI systems are embedded in materially constrained and interdependent infrastructures. We define severe AI risks as material risks from frontier AI systems that result in critical harm, measured as the grave injury or death of thousands of people or economic loss and damage equivalent to five percent of a country's GDP. To support AI safety evaluation design, we develop a taxonomy for identifying Africa-centric severe AI risks. The taxonomy links outcome thresholds to process pathways that model risk as the intersection of hazard, vulnerability, and exposure. We distinguish severe risks by amplification and suddenness, where amplification requires that frontier AI be a necessary magnifier of latent danger and suddenness captures harms that materialise rapidly enough to overwhelm ordinary coping and governance capacity. We then propose threat modelling strategies for African contexts, surveying reference class forecasting, structured expert elicitation, scenario planning, and systems-theoretic process analysis (STPA), and tailoring them to constraints of limited resources, poor connectivity, limited technical expertise, weak state capacity, and conflict. We also examine AI misalignment risk, concluding that Africa is more likely to expose universal failure modes through distributional shift than to generate distinct pathways of misalignment. Finally, we offer practical guidance for running evaluations under resource constraints, emphasising open and extensible tooling, tiered evaluation pipelines, and sharing methods and findings to broaden evaluation scope.
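The abstract's severity test reads as a compact predicate: a pathway qualifies only when hazard, vulnerability, and exposure intersect, frontier AI is a necessary magnifier, and the projected outcome crosses a critical-harm threshold. The sketch below is a minimal Python rendering of that reading; the class names, boolean field encodings, and the exact casualty cutoff are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, not the authors' implementation: field names, the casualty
# cutoff, and the boolean encoding of hazard/vulnerability/exposure are
# illustrative assumptions drawn only from the abstract above.
from dataclasses import dataclass

CASUALTY_THRESHOLD = 1_000  # "grave injury or death of thousands of people"
GDP_LOSS_FRACTION = 0.05    # "economic loss ... five percent of a country's GDP"

@dataclass
class Pathway:
    hazard: bool         # a latent danger exists
    vulnerability: bool  # affected systems or populations are susceptible
    exposure: bool       # people or infrastructure lie in the pathway's reach
    amplification: bool  # frontier AI is a *necessary* magnifier of the danger
    suddenness: bool     # harm materialises fast enough to overwhelm coping capacity

@dataclass
class Outcome:
    casualties: int       # projected grave injuries or deaths
    economic_loss: float  # projected loss, in the same currency as gdp
    gdp: float            # the country's GDP

def crosses_critical_harm(o: Outcome) -> bool:
    """Outcome threshold: thousands of casualties, or >= 5% of GDP lost."""
    return (o.casualties >= CASUALTY_THRESHOLD
            or o.economic_loss >= GDP_LOSS_FRACTION * o.gdp)

def is_severe_africa_centric_risk(p: Pathway, o: Outcome) -> bool:
    """Process pathway: risk is the intersection of hazard, vulnerability,
    and exposure, with frontier AI as a necessary magnifier; the outcome
    must also cross a critical-harm threshold."""
    return (p.hazard and p.vulnerability and p.exposure
            and p.amplification and crosses_critical_harm(o))
```

On this reading, suddenness does not gate severity on its own; it flags pathways likely to outpace ordinary coping and governance capacity, which is why it is kept as an explicit field rather than folded into the predicate.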
Related papers
- Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report v1.5 [61.787178868669265]
This technical report presents an updated and granular assessment of five critical dimensions: cyber offense, persuasion and manipulation, strategic deception, uncontrolled AI R&D, and self-replication. This work reflects our current understanding of AI frontier risks and urges collective action to mitigate these challenges.
arXiv Detail & Related papers (2026-02-16T04:30:06Z)
- Toward Quantitative Modeling of Cybersecurity Risks Due to AI Misuse [50.87630846876635]
We develop nine detailed cyber risk models. Each model decomposes attacks into steps using the MITRE ATT&CK framework, and individual estimates are aggregated through Monte Carlo simulation (a minimal sketch of this style of aggregation appears after this list).
arXiv Detail & Related papers (2025-12-09T17:54:17Z)
- The Role of Risk Modeling in Advanced AI Risk Management [33.357295564462284]
Rapidly advancing artificial intelligence (AI) systems introduce novel, uncertain, and potentially catastrophic risks. Managing these risks requires a mature risk-management infrastructure whose cornerstone is rigorous risk modeling. We argue that advanced-AI governance should adopt a similar dual approach and that verifiable, provably-safe AI architectures are urgently needed.
arXiv Detail & Related papers (2025-12-09T15:37:33Z)
- Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report [51.17413460785022]
This report presents a comprehensive assessment of their frontier risks. We identify critical risks in seven areas: cyber offense, biological and chemical risks, persuasion and manipulation, uncontrolled autonomous AI R&D, strategic deception and scheming, self-replication, and collusion.
arXiv Detail & Related papers (2025-07-22T12:44:38Z)
- Systematic Hazard Analysis for Frontier AI using STPA [0.0]
Frontier AI companies currently do not describe in detail any structured approach to identifying and analysing hazards. STPA (Systems-Theoretic Process Analysis) is a systematic methodology for identifying how complex systems can become unsafe, leading to hazards. We evaluate STPA's ability to broaden the scope, improve traceability, and strengthen the robustness of safety assurance for frontier AI systems (a sketch of STPA's unsafe-control-action enumeration appears after this list).
arXiv Detail & Related papers (2025-06-02T15:28:34Z)
- AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability. The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction. Foundation models serving as the "brain" of EAI agents for high-level task planning have shown promising results. However, deploying these agents in physical environments presents significant safety challenges. This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems. There is a lack of consensus about how exactly such risks arise, and how to manage them. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Hazard Management: A framework for the systematic management of root causes for AI risks [0.0]
This paper introduces the AI Hazard Management (AIHM) framework. It provides a structured process to systematically identify, assess, and treat AI hazards. It builds upon an AI hazard list from a comprehensive state-of-the-art analysis.
arXiv Detail & Related papers (2023-10-25T15:55:50Z)
- Quantitative AI Risk Assessments: Opportunities and Challenges [7.35411010153049]
The best way to reduce risks is to implement comprehensive AI lifecycle governance. Risks can be quantified using metrics from the technical community. This paper explores these issues, focusing on the opportunities, challenges, and potential impacts of such an approach.
arXiv Detail & Related papers (2022-09-13T21:47:25Z)
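As noted in the entry for "Toward Quantitative Modeling of Cybersecurity Risks Due to AI Misuse" above, attacks are decomposed into MITRE ATT&CK steps and per-step estimates are aggregated through Monte Carlo simulation. The following is a minimal sketch of that style of aggregation, assuming Python; the step names and Beta-distributed expert estimates are hypothetical, and the paper's actual nine models and distributions are not reproduced here.

```python
# Hypothetical sketch of Monte Carlo aggregation over attack steps.
# Step names and Beta(alpha, beta) parameters are invented for illustration.
import random

# Per-step success probabilities elicited from experts, encoded as
# Beta distributions to capture uncertainty in each estimate.
ATTACK_STEPS = {
    "initial-access": (2.0, 8.0),
    "privilege-escalation": (3.0, 7.0),
    "lateral-movement": (4.0, 6.0),
    "exfiltration": (5.0, 5.0),
}

def simulate_attack_success(n_trials: int = 100_000, seed: int = 0) -> float:
    """Estimate end-to-end attack success probability: in each trial,
    sample every step's success probability and require all steps to
    succeed for the attack to succeed."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        ok = True
        for alpha, beta in ATTACK_STEPS.values():
            p_step = rng.betavariate(alpha, beta)  # sample this step's success probability
            if rng.random() >= p_step:             # did the step succeed this trial?
                ok = False
                break
        successes += ok
    return successes / n_trials

print(f"Estimated end-to-end success rate: {simulate_attack_success():.4f}")
```

Sampling each step's probability per trial, rather than multiplying point estimates, propagates expert uncertainty into the end-to-end figure.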
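The STPA entry above, like the main paper's survey of systems-theoretic process analysis, turns on enumerating ways control actions can become unsafe. The sketch below crosses a control structure with STPA's four standard unsafe-control-action types; the controllers and actions are invented for illustration and are not drawn from either paper.

```python
# Illustrative STPA-style unsafe-control-action (UCA) enumeration.
# The control structure below is hypothetical; the four UCA types are
# the standard ones defined by STPA itself.
from dataclasses import dataclass
from itertools import product

UCA_TYPES = [
    "not provided when needed",
    "provided when unsafe",
    "provided too early, too late, or out of order",
    "stopped too soon or applied too long",
]

@dataclass
class ControlAction:
    controller: str
    action: str
    controlled_process: str

# Hypothetical control structure for a frontier model deployed in a
# resource-constrained setting (e.g., an agricultural advisory service).
CONTROL_ACTIONS = [
    ControlAction("AI advisory system", "issue planting recommendation", "smallholder farms"),
    ControlAction("deployment operator", "push model update", "AI advisory system"),
]

def enumerate_candidate_ucas(actions):
    """Cross each control action with the four STPA UCA types to produce
    candidate hazards for analyst review."""
    for ca, uca_type in product(actions, UCA_TYPES):
        yield f"{ca.controller} -> '{ca.action}' on {ca.controlled_process}: {uca_type}"

for candidate in enumerate_candidate_ucas(CONTROL_ACTIONS):
    print(candidate)
```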