ff4ERA: A new Fuzzy Framework for Ethical Risk Assessment in AI
- URL: http://arxiv.org/abs/2508.00899v1
- Date: Mon, 28 Jul 2025 14:41:36 GMT
- Title: ff4ERA: A new Fuzzy Framework for Ethical Risk Assessment in AI
- Authors: Abeer Dyoub, Ivan Letteri, Francesca A. Lisi
- Abstract summary: We introduce ff4ERA, a fuzzy framework that integrates Fuzzy Logic, the Fuzzy Analytic Hierarchy Process (FAHP), and Certainty Factors (CF). The framework offers a robust mathematical approach for collaborative Ethical Risk Assessment modeling and systematic, step-by-step analysis. A case study confirms that ff4ERA yields context-sensitive, meaningful risk scores reflecting both expert input and sensor-based evidence.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The emergence of Symbiotic AI (SAI) introduces new challenges to ethical decision-making as it deepens human-AI collaboration. As symbiosis grows, AI systems pose greater ethical risks, including harm to human rights and trust. Ethical Risk Assessment (ERA) thus becomes crucial for guiding decisions that minimize such risks. However, ERA is hindered by uncertainty, vagueness, and incomplete information, and morality itself is context-dependent and imprecise. This motivates the need for a flexible, transparent, yet robust framework for ERA. Our work supports ethical decision-making by quantitatively assessing and prioritizing multiple ethical risks so that artificial agents can select actions aligned with human values and acceptable risk levels. We introduce ff4ERA, a fuzzy framework that integrates Fuzzy Logic, the Fuzzy Analytic Hierarchy Process (FAHP), and Certainty Factors (CF) to quantify ethical risks via an Ethical Risk Score (ERS) for each risk type. The final ERS combines the FAHP-derived weight, propagated CF, and risk level. The framework offers a robust mathematical approach for collaborative ERA modeling and systematic, step-by-step analysis. A case study confirms that ff4ERA yields context-sensitive, ethically meaningful risk scores reflecting both expert input and sensor-based evidence. Risk scores vary consistently with relevant factors while remaining robust to unrelated inputs. Local sensitivity analysis shows predictable, mostly monotonic behavior across perturbations, and global Sobol analysis highlights the dominant influence of expert-defined weights and certainty factors, validating the model design. Overall, the results demonstrate ff4ERA's ability to produce interpretable, traceable, and risk-aware ethical assessments, enabling what-if analyses and guiding designers in calibrating membership functions and expert judgments for reliable ethical decision support.
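The abstract names the three ingredients of the ERS but not the exact aggregation formula, so the following is a minimal, hypothetical Python sketch of the pipeline as described: FAHP-style priority weights approximated with the crisp geometric-mean method (full FAHP would operate on triangular fuzzy numbers), evidence certainty propagated with the standard MYCIN combination rule, risk levels taken from a triangular membership function, and a multiplicative combination assumed for the final score. All function names, the example numbers, and the product form ERS_i = w_i * CF_i * RL_i are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def fahp_weights(pairwise):
    # Crisp geometric-mean approximation of FAHP priority weights;
    # the actual FAHP works with triangular fuzzy comparison values.
    m = np.asarray(pairwise, dtype=float)
    gm = m.prod(axis=1) ** (1.0 / m.shape[1])
    return gm / gm.sum()

def combine_cf(cf_a, cf_b):
    # Standard MYCIN rule for combining two certainty factors in [-1, 1].
    if cf_a >= 0 and cf_b >= 0:
        return cf_a + cf_b * (1.0 - cf_a)
    if cf_a <= 0 and cf_b <= 0:
        return cf_a + cf_b * (1.0 + cf_a)
    return (cf_a + cf_b) / (1.0 - min(abs(cf_a), abs(cf_b)))

def tri_membership(x, a, b, c):
    # Triangular membership function: 0 at a and c, 1 at the peak b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def ethical_risk_scores(weights, cfs, risk_levels):
    # Assumed aggregation: ERS_i = w_i * CF_i * RL_i. The paper's exact
    # combination of weight, propagated CF, and risk level is not given
    # in the abstract.
    return np.asarray(weights) * np.asarray(cfs) * np.asarray(risk_levels)

# Toy example with three hypothetical risk types and made-up judgments.
w = fahp_weights([[1, 3, 5],
                  [1/3, 1, 2],
                  [1/5, 1/2, 1]])
cf = [combine_cf(0.8, 0.6), 0.5, 0.7]           # propagated evidence certainty
rl = [tri_membership(0.65, 0.3, 0.7, 1.0),      # fuzzy-inferred risk levels
      tri_membership(0.40, 0.2, 0.5, 0.8),
      tri_membership(0.25, 0.0, 0.3, 0.6)]
print(dict(zip(["privacy", "trust", "autonomy"],
               ethical_risk_scores(w, cf, rl).round(3))))
```

Under these assumptions, a what-if analysis of the kind the authors describe amounts to perturbing the pairwise comparisons, certainty factors, or membership parameters and re-running the score computation.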
Related papers
- Adapting Probabilistic Risk Assessment for AI
General-purpose artificial intelligence (AI) systems present an urgent risk management challenge. Current methods often rely on selective testing and undocumented assumptions about risk priorities. This paper introduces the probabilistic risk assessment (PRA) for AI framework.
arXiv Detail & Related papers (2025-04-25T17:59:14Z)
- AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability. The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z)
- Fully Autonomous AI Agents Should Not be Developed
This paper argues that fully autonomous AI agents should not be developed. In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels. Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z)
- Human services organizations and the responsible integration of AI: Considering ethics and contextualizing risk(s)
The authors argue that ethical concerns about AI deployment vary significantly based on implementation context and specific use cases. They propose a dimensional risk assessment approach that considers factors like data sensitivity, professional oversight requirements, and potential impact on client wellbeing.
arXiv Detail & Related papers (2025-01-20T19:38:21Z)
- Risks and NLP Design: A Case Study on Procedural Document QA
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction. Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results. However, the deployment of these agents in physical environments presents significant safety challenges. This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Elevating Software Trust: Unveiling and Quantifying the Risk Landscape
We propose a risk assessment framework called SAFER (Software Analysis Framework for Evaluating Risk). This framework is based on the necessity of a dynamic, data-driven, and adaptable process to quantify security risk in the software supply chain. The results suggest that SAFER mitigates subjectivity and yields dynamic data-driven weights as well as security risk scores.
arXiv Detail & Related papers (2024-08-06T00:50:08Z)
- AI as Decision-Maker: Ethics and Risk Preferences of LLMs
Large Language Models (LLMs) exhibit surprisingly diverse risk preferences when acting as AI decision makers. We analyze 50 LLMs using behavioral tasks, finding stable but diverse risk profiles.
arXiv Detail & Related papers (2024-06-03T10:05:25Z)
- ABI Approach: Automatic Bias Identification in Decision-Making Under Risk based in an Ontology of Behavioral Economics
Risk-seeking preferences for losses, driven by biases such as loss aversion, pose challenges and can result in severe negative consequences.
This research introduces the ABI approach, a novel solution designed to support organizational decision-makers by automatically identifying and explaining risk-seeking preferences.
arXiv Detail & Related papers (2024-05-22T23:53:46Z)
- Model evaluation for extreme risks
Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills.
We explain why model evaluation is critical for addressing extreme risks.
arXiv Detail & Related papers (2023-05-24T16:38:43Z)
- Quantifying Uncertainty in Risk Assessment using Fuzzy Theory
Risk specialists are trying to understand risk better and use complex models for risk assessment.
Traditional risk models are based on classical set theory.
We will discuss the methodology, framework, and process of using fuzzy logic systems in risk assessment.
arXiv Detail & Related papers (2020-09-20T02:12:44Z)
- Learning Bounds for Risk-sensitive Learning
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss.
We study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents.
arXiv Detail & Related papers (2020-06-15T05:25:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.