The Principle of Proportional Duty: A Knowledge-Duty Framework for Ethical Equilibrium in Human and Artificial Systems
- URL: http://arxiv.org/abs/2512.15740v1
- Date: Sun, 07 Dec 2025 02:37:07 GMT
- Title: The Principle of Proportional Duty: A Knowledge-Duty Framework for Ethical Equilibrium in Human and Artificial Systems
- Authors: Timothy Prescher
- Abstract summary: This paper introduces the Principle of Proportional Duty (PPD), a novel framework that models how ethical responsibility scales with an agent's epistemic state. As uncertainty increases, Action Duty (the duty to act decisively) is proportionally converted into Repair Duty (the active duty to verify, inquire, and resolve uncertainty). This paper applies the framework across four domains (clinical ethics, recipient-rights law, economic governance, and artificial intelligence) to demonstrate its cross-disciplinary validity.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Traditional ethical frameworks often struggle to model decision-making under uncertainty, treating it as a simple constraint on action. This paper introduces the Principle of Proportional Duty (PPD), a novel framework that models how ethical responsibility scales with an agent's epistemic state. The framework reveals that moral duty is not lost to uncertainty but transforms: as uncertainty increases, Action Duty (the duty to act decisively) is proportionally converted into Repair Duty (the active duty to verify, inquire, and resolve uncertainty). This dynamic is expressed by the equation D_total = K[(1 - HI) + HI * g(C_signal)], where Total Duty is a function of Knowledge (K), Humility/Uncertainty (HI), and Contextual Signal Strength (C_signal). Monte Carlo simulations demonstrate that systems maintaining a baseline humility coefficient (lambda > 0) produce more stable duty allocations and reduce the risk of overconfident decision-making. By formalizing humility as a system parameter, the PPD offers a mathematically tractable approach to moral responsibility that could inform the development of auditable AI decision systems. This paper applies the framework across four domains (clinical ethics, recipient-rights law, economic governance, and artificial intelligence) to demonstrate its cross-disciplinary validity. The findings suggest that proportional duty serves as a stabilizing principle within complex systems, preventing both overreach and omission by dynamically balancing epistemic confidence against contextual risk.
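A minimal sketch of how the duty equation and the Monte Carlo stability check could be operationalized. The choice g = tanh, the sampling distributions, and the reading of lambda as a lower bound on HI are all assumptions: the abstract fixes none of them.

```python
import numpy as np

def total_duty(K, HI, C_signal, g=np.tanh):
    """PPD duty split: D_total = K * [(1 - HI) + HI * g(C_signal)].

    K        -- knowledge level, in [0, 1]
    HI       -- humility/uncertainty index, in [0, 1]
    C_signal -- contextual signal strength, >= 0
    g        -- monotone map of signal strength into [0, 1); tanh is an
                assumption, since the abstract does not specify g's form
    """
    action_duty = K * (1.0 - HI)         # duty to act decisively
    repair_duty = K * HI * g(C_signal)   # duty to verify and inquire
    return action_duty + repair_duty

def simulate_duty(lmbda, n=100_000, seed=0):
    """Monte Carlo over random epistemic states, with HI floored at the
    baseline humility coefficient lmbda (interpreting lambda > 0 as a
    lower bound on HI is itself an assumption)."""
    rng = np.random.default_rng(seed)
    K = rng.uniform(0.5, 1.0, n)                      # knowledge draws
    HI = np.clip(rng.beta(2.0, 5.0, n), lmbda, 1.0)   # floored humility
    C = rng.exponential(1.0, n)                       # contextual signal
    return total_duty(K, HI, C)

for lmbda in (0.0, 0.1, 0.2):
    d = simulate_duty(lmbda)
    print(f"lambda = {lmbda:.1f}: mean duty = {d.mean():.3f}, std = {d.std():.3f}")
```

Comparing the duty standard deviation across lambda values illustrates the abstract's stability claim under these assumed distributions; it is not a reproduction of the paper's actual simulation setup.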
Related papers
- fEDM+: A Risk-Based Fuzzy Ethical Decision Making Framework with Principle-Level Explainability and Pluralistic Validation [0.0]
We introduce an Explainability and Traceability Module (ETM) that explicitly links each ethical decision rule to the underlying moral principles. We replace single-referent validation with a pluralistic semantic validation framework. The resulting extended fEDM, called fEDM+, preserves formal verifiability while achieving enhanced interpretability and stakeholder-aware validation.
arXiv Detail & Related papers (2026-02-25T09:58:14Z) - Mirror: A Multi-Agent System for AI-Assisted Ethics Review [104.3684024153469]
Mirror is an agentic framework for AI-assisted ethical review. It integrates ethical reasoning, structured rule interpretation, and multi-agent deliberation within a unified architecture.
arXiv Detail & Related papers (2026-02-09T03:38:55Z) - Fluid Agency in AI Systems: A Case for Functional Equivalence in Copyright, Patent, and Tort [0.31061678033205636]
Modern Artificial Intelligence (AI) systems lack human-like consciousness or culpability. Fluid agency generates valuable outputs but collapses attribution. This Article argues that only functional equivalence stabilizes doctrine.
arXiv Detail & Related papers (2026-01-06T01:06:07Z) - From Linear Risk to Emergent Harm: Complexity as the Missing Core of AI Governance [0.0]
Risk-based AI regulation promises proportional controls aligned with anticipated harms. This paper argues that such frameworks often fail for structural reasons. We propose a complexity-based framework for AI governance that treats regulation as intervention rather than control.
arXiv Detail & Related papers (2025-12-14T14:19:21Z) - Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z) - Towards Responsible AI: Advances in Safety, Fairness, and Accountability of Autonomous Systems [0.913755431537592]
This thesis advances knowledge in the safety, fairness, transparency, and accountability of AI systems. We extend classical deterministic shielding techniques to become resilient against delayed observations. We introduce fairness shields, a novel post-processing approach to enforce group fairness in sequential decision-making settings.
arXiv Detail & Related papers (2025-06-11T21:30:02Z) - Beyond Explainability: The Case for AI Validation [0.0]
We argue for a shift toward validation as a central regulatory pillar. Validation, ensuring the reliability, consistency, and robustness of AI outputs, offers a more practical, scalable, and risk-sensitive alternative to explainability. We propose a forward-looking policy framework centered on pre- and post-deployment validation, third-party auditing, harmonized standards, and liability incentives.
arXiv Detail & Related papers (2025-05-27T06:42:41Z) - Causality Is Key to Understand and Balance Multiple Goals in Trustworthy ML and Foundation Models [91.24296813969003]
This paper advocates integrating causal methods into machine learning to navigate the trade-offs among key principles of trustworthy ML. We argue that a causal approach is essential for balancing multiple competing objectives in both trustworthy ML and foundation models.
arXiv Detail & Related papers (2025-02-28T14:57:33Z) - Prioritization First, Principles Second: An Adaptive Interpretation of Helpful, Honest, and Harmless Principles [30.405680322319242]
The Helpful, Honest, and Harmless (HHH) principle is a framework for aligning AI systems with human values. We argue for an adaptive interpretation of the HHH principle and propose a reference framework for its adaptation to diverse scenarios. This work offers practical insights for improving AI alignment, ensuring that HHH principles remain both grounded and operationally effective in real-world AI deployment.
arXiv Detail & Related papers (2025-02-09T22:41:24Z) - Trustworthiness in Stochastic Systems: Towards Opening the Black Box [1.7355698649527407]
Stochastic behavior by an AI system threatens to undermine alignment and potential trust. We take a philosophical perspective on the tension and potential conflict between foundationality and trustworthiness. We propose latent value modeling for both AI systems and users to better assess alignment.
arXiv Detail & Related papers (2025-01-27T19:43:09Z) - Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation. We propose methods tailored to the unique properties of perception and decision-making. We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z) - Towards Responsible AI in Banking: Addressing Bias for Fair
Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)