The Runtime Dimension of Ethics in Self-Adaptive Systems
- URL: http://arxiv.org/abs/2602.17426v1
- Date: Thu, 19 Feb 2026 14:57:30 GMT
- Title: The Runtime Dimension of Ethics in Self-Adaptive Systems
- Authors: Marco Autili, Gianluca Filippone, Mashal Afzal Memon, Patrizio Pelliccione
- Abstract summary: Self-adaptive systems operate in close interaction with humans, often sharing the same physical or virtual environments. Current approaches encode ethics as fixed, rule-based constraints or as a single chosen ethical theory embedded at design time. This paper advocates a shift from static ethical rules to runtime ethical reasoning for self-adaptive systems.
- Score: 5.0803118668252925
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Self-adaptive systems increasingly operate in close interaction with humans, often sharing the same physical or virtual environments and making decisions with ethical implications at runtime. Current approaches typically encode ethics as fixed, rule-based constraints or as a single chosen ethical theory embedded at design time. This overlooks a fundamental property of human-system interaction settings: ethical preferences vary across individuals and groups, evolve with context, and may conflict, while still needing to remain within a legally and regulatorily defined hard-ethics envelope (e.g., safety and compliance constraints). This paper advocates a shift from static ethical rules to runtime ethical reasoning for self-adaptive systems, where ethical preferences are treated as runtime requirements that must be elicited, represented, and continuously revised as stakeholders and situations change. We argue that satisfying such requirements demands explicit ethics-based negotiation to manage ethical trade-offs among multiple humans who interact with, are represented by, or are affected by a system. We identify key challenges: ethical uncertainty, conflicts among ethical values (including human, societal, and environmental drivers), and multi-dimensional, multi-party, multi-driver negotiation. We then outline research directions and questions toward ethically self-adaptive systems.
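The abstract's core idea, soft ethical preferences negotiated at runtime inside a fixed hard-ethics envelope, can be sketched in a few lines of Python. This is an illustrative sketch only, not an implementation from the paper: the names (`Stakeholder`, `hard_envelope`, `negotiate`), the example actions, and the weighted-sum aggregation rule are all assumptions made for this example; the paper itself leaves the negotiation mechanism as an open research question.

```python
# Illustrative sketch (not from the paper): ethical preferences treated as
# runtime requirements, negotiated only within a hard-ethics envelope.
from dataclasses import dataclass, field

@dataclass
class Stakeholder:
    name: str
    # Per-action preference scores; revisable at runtime as context changes.
    preferences: dict = field(default_factory=dict)
    weight: float = 1.0

def hard_envelope(action: str, context: dict) -> bool:
    """Hard ethics: legally/regulatorily mandated constraints that no
    amount of preference negotiation may override (e.g. safety limits)."""
    return action not in context.get("forbidden", set())

def negotiate(actions, stakeholders, context):
    """Pick the admissible action maximizing the weighted sum of
    stakeholder preferences (one simple aggregation rule among many)."""
    admissible = [a for a in actions if hard_envelope(a, context)]
    if not admissible:
        return None  # ethical deadlock: escalate to a human
    return max(admissible,
               key=lambda a: sum(s.weight * s.preferences.get(a, 0.0)
                                 for s in stakeholders))

# Two stakeholders with conflicting preferences; "speed_up" lies outside
# the hard envelope, so it is excluded before negotiation even starts.
patient = Stakeholder("patient", {"notify_family": 1.0, "stay_silent": 0.2})
caregiver = Stakeholder("caregiver", {"notify_family": 0.3, "stay_silent": 0.9})
choice = negotiate(["notify_family", "stay_silent", "speed_up"],
                   [patient, caregiver],
                   {"forbidden": {"speed_up"}})
```

The two-stage structure (hard filter first, soft aggregation second) mirrors the abstract's distinction between the compliance envelope and revisable stakeholder preferences; a real system would replace the weighted sum with an explicit multi-party negotiation protocol.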
Related papers
- Operationalizing Human Values in the Requirements Engineering Process of Ethics-Aware Autonomous Systems [3.535946769391712]
We propose a requirements engineering approach for ethics-aware autonomous systems. We capture human values as normative goals and align them with functional and adaptation goals. We demonstrate the feasibility of the approach through a medical Body Sensor Network case study.
arXiv Detail & Related papers (2026-02-10T15:54:25Z)
- Mirror: A Multi-Agent System for AI-Assisted Ethics Review [104.3684024153469]
Mirror is an agentic framework for AI-assisted ethical review. It integrates ethical reasoning, structured rule interpretation, and multi-agent deliberation within a unified architecture.
arXiv Detail & Related papers (2026-02-09T03:38:55Z)
- Fuzzy Representation of Norms [1.0098114696565863]
This paper proposes a logical representation of SLEEC rules and presents a methodology to embed these ethical requirements using test-score semantics and fuzzy logic. The use of fuzzy logic is motivated by the view of ethics as a domain of possibilities, which allows the resolution of ethical dilemmas that AI systems may encounter.
arXiv Detail & Related papers (2026-01-06T12:51:18Z)
- Principles and Reasons Behind Automated Vehicle Decisions in Ethically Ambiguous Everyday Scenarios [4.244307111313931]
We propose a principled conceptual framework for AV decision-making in routine, ethically ambiguous scenarios. The framework supports dynamic, human-aligned behaviour by prioritising safety, allowing pragmatic actions when strict legal adherence would undermine key values.
arXiv Detail & Related papers (2025-07-18T11:52:33Z)
- Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground [51.85131234265026]
I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms. I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
arXiv Detail & Related papers (2024-12-06T15:36:13Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Informed AI Regulation: Comparing the Ethical Frameworks of Leading LLM Chatbots Using an Ethics-Based Audit to Assess Moral Reasoning and Normative Values [0.0]
Ethics-based audits play a pivotal role in the rapidly growing fields of AI safety and regulation.
This paper undertakes an ethics-based audit to probe eight leading commercial and open-source Large Language Models, including GPT-4.
arXiv Detail & Related papers (2024-01-09T14:57:30Z)
- Unpacking the Ethical Value Alignment in Big Models [46.560886177083084]
This paper provides an overview of the risks and challenges associated with big models, surveys existing AI ethics guidelines, and examines the ethical implications arising from the limitations of these models.
We introduce a novel conceptual paradigm for aligning the ethical values of big models and discuss promising research directions for alignment criteria, evaluation, and methods.
arXiv Detail & Related papers (2023-10-26T16:45:40Z)
- Ethics in conversation: Building an ethics assurance case for autonomous AI-enabled voice agents in healthcare [1.8964739087256175]
The principles-based ethics assurance argument pattern is one proposal in the AI ethics landscape.
This paper presents the interim findings of a case study applying this ethics assurance framework to the use of Dora, an AI-based telemedicine system.
arXiv Detail & Related papers (2023-05-23T16:04:59Z)
- Macro Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions [1.864621482724548]
We develop a taxonomy of 21 normative ethical principles which can be operationalised in AI.
We envision this taxonomy will facilitate the development of methodologies to incorporate normative ethical principles in reasoning capacities of responsible AI systems.
arXiv Detail & Related papers (2022-08-12T08:48:16Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes [72.64975113835018]
Motivated by descriptive ethics, we investigate a novel, data-driven approach to machine ethics.
We introduce Scruples, the first large-scale dataset with 625,000 ethical judgments over 32,000 real-life anecdotes.
Our dataset presents a major challenge to state-of-the-art neural language models, leaving significant room for improvement.
arXiv Detail & Related papers (2020-08-20T17:34:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.