Operationalising Rawlsian Ethics for Fairness in Norm-Learning Agents
- URL: http://arxiv.org/abs/2412.15163v1
- Date: Thu, 19 Dec 2024 18:38:13 GMT
- Title: Operationalising Rawlsian Ethics for Fairness in Norm-Learning Agents
- Authors: Jessica Woodgate, Paul Marshall, Nirav Ajmeri
- Abstract summary: We present RAWL-E, a method to create ethical norm-learning agents.
We find that norms emerging in RAWL-E agent societies enhance social welfare, fairness, and robustness.
- Score: 4.891538364735141
- Abstract: Social norms are standards of behaviour common in a society. However, when agents make decisions without considering how others are impacted, norms can emerge that lead to the subjugation of certain agents. We present RAWL-E, a method to create ethical norm-learning agents. RAWL-E agents operationalise maximin, a fairness principle from Rawlsian ethics, in their decision-making processes to promote ethical norms by balancing societal well-being with individual goals. We evaluate RAWL-E agents in simulated harvesting scenarios. We find that norms emerging in RAWL-E agent societies enhance social welfare, fairness, and robustness, and yield higher minimum experience compared to those that emerge in agent societies that do not implement Rawlsian ethics.
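The abstract gives no implementation detail, so the following is a minimal sketch of how an agent might operationalise the maximin principle in action selection: choose the action that maximises the estimated payoff of the worst-off agent. All names and numbers are illustrative, not taken from RAWL-E.

```python
# Minimal sketch (not the authors' code) of maximin action selection:
# each candidate action maps to estimated payoffs, one per agent.

def maximin_choice(payoffs: dict[str, list[float]]) -> str:
    """Pick the action whose worst-off agent fares best (Rawlsian maximin)."""
    return max(payoffs, key=lambda action: min(payoffs[action]))

# Toy harvesting example: hoarding maximises one agent's payoff but leaves
# the worst-off agent with 0; sharing raises the minimum, so it is chosen.
payoffs = {
    "hoard": [9.0, 0.0, 1.0],
    "share": [4.0, 2.0, 3.0],
}
assert maximin_choice(payoffs) == "share"
```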
Related papers
- Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground [55.2480439325792]
I argue that efforts to promote responsible and ethical AI can inadvertently contribute to, and seemingly legitimize, a disregard for established contextual norms.
I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
arXiv Detail & Related papers (2024-12-06T15:36:13Z)
- Learning and Sustaining Shared Normative Systems via Bayesian Rule Induction in Markov Games [2.307051163951559]
We build learning agents that cooperate flexibly with the human institutions they are embedded in.
By assuming shared norms, a newly introduced agent can infer the norms of an existing population from observations of compliance and violation.
Because agents can bootstrap common knowledge of the norms, the norms become widely adhered to, enabling new entrants to learn them rapidly.
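As a hedged illustration of that inference step (the likelihood values and names below are assumptions, not the paper's model), a newcomer can treat each candidate rule as a Bernoulli hypothesis and apply Bayes' rule to observed compliance and violation:

```python
# Hypothetical sketch: update the belief that "rule R is a shared norm"
# from observations of compliance (True) or violation (False) of R.

def norm_belief(prior: float, observations: list[bool],
                p_comply_if_norm: float = 0.9,
                p_comply_if_no_norm: float = 0.4) -> float:
    """Posterior P(rule is a norm | observations) via Bayes' rule."""
    p_norm, p_not = prior, 1.0 - prior
    for complied in observations:
        p_norm *= p_comply_if_norm if complied else 1.0 - p_comply_if_norm
        p_not *= p_comply_if_no_norm if complied else 1.0 - p_comply_if_no_norm
    return p_norm / (p_norm + p_not)

# Mostly-compliant observations push the newcomer toward adopting the rule.
print(round(norm_belief(0.5, [True, True, True, True, False]), 2))  # ≈ 0.81
```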
arXiv Detail & Related papers (2024-02-20T21:58:40Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Agent Alignment in Evolving Social Norms [65.45423591744434]
We propose an evolutionary framework for agent evolution and alignment, named EvolutionaryAgent.
In an environment where social norms continuously evolve, agents better adapted to the current social norms will have a higher probability of survival and proliferation.
We show that EvolutionaryAgent can align progressively better with the evolving social norms while maintaining its proficiency in general tasks.
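A minimal sketch of that selection dynamic, assuming a scalar behavioural trait and fitness-proportional resampling (illustrative only; the paper's framework is richer):

```python
import random

# Agents whose behaviour is closer to the current social norm score higher
# fitness; resampling by fitness models "higher probability of survival
# and proliferation".

def next_generation(traits: list[float], norm: float, size: int) -> list[float]:
    fitness = [1.0 / (1.0 + abs(t - norm)) for t in traits]
    return random.choices(traits, weights=fitness, k=size)

population = [0.1, 0.5, 0.8]
# As the norm drifts, selection gradually pulls the population toward it.
for current_norm in (0.2, 0.5, 0.9):
    population = next_generation(population, current_norm, size=3)
print(population)
```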
arXiv Detail & Related papers (2024-01-09T15:44:44Z)
- Value Engineering for Autonomous Agents [3.6130723421895947]
Previous approaches have treated values as labels associated with some actions or states of the world, rather than as integral components of agent reasoning.
We propose a new AMA paradigm grounded in moral and social psychology, where values are instilled into agents as context-dependent goals.
We argue that this type of normative reasoning, where agents are endowed with an understanding of norms' moral implications, leads to value-awareness in autonomous agents.
arXiv Detail & Related papers (2023-02-17T08:52:15Z)
- Socially Intelligent Genetic Agents for the Emergence of Explicit Norms [0.0]
We address the emergence of explicit norms by developing agents who provide and reason about explanations for norm violations.
These agents use a genetic algorithm to produce norms and reinforcement learning to learn the values of these norms.
We find that applying explanations leads to norms that provide better cohesion and goal satisfaction for the agents.
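A hedged sketch of the described loop, with bitstring-encoded norms, a running-average value estimate standing in for reinforcement learning, and learned value as genetic fitness (all names are illustrative, not the paper's code):

```python
import random

def update_value(values: dict[str, float], norm: str, reward: float,
                 alpha: float = 0.1) -> None:
    """Running estimate of a norm's value from observed reward."""
    old = values.get(norm, 0.0)
    values[norm] = old + alpha * (reward - old)

def mutate(norm: str) -> str:
    """Flip one bit of a bitstring-encoded norm."""
    i = random.randrange(len(norm))
    return norm[:i] + ("0" if norm[i] == "1" else "1") + norm[i + 1:]

def evolve(norms: list[str], values: dict[str, float]) -> list[str]:
    """Keep the fitter half by learned value; refill by mutation."""
    ranked = sorted(norms, key=lambda n: values.get(n, 0.0), reverse=True)
    survivors = ranked[: len(ranked) // 2]
    return survivors + [mutate(n) for n in survivors]
```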
arXiv Detail & Related papers (2022-08-07T18:48:48Z)
- Aligning to Social Norms and Values in Interactive Narratives [89.82264844526333]
We focus on creating agents that act in alignment with socially beneficial norms and values in interactive narratives or text-based games.
We introduce the GALAD agent that uses the social commonsense knowledge present in specially trained language models to contextually restrict its action space to only those actions that are aligned with socially beneficial values.
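As an illustration of that action-space restriction (the scorer below is a hypothetical stand-in for GALAD's trained language-model critic):

```python
from typing import Callable

def aligned_actions(candidates: list[str], score: Callable[[str], float],
                    threshold: float = 0.5) -> list[str]:
    """Keep only candidate actions scored as socially beneficial."""
    return [a for a in candidates if score(a) >= threshold]

def toy_score(action: str) -> float:
    """Toy critic: flags obviously harmful verbs; a real critic is a model."""
    return 0.0 if any(w in action for w in ("steal", "attack")) else 1.0

print(aligned_actions(["open door", "steal coin", "greet guard"], toy_score))
# -> ['open door', 'greet guard']
```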
arXiv Detail & Related papers (2022-05-04T09:54:33Z)
- Normative Disagreement as a Challenge for Cooperative AI [56.34005280792013]
We argue that typical cooperation-inducing learning algorithms fail to cooperate in bargaining problems.
We develop a class of norm-adaptive policies and show in experiments that these significantly increase cooperation.
arXiv Detail & Related papers (2021-11-27T11:37:42Z)
- Noe: Norms Emergence and Robustness Based on Emotions in Multiagent Systems [0.0]
This paper investigates how modeling emotions affects the emergence and robustness of social norms via social simulation experiments.
We find that an agent's ability to consider emotional responses to the outcomes of norm satisfaction and violation promotes norm compliance.
arXiv Detail & Related papers (2021-04-30T14:42:22Z)
- Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences [36.884156839960184]
We investigate whether contemporary NLG models can function as behavioral priors for systems deployed in social settings.
We introduce 'Moral Stories', a crowd-sourced dataset of structured, branching narratives for the study of grounded, goal-oriented social reasoning.
arXiv Detail & Related papers (2020-12-31T17:28:01Z)
- On the Morality of Artificial Intelligence [154.69452301122175]
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.