Learning from Learning Machines: Optimisation, Rules, and Social Norms
- URL: http://arxiv.org/abs/2001.00006v1
- Date: Sun, 29 Dec 2019 17:42:06 GMT
- Title: Learning from Learning Machines: Optimisation, Rules, and Social Norms
- Authors: Travis LaCroix and Yoshua Bengio
- Abstract summary: It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
- Score: 91.3755431537592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is an analogy between machine learning systems and economic entities in
that they are both adaptive, and their behaviour is specified in a more-or-less
explicit way. It appears that the area of AI that is most analogous to the
behaviour of economic entities is that of morally good decision-making, but it
is an open question as to how precisely moral behaviour can be achieved in an
AI system. This paper explores the analogy between these two complex systems,
and we suggest that a clearer understanding of this apparent analogy may move
us forward in both the socio-economic domain and the AI domain: known results
in economics may help inform feasible solutions in AI safety, but also known
results in AI may inform economic policy. If this claim is correct, then the
recent successes of deep learning for AI suggest that more implicit
specifications work better than explicit ones for solving such problems.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems [0.0]
There still exists a gap between principles and practices in AI ethics.
One major obstacle organisations face when attempting to operationalise AI Ethics is the lack of a well-defined material scope.
arXiv Detail & Related papers (2024-07-07T12:16:01Z)
- Artificial intelligence, rationalization, and the limits of control in the public sector: the case of tax policy optimization [0.0]
We show how many of the criticisms directed towards AI systems spring from well-known tensions at the heart of Weberian rationalization.
Our analysis shows that building a machine-like tax system that promotes social and economic equality is possible.
It also highlights that AI driven policy optimization comes at the exclusion of other competing political values.
arXiv Detail & Related papers (2024-07-07T11:54:14Z)
- Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination [1.4305544869388402]
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making.
The integration of KGs with neural learning is currently a topic of active research.
This paper conceptualises the foundational topics and research pillars to support KG-based AI for self-determination.
arXiv Detail & Related papers (2023-10-30T12:51:52Z)
- On the Computational Complexity of Ethics: Moral Tractability for Minds and Machines [0.0]
Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities.
This paper explores what kind of moral machines are possible based on what computational systems can or cannot do.
arXiv Detail & Related papers (2023-02-08T17:39:58Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Relational Artificial Intelligence [5.5586788751870175]
Even though AI is traditionally associated with rational decision-making, understanding and shaping the societal impact of AI in all its facets requires a relational perspective.
A rational approach to AI, where computational algorithms drive decision-making independent of human intervention, has been shown to result in bias and exclusion.
A relational approach, one that focuses on the relational nature of things, is needed to deal with the ethical, legal, societal, cultural, and environmental implications of AI.
arXiv Detail & Related papers (2022-02-04T15:29:57Z)
- The AI Economist: Optimal Economic Policy Design via Two-level Deep Reinforcement Learning [126.37520136341094]
We show that machine-learning-based economic simulation is a powerful policy and mechanism design framework.
The AI Economist is a two-level, deep RL framework that trains both agents and a social planner who co-adapt.
In simple one-step economies, the AI Economist recovers the optimal tax policy of economic theory.
arXiv Detail & Related papers (2021-08-05T17:42:35Z)
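The two-level structure described in the AI Economist entry above can be sketched as a minimal co-adaptation loop. This is an illustrative toy only: the closed-form labour response, the welfare function, and the hill-climbing planner are all assumptions made for the sketch, not the paper's actual deep-RL implementation.

```python
import random

# Toy two-level loop: worker agents best-respond to a flat tax rate, while a
# "planner" hill-climbs the rate to trade total output against inequality.
# The real AI Economist trains both levels with deep RL in a richer economy.

def labour_choice(skill, tax_rate):
    """Inner level: labour that maximises after-tax income s*l*(1-t)
    minus a quadratic effort cost l**2 (closed-form argmax)."""
    return max(0.0, skill * (1.0 - tax_rate) / 2.0)

def social_welfare(incomes):
    """Total income penalised by inequality (income range as a crude proxy)."""
    return sum(incomes) - (max(incomes) - min(incomes))

def post_tax_incomes(skills, tax):
    labour = [labour_choice(s, tax) for s in skills]
    earnings = [s * l for s, l in zip(skills, labour)]
    rebate = tax * sum(earnings) / len(skills)  # revenue redistributed equally
    return [(1.0 - tax) * e + rebate for e in earnings]

def train_planner(skills, steps=500, step_size=0.01, seed=0):
    """Outer level: accept a perturbed tax rate whenever welfare improves,
    while the agents re-adapt to each candidate rate (co-adaptation)."""
    rng = random.Random(seed)
    tax = 0.5
    for _ in range(steps):
        candidate = min(0.99, max(0.0, tax + rng.uniform(-step_size, step_size)))
        if social_welfare(post_tax_incomes(skills, candidate)) >= \
           social_welfare(post_tax_incomes(skills, tax)):
            tax = candidate
    return tax

rate = train_planner([1.0, 2.0, 4.0])
print(f"tax rate after co-adaptation: {rate:.2f}")
```

Because the agents' best responses shrink with the tax rate while redistribution reduces inequality, the planner settles at an interior rate rather than 0 or 1, mirroring the paper's point that the planner and agents must co-adapt.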
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.