Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse
- URL: http://arxiv.org/abs/2401.13142v4
- Date: Thu, 25 Jul 2024 19:28:46 GMT
- Title: Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse
- Authors: Borhane Blili-Hamelin, Leif Hancox-Li, Andrew Smart
- Abstract summary: We argue that the meaning of human-level AI or artificial general intelligence (AGI) remains elusive and contested.
We provide a taxonomy of AGI definitions, laying the ground for examining the key social, political, and ethical assumptions they make.
We propose contextual, democratic, and participatory paths to imagining future forms of machine intelligence.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dreams of machines rivaling human intelligence have shaped the field of AI since its inception. Yet, the very meaning of human-level AI or artificial general intelligence (AGI) remains elusive and contested. Definitions of AGI embrace a diverse range of incompatible values and assumptions. Contending with the fractured worldviews of AGI discourse is vital for critiques that pursue different values and futures. To that end, we provide a taxonomy of AGI definitions, laying the ground for examining the key social, political, and ethical assumptions they make. We highlight instances in which these definitions frame AGI or human-level AI as a technical topic and expose the value-laden choices being implicitly made. Drawing on feminist, STS, and social science scholarship on the political and social character of intelligence in both humans and machines, we propose contextual, democratic, and participatory paths to imagining future forms of machine intelligence. The development of future forms of AI must involve explicit attention to the values it encodes, the people it includes or excludes, and a commitment to epistemic justice.
Related papers
- A.I. go by many names: towards a sociotechnical definition of artificial intelligence [0.0]
Defining artificial intelligence (AI) is a persistent challenge, often muddied by technical ambiguity and varying interpretations.
This essay makes a case for a sociotechnical definition of AI, which is essential for researchers who require clarity in their work.
arXiv Detail & Related papers (2024-10-17T11:25:50Z)
- Rolling in the deep of cognitive and AI biases [1.556153237434314]
We argue that there is an urgent need to understand AI as a sociotechnical system, inseparable from the conditions in which it is designed, developed and deployed.
We address this critical issue by following a radical new methodology under which human cognitive biases become core entities in our AI fairness overview.
We introduce a new mapping that relates human cognitive biases to AI biases, and we detect relevant fairness intensities and inter-dependencies.
arXiv Detail & Related papers (2024-07-30T21:34:04Z)
- Artificial General Intelligence (AGI)-Native Wireless Systems: A Journey Beyond 6G [58.440115433585824]
Building future wireless systems that support services like digital twins (DTs) is challenging to achieve through advances in conventional technologies like meta-surfaces.
While artificial intelligence (AI)-native networks promise to overcome some limitations of wireless technologies, developments still rely on AI tools like neural networks.
This paper revisits the concept of AI-native wireless systems, equipping them with the common sense necessary to transform them into artificial general intelligence (AGI)-native systems.
arXiv Detail & Related papers (2024-04-29T04:51:05Z)
- When Brain-inspired AI Meets AGI [40.96159978312796]
We provide a comprehensive overview of brain-inspired AI from the perspective of Artificial General Intelligence.
We begin with the current progress in brain-inspired AI and its extensive connection with AGI.
We then cover the important characteristics for both human intelligence and AGI.
arXiv Detail & Related papers (2023-03-28T12:46:38Z)
- Recognition of All Categories of Entities by AI [20.220102335024706]
This paper presents a new argumentative option to view the ontological sextet as a comprehensive technological map.
We predict that in the relatively near future, AI will be able to recognize various entities to the same degree as humans.
arXiv Detail & Related papers (2022-08-13T08:00:42Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help bridge these stakeholder perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- The Short Anthropological Guide to the Study of Ethical AI [91.3755431537592]
This short guide serves as both an introduction to AI ethics and to social science and anthropological perspectives on the development of AI.
It aims to provide those unfamiliar with the field with insight into the societal impact of AI systems and how, in turn, these systems can lead us to rethink how our world operates.
arXiv Detail & Related papers (2020-10-07T12:25:03Z)
- Aligning AI With Shared Human Values [85.2824609130584]
We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values.
arXiv Detail & Related papers (2020-08-05T17:59:16Z)
- Human $\neq$ AGI [1.370633147306388]
Artificial General Intelligence (AGI) and Human-Level Artificial Intelligence (HLAI) have been used interchangeably to refer to the Holy Grail of Artificial Intelligence research.
This paper argues that the implicit assumption of equivalence between the capabilities of AGI and HLAI appears to be unjustified, as humans are not general intelligences.
arXiv Detail & Related papers (2020-07-11T14:06:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.