Cultural Incongruencies in Artificial Intelligence
- URL: http://arxiv.org/abs/2211.13069v1
- Date: Sat, 19 Nov 2022 18:45:02 GMT
- Title: Cultural Incongruencies in Artificial Intelligence
- Authors: Vinodkumar Prabhakaran, Rida Qadri, Ben Hutchinson
- Abstract summary: We describe a set of cultural dependencies and incongruencies in the context of AI-based language and vision technologies.
Problems arise when these technologies interact with globally diverse societies and cultures, with different values and interpretive practices.
- Score: 5.817158625734485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) systems attempt to imitate human behavior. How
well they do this imitation is often used to assess their utility and to
attribute human-like (or artificial) intelligence to them. However, most work
on AI refers to and relies on human intelligence without accounting for the
fact that human behavior is inherently shaped by the cultural contexts they are
embedded in, the values and beliefs they hold, and the social practices they
follow. Additionally, since AI technologies are mostly conceived and developed
in just a handful of countries, they embed the cultural values and practices of
these countries. Similarly, the data that is used to train the models also
fails to equitably represent global cultural diversity. Problems therefore
arise when these technologies interact with globally diverse societies and
cultures, with different values and interpretive practices. In this position
paper, we describe a set of cultural dependencies and incongruencies in the
context of AI-based language and vision technologies, and reflect on the
possibilities of and potential strategies towards addressing these
incongruencies.
Related papers
- Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals.
The responsible use of AI increasingly shows the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z) - Generative AI, Pragmatics, and Authenticity in Second Language Learning [0.0]
There are obvious benefits to integrating generative AI (artificial intelligence) into language learning and teaching.
However, due to how AI systems under-stand human language, they lack the lived experience to be able to use language with the same social awareness as humans.
There are built-in linguistic and cultural biases based on their training data which is mostly in English and predominantly from Western sources.
arXiv Detail & Related papers (2024-10-18T11:58:03Z) - Modelling Human Values for AI Reasoning [2.320648715016106]
We detail a formal model of human values for their explicit computational representation.
We show how this model can provide the foundational apparatus for AI-based reasoning over values.
We propose a roadmap for future integrated, and interdisciplinary, research into human values in AI.
arXiv Detail & Related papers (2024-02-09T12:08:49Z) - Culturally-Attuned Moral Machines: Implicit Learning of Human Value
Systems by AI through Inverse Reinforcement Learning [11.948092546676687]
We argue that the value system of an AI should be culturally attuned.
How AI systems might acquire such codes from human observation and interaction has remained an open question.
We show that an AI agent learning from the average behavior of a particular cultural group can acquire altruistic characteristics reflective of that group's behavior.
arXiv Detail & Related papers (2023-12-29T05:39:10Z) - Towards a Praxis for Intercultural Ethics in Explainable AI [1.90365714903665]
This article introduces the concept of an intercultural ethics approach to AI explainability.
It examines how cultural nuances impact the adoption and use of technology, the factors that impede how technical concepts such as AI are explained, and how integrating an intercultural ethics approach in the development of XAI can improve user understanding and facilitate efficient usage of these methods.
arXiv Detail & Related papers (2023-04-24T07:15:58Z) - An Analytics of Culture: Modeling Subjectivity, Scalability,
Contextuality, and Temporality [13.638494941763637]
There is a bidirectional relationship between culture and AI; AI models are increasingly used to analyse culture, thereby shaping our understanding of culture.
On the other hand, the models are trained on collections of cultural artifacts thereby implicitly, and not always correctly, encoding expressions of culture.
This creates a tension that both limits the use of AI for analysing culture and leads to problems in AI with respect to cultural complex issues such as bias.
arXiv Detail & Related papers (2022-11-14T15:42:27Z) - Learning Robust Real-Time Cultural Transmission without Human Data [82.05222093231566]
We provide a method for generating zero-shot, high recall cultural transmission in artificially intelligent agents.
Our agents succeed at real-time cultural transmission from humans in novel contexts without using any pre-collected human data.
This paves the way for cultural evolution as an algorithm for developing artificial general intelligence.
arXiv Detail & Related papers (2022-03-01T19:32:27Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Towards Abstract Relational Learning in Human Robot Interaction [73.67226556788498]
Humans have a rich representation of the entities in their environment.
If robots need to interact successfully with humans, they need to represent entities, attributes, and generalizations in a similar way.
In this work, we address the problem of how to obtain these representations through human-robot interaction.
arXiv Detail & Related papers (2020-11-20T12:06:46Z) - The Short Anthropological Guide to the Study of Ethical AI [91.3755431537592]
Short guide serves as both an introduction to AI ethics and social science and anthropological perspectives on the development of AI.
Aims to provide those unfamiliar with the field with an insight into the societal impact of AI systems and how, in turn, these systems can lead us to rethink how our world operates.
arXiv Detail & Related papers (2020-10-07T12:25:03Z) - Aligning AI With Shared Human Values [85.2824609130584]
We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values.
arXiv Detail & Related papers (2020-08-05T17:59:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.