Cultural Dimensions of Artificial Intelligence Adoption: Empirical Insights for Wave 1 from a Multinational Longitudinal Pilot Study
- URL: http://arxiv.org/abs/2510.19743v1
- Date: Wed, 22 Oct 2025 16:31:28 GMT
- Title: Cultural Dimensions of Artificial Intelligence Adoption: Empirical Insights for Wave 1 from a Multinational Longitudinal Pilot Study
- Authors: Michelle J. Cummings-Koether, Franziska Durner, Theophile Shyiramunda, Matthias Huemmer
- Abstract summary: The swift diffusion of artificial intelligence (AI) raises critical questions about how cultural contexts shape adoption patterns and their consequences for human daily life. This study investigates the cultural dimensions of AI adoption and their influence on cognitive strategies across nine national contexts in Europe, Africa, Asia, and South America. Results reveal two key findings: First, cultural factors, particularly language and age, significantly affect AI adoption and perceptions of reliability, with older participants reporting higher engagement with AI for educational purposes. Second, ethical judgment about AI use varied across domains, with professional contexts normalizing its role as a pragmatic collaborator while academic settings emphasized risks of plagiarism.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The swift diffusion of artificial intelligence (AI) raises critical questions about how cultural contexts shape adoption patterns and their consequences for human daily life. This study investigates the cultural dimensions of AI adoption and their influence on cognitive strategies across nine national contexts in Europe, Africa, Asia, and South America. Drawing on survey data from a diverse pilot sample (n = 21) and guided by cross-cultural psychology, digital ethics, and sociotechnical systems theory, we examine how demographic variables (age, gender, professional role) and cultural orientations (language, values, and institutional exposure) mediate perceptions of trust, ethical acceptability, and reliance on AI. Results reveal two key findings: First, cultural factors, particularly language and age, significantly affect AI adoption and perceptions of reliability, with older participants reporting higher engagement with AI for educational purposes. Second, ethical judgment about AI use varied across domains, with professional contexts normalizing its role as a pragmatic collaborator while academic settings emphasized risks of plagiarism. These findings extend prior research on culture and technology adoption by demonstrating that AI use is neither universal nor neutral but culturally contingent, domain-specific, and ethically situated. The study highlights implications for AI use in education, professional practice, and global technology policy, pointing to actions that enable use of AI in a way that is both culturally adaptive and ethically robust.
Related papers
- AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA) [45.88028371034407]
This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement. We discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness. We conclude that while there are very significant technical hurdles to real human enhancement through AI, and significant ethical problems, there are also significant benefits that may realistically be achieved.
arXiv Detail & Related papers (2025-08-18T09:39:11Z) - Between Regulation and Accessibility: How Chinese University Students Navigate Global and Domestic Generative AI [13.558080205608668]
This study investigates how Chinese university students interact with both global and domestic generative AI in the learning process. Findings reveal that engagement is shaped by accessibility, language proficiency, and cultural relevance. It advocates for human-centered, multilingual, domestic context-sensitive AI integration to ensure equitable and inclusive digital learning environments.
arXiv Detail & Related papers (2025-06-17T10:24:12Z) - Aligning Generalisation Between Humans and Machines [74.120848518198]
AI technology can support humans in scientific discovery and decision-making, but may also disrupt democracies and target individuals. The responsible use of AI and its participation in human-AI teams increasingly shows the need for AI alignment. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z) - Perceptions of Discriminatory Decisions of Artificial Intelligence: Unpacking the Role of Individual Characteristics [0.0]
Personal differences (digital self-efficacy, technical knowledge, belief in equality, political ideology) are associated with perceptions of AI outcomes.
Digital self-efficacy and technical knowledge are positively associated with attitudes toward AI.
Liberal ideologies are associated with lower outcome trust, higher negative emotion, and greater skepticism.
arXiv Detail & Related papers (2024-10-17T06:18:26Z) - The Honorific Effect: Exploring the Impact of Japanese Linguistic Formalities on AI-Generated Physics Explanations [0.0]
This study investigates the influence of Japanese honorifics on the responses of large language models (LLMs) when explaining the law of conservation of momentum.
We analyzed the outputs of six state-of-the-art AI models, including variations of ChatGPT, Coral, and Gemini, using 14 different honorific forms.
arXiv Detail & Related papers (2024-07-12T11:31:00Z) - Position: Towards Bidirectional Human-AI Alignment [109.57781720848669]
We argue that the research community should explicitly define and critically reflect on "alignment" to account for the bidirectional and dynamic relationship between humans and AI. We introduce the Bidirectional Human-AI Alignment framework, which not only incorporates traditional efforts to align AI with human values but also introduces the critical, underexplored dimension of aligning humans with AI.
arXiv Detail & Related papers (2024-06-13T16:03:25Z) - How Culture Shapes What People Want From AI [0.0]
There is an urgent need to incorporate the perspectives of culturally diverse groups into AI developments.
We present a novel conceptual framework for research that aims to expand, reimagine, and reground mainstream visions of AI.
arXiv Detail & Related papers (2024-03-08T07:08:19Z) - Cultural Incongruencies in Artificial Intelligence [5.817158625734485]
We describe a set of cultural dependencies and incongruencies in the context of AI-based language and vision technologies.
Problems arise when these technologies interact with globally diverse societies and cultures, with different values and interpretive practices.
arXiv Detail & Related papers (2022-11-19T18:45:02Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.