What You Use is What You Get: Unforced Errors in Studying Cultural Aspects in Agile Software Development
- URL: http://arxiv.org/abs/2404.17009v1
- Date: Thu, 25 Apr 2024 20:08:37 GMT
- Title: What You Use is What You Get: Unforced Errors in Studying Cultural Aspects in Agile Software Development
- Authors: Michael Neumann, Klaus Schmid, Lars Baumann
- Abstract summary: Investigating the influence of cultural characteristics is challenging due to the multi-faceted concept of culture.
Cultural and social aspects are of high importance for the successful use of agile methods in practice.
- Score: 2.9418191027447906
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Context: Cultural aspects are of high importance as they guide people's behaviour and thus influence how people apply methods and act in projects. In recent years, software engineering research has emphasized the need to analyze the challenges of specific cultural characteristics. Investigating the influence of cultural characteristics is challenging due to the multi-faceted concept of culture. People's behaviour, their beliefs, and underlying values are shaped by different layers of culture, e.g., regions, organizations, or groups. In this study, we focus on agile methods, i.e., approaches built on underlying values, collaboration, and communication. Cultural and social aspects are therefore of high importance for their successful use in practice. Objective: In this paper, we address challenges that arise when using Hofstede's model of cultural dimensions to characterize specific cultural values. This model is often used when discussing cultural influences in software engineering. Method: As a basis, we conducted an exploratory multiple-case study, consisting of two cases in Japan and two in Germany. Contributions: In this study, we observed that the cultural characteristics of the participants differed significantly from those typically expected for people from the respective country. This drives our conclusion that studies in empirical software engineering that address cultural factors need a case-specific analysis of the participants' characteristics.
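The paper's central methodological point, that cultural characteristics measured in a concrete case can diverge sharply from country-level expectations, can be made concrete with a small sketch. The following Python snippet is a minimal illustration under assumptions of our own (all scores and the gap metric are hypothetical and not part of the study's method): it compares a team's surveyed Hofstede profile against a national baseline, which is the comparison the authors argue must be done per case rather than inferred from nationality.

```python
# Illustrative only: comparing a team's measured cultural profile against
# a country-level Hofstede baseline. Hofstede's model scores six dimensions
# (power distance, individualism, masculinity, uncertainty avoidance,
# long-term orientation, indulgence) on a roughly 0-100 scale per country.
# All numbers below are hypothetical; the paper does not prescribe this
# computation.

HOFSTEDE_DIMENSIONS = ("PDI", "IDV", "MAS", "UAI", "LTO", "IVR")

# Hypothetical country-level baseline (0-100 per dimension).
country_baseline = {"PDI": 54, "IDV": 46, "MAS": 95, "UAI": 92, "LTO": 88, "IVR": 42}

# Hypothetical survey results for one concrete team in that country.
team_profile = {"PDI": 30, "IDV": 70, "MAS": 55, "UAI": 60, "LTO": 75, "IVR": 58}

def dimension_gaps(team: dict, baseline: dict) -> dict:
    """Per-dimension difference between the team and the national baseline."""
    return {d: team[d] - baseline[d] for d in HOFSTEDE_DIMENSIONS}

def mean_absolute_gap(team: dict, baseline: dict) -> float:
    """One-number summary of how far the team departs from expectations."""
    gaps = dimension_gaps(team, baseline)
    return sum(abs(v) for v in gaps.values()) / len(gaps)

gaps = dimension_gaps(team_profile, country_baseline)
print(gaps)  # per-dimension deviations, e.g. {'PDI': -24, ...}
print(f"mean |gap|: {mean_absolute_gap(team_profile, country_baseline):.1f}")
```

A large mean gap in such a comparison is precisely the situation the study observed: the team is not well described by its country's published profile.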
Related papers
- CROPE: Evaluating In-Context Adaptation of Vision and Language Models to Culture-Specific Concepts [45.77570690529597]
We introduce CROPE, a visual question answering benchmark designed to probe the knowledge of culture-specific concepts.
Our evaluation of several state-of-the-art open Vision and Language models shows large performance disparities between culture-specific and common concepts.
Experiments with contextual knowledge indicate that models struggle to effectively utilize multimodal information and bind culture-specific concepts to their depictions.
arXiv Detail & Related papers (2024-10-20T17:31:19Z)
- Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models [4.771099208181585]
As LLMs are increasingly deployed in global applications, it is essential that users from diverse backgrounds feel respected and understood.
Cultural harm can arise when these models fail to align with specific cultural norms, resulting in misrepresentations or violations of cultural values.
We present two key contributions: a cultural harm test dataset, created to assess model outputs across different cultural contexts through scenarios that expose potential cultural insensitivities, and a culturally aligned preference dataset, aimed at restoring cultural sensitivity through fine-tuning based on feedback from diverse annotators.
arXiv Detail & Related papers (2024-10-15T18:13:10Z)
- Extrinsic Evaluation of Cultural Competence in Large Language Models [53.626808086522985]
We focus on extrinsic evaluation of cultural competence in two text generation tasks.
We evaluate model outputs when an explicit cue of culture, specifically nationality, is perturbed in the prompts.
We find weak correlations between text similarity of outputs for different countries and the cultural values of these countries.
arXiv Detail & Related papers (2024-06-17T14:03:27Z)
- CulturePark: Boosting Cross-cultural Understanding in Large Language Models [63.452948673344395]
This paper introduces CulturePark, an LLM-powered multi-agent communication framework for cultural data collection.
It generates high-quality cross-cultural dialogues encapsulating human beliefs, norms, and customs.
We evaluate these models across three downstream tasks: content moderation, cultural alignment, and cultural education.
arXiv Detail & Related papers (2024-05-24T01:49:02Z)
- Understanding the Capabilities and Limitations of Large Language Models for Cultural Commonsense [98.09670425244462]
Large language models (LLMs) have demonstrated substantial commonsense understanding.
This paper examines the capabilities and limitations of several state-of-the-art LLMs in the context of cultural commonsense tasks.
arXiv Detail & Related papers (2024-05-07T20:28:34Z)
- Massively Multi-Cultural Knowledge Acquisition & LM Benchmarking [48.21982147529661]
This paper introduces a novel approach for massively multicultural knowledge acquisition.
Our method strategically navigates from densely informative Wikipedia documents on cultural topics to an extensive network of linked pages.
Our work marks an important step towards deeper understanding and bridging the gaps of cultural disparities in AI.
arXiv Detail & Related papers (2024-02-14T18:16:54Z)
- Navigating Cultural Diversity: Barriers and Potentials in Multicultural Agile Software Development Teams [2.2044574002571182]
The aim of this study is to identify barriers and potentials that may arise in multicultural agile software development teams.
Our results suggest that the cultural characteristics at the team level need to be analyzed individually in intercultural teams.
We also derived strategies that support the potentials of cultural diversity in agile software development teams.
arXiv Detail & Related papers (2023-11-18T19:27:48Z)
- Not All Countries Celebrate Thanksgiving: On the Cultural Dominance in Large Language Models [89.94270049334479]
This paper identifies a cultural dominance issue within large language models (LLMs): when users ask questions in non-English languages, LLMs often provide English-culture-centric answers that are not relevant to the culture of the query.
arXiv Detail & Related papers (2023-10-19T05:38:23Z)
- Cultural Alignment in Large Language Models: An Explanatory Analysis Based on Hofstede's Cultural Dimensions [10.415002561977655]
This research proposes a Cultural Alignment Test (Hofstede's CAT) to quantify cultural alignment using Hofstede's cultural dimension framework.
We quantitatively evaluate large language models (LLMs) against the cultural dimensions of regions like the United States, China, and Arab countries.
Our results quantify the cultural alignment of LLMs and reveal differences between LLMs across the explanatory cultural dimensions (see the sketch after this list for a schematic version of such an alignment computation).
arXiv Detail & Related papers (2023-08-25T14:50:13Z)
- Designing Culturally Aware Learning Analytics: A Value Sensitive Perspective [0.0]
This chapter aims to stress the importance of addressing culture when designing and implementing learning analytics services.
We argue for the need to carefully consider one such factor, namely cultural values, when designing and implementing learning analytics systems.
arXiv Detail & Related papers (2022-12-19T17:23:03Z)
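As a schematic complement to the Cultural Alignment Test entry above, the sketch below shows what a first-pass Hofstede-style alignment check for an LLM could look like. It is an assumption-laden illustration, not the CAT protocol: how model answers are scored onto the six dimensions is left out entirely, and all numbers are placeholders.

```python
# Minimal sketch of a Hofstede-style alignment check for an LLM, in the
# spirit of the Cultural Alignment Test (CAT) listed above. Mapping model
# answers onto the six dimensions is out of scope here; the scores below
# are placeholders, not measured values.
from statistics import correlation  # Pearson's r (Python 3.10+)

DIMENSIONS = ("PDI", "IDV", "MAS", "UAI", "LTO", "IVR")

# Hypothetical reference scores for one region (0-100 per dimension).
region_scores = [40.0, 91.0, 62.0, 46.0, 26.0, 68.0]

# Hypothetical dimension scores derived from an LLM's survey-style answers.
llm_scores = [45.0, 80.0, 60.0, 55.0, 35.0, 60.0]

# One simple alignment measure: Pearson correlation across dimensions.
r = correlation(region_scores, llm_scores)
print(f"alignment over {len(DIMENSIONS)} dimensions: Pearson r = {r:.2f}")
```

Correlation is only one of several plausible alignment measures; a mean absolute difference across dimensions, as in the team-level sketch earlier, would serve equally well for a first pass.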
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.