Characterizing Role Models in Software Practitioners' Career: An
Interview Study
- URL: http://arxiv.org/abs/2402.09925v1
- Date: Thu, 15 Feb 2024 13:11:07 GMT
- Title: Characterizing Role Models in Software Practitioners' Career: An
Interview Study
- Authors: Mary Sánchez-Gordón, Ricardo Colomo-Palacios and Alex Sanchez Gordon
- Abstract summary: The authors study how role models influence software practitioners' careers.
Findings reveal that role models were perceived as sources of knowledge.
This study also shows that any practitioner can be viewed as a role model.
- Score: 7.76651569964928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A role model is a person who serves as an example for others to follow,
especially in terms of values, behavior, achievements, and personal
characteristics. In this paper, the authors study how role models influence
software practitioners' careers, an aspect not previously studied in the literature.
Through this study, the authors aim to understand whether there are any salient
role-model archetypes and which characteristics participants value in their role
models. To do so, the authors use a thematic coding approach to analyze the data
collected from interviews with ten Latin American software practitioners.
Findings reveal that role models were perceived as sources of knowledge, yet
the majority of participants, regardless of their career stage, displayed a
stronger interest in the human side and the moral values that their role models
embodied. This study also shows that any practitioner can be viewed as a role
model.
Related papers
- Gender Bias in Instruction-Guided Speech Synthesis Models [55.2480439325792]
This study investigates the potential gender bias in how models interpret occupation-related prompts.
We explore whether these models exhibit tendencies to amplify gender stereotypes when interpreting such prompts.
Our experimental results reveal the model's tendency to exhibit gender bias for certain occupations.
arXiv Detail & Related papers (2025-02-08T17:38:24Z) - Diversity in Software Engineering Education: Exploring Motivations, Influences, and Role Models Among Undergraduate Students [0.0]
Software engineering (SE) faces significant diversity challenges in both academia and industry.
Despite significant research on the exclusion experienced by students from underrepresented groups in SE education, there is limited understanding of the specific motivations, influences, and role models that drive underrepresented students to pursue and persist in the field.
This study explores the motivations and influences shaping the career aspirations of students from underrepresented groups in SE.
arXiv Detail & Related papers (2024-12-16T22:14:10Z) - Thinking Before Speaking: A Role-playing Model with Mindset [0.6428333375712125]
Large Language Models (LLMs) are skilled at simulating human behaviors.
These models tend to perform poorly when confronted with knowledge that the assumed role does not possess.
We propose a Thinking Before Speaking (TBS) model in this paper.
arXiv Detail & Related papers (2024-09-14T02:41:48Z) - Towards "Differential AI Psychology" and in-context Value-driven Statement Alignment with Moral Foundations Theory [0.0]
This work investigates the alignment between personalized language models and survey participants on a Moral Foundation questionnaire.
We adapt text-to-text models to different political personas and repeatedly administer the questionnaire to generate a synthetic population of persona and model combinations.
Our findings indicate that adapted models struggle to represent the survey-leading assessment of political ideologies.
arXiv Detail & Related papers (2024-08-21T08:20:41Z) - The Oscars of AI Theater: A Survey on Role-Playing with Language Models [38.68597594794648]
This survey explores the burgeoning field of role-playing with language models.
It focuses on their development from early persona-based models to advanced character-driven simulations facilitated by Large Language Models (LLMs).
We provide a comprehensive taxonomy of the critical components in designing these systems, including data, models and alignment, agent architecture and evaluation.
arXiv Detail & Related papers (2024-07-16T08:20:39Z) - Inclusivity in Large Language Models: Personality Traits and Gender Bias in Scientific Abstracts [49.97673761305336]
We evaluate three large language models (LLMs) for their alignment with human narrative styles and potential gender biases.
Our findings indicate that, while these models generally produce text closely resembling human-authored content, variations in stylistic features suggest significant gender biases.
arXiv Detail & Related papers (2024-06-27T19:26:11Z) - InCharacter: Evaluating Personality Fidelity in Role-Playing Agents through Psychological Interviews [57.04431594769461]
This paper introduces a novel perspective to evaluate the personality fidelity of RPAs with psychological scales.
Experiments include various types of RPAs and LLMs, covering 32 distinct characters on 14 widely used psychological scales.
With InCharacter, we show that state-of-the-art RPAs exhibit personalities highly aligned with the human-perceived personalities of the characters, achieving an accuracy of up to 80.7%.
arXiv Detail & Related papers (2023-10-27T08:42:18Z) - Sensitivity, Performance, Robustness: Deconstructing the Effect of
Sociodemographic Prompting [64.80538055623842]
Sociodemographic prompting is a technique that steers the output of prompt-based models towards answers that humans with specific sociodemographic profiles would give.
We show that sociodemographic information affects model predictions and can be beneficial for improving zero-shot learning in subjective NLP tasks.
arXiv Detail & Related papers (2023-09-13T15:42:06Z) - Gender Biases in Automatic Evaluation Metrics for Image Captioning [87.15170977240643]
We conduct a systematic study of gender biases in model-based evaluation metrics for image captioning tasks.
We demonstrate the negative consequences of using these biased metrics, including the inability to differentiate between biased and unbiased generations.
We present a simple and effective way to mitigate the metric bias without hurting the correlations with human judgments.
arXiv Detail & Related papers (2023-05-24T04:27:40Z) - Estimating the Personality of White-Box Language Models [0.589889361990138]
Large-scale language models, which are trained on large corpora of text, are being used in a wide range of applications everywhere.
Existing research shows that these models can and do capture human biases.
Many of these biases, especially those that could potentially cause harm, are being well-investigated.
However, studies that infer and change human personality traits inherited by these models have been scarce or non-existent.
arXiv Detail & Related papers (2022-04-25T23:53:53Z) - Automatic Main Character Recognition for Photographic Studies [78.88882860340797]
Main characters in images are the most important humans who catch the viewer's attention at first glance.
Identifying the main character in images plays an important role in traditional photographic studies and media analysis.
We propose a method for identifying the main characters using machine-learning-based human pose estimation.
arXiv Detail & Related papers (2021-06-16T18:14:45Z)