Using Personality Detection Tools for Software Engineering Research: How
Far Can We Go?
- URL: http://arxiv.org/abs/2110.05035v1
- Date: Mon, 11 Oct 2021 07:02:34 GMT
- Title: Using Personality Detection Tools for Software Engineering Research: How
Far Can We Go?
- Authors: Fabio Calefato and Filippo Lanubile
- Abstract summary: Self-assessment questionnaires are not a practical solution for collecting multiple observations on a large scale.
Off-the-shelf solutions trained on non-technical corpora might not be readily applicable to technical domains like Software Engineering.
- Score: 12.56413718364189
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Assessing the personality of software engineers may help to match individual
traits with the characteristics of development activities such as code review
and testing, as well as support managers in team composition. However,
self-assessment questionnaires are not a practical solution for collecting
multiple observations on a large scale. Instead, automatic personality
detection, while overcoming these limitations, is based on off-the-shelf
solutions trained on non-technical corpora, which might not be readily
applicable to technical domains like Software Engineering (SE). In this paper,
we first assess the performance of general-purpose personality detection tools
when applied to a technical corpus of developers' emails retrieved from the
public archives of the Apache Software Foundation. We observe a general low
accuracy of predictions and an overall disagreement among the tools. Second, we
replicate two previous research studies in SE by replacing the personality
detection tool used to infer developers' personalities from pull-request
discussions and emails. We observe that the original results are not confirmed,
i.e., changing the tool used in the original study leads to diverging
conclusions. Our results suggest a need for personality detection tools
specially targeted for the software engineering domain.
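To make the abstract's two headline measurements concrete, here is a minimal sketch (not code from the paper) of how one might quantify tool accuracy against self-assessed labels and inter-tool disagreement. The tool names, trait labels, and data below are purely illustrative; the metrics come from scikit-learn, which the paper does not necessarily use.

```python
# Minimal sketch, assuming trait-level predictions (e.g., low/medium/high)
# from two hypothetical off-the-shelf personality detection tools and
# self-assessed questionnaire labels for the same developers.
from sklearn.metrics import accuracy_score, cohen_kappa_score

gold_labels  = ["high", "medium", "high", "low", "medium"]    # self-assessment (illustrative)
tool_a_preds = ["high", "low",    "high", "low", "low"]       # hypothetical tool A
tool_b_preds = ["medium", "medium", "low", "low", "medium"]   # hypothetical tool B

# Accuracy of each tool against the self-assessed "ground truth".
print("Tool A accuracy:", accuracy_score(gold_labels, tool_a_preds))
print("Tool B accuracy:", accuracy_score(gold_labels, tool_b_preds))

# Inter-tool agreement: Cohen's kappa near 0 reflects the kind of
# disagreement among tools that the abstract reports.
print("Tool A vs. Tool B kappa:", cohen_kappa_score(tool_a_preds, tool_b_preds))
```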
Related papers
- Which Combination of Test Metrics Can Predict Success of a Software Project? A Case Study in a Year-Long Project Course [1.553083901660282]
Testing plays an important role in ensuring the success of a software development project.
We investigate whether we can quantify the effects various types of testing have on functional suitability.
arXiv Detail & Related papers (2024-08-22T04:23:51Z) - Leveraging Large Language Models for Efficient Failure Analysis in Game Development [47.618236610219554]
This paper proposes a new approach to automatically identify which change in the code caused a test to fail.
The method leverages Large Language Models (LLMs) to associate error messages with the corresponding code changes causing the failure.
Our approach reaches an accuracy of 71% in our newly created dataset, which comprises issues reported by developers at EA over a period of one year.
arXiv Detail & Related papers (2024-06-11T09:21:50Z) - Efficacy of static analysis tools for software defect detection on open-source projects [0.0]
The study used popular analysis tools such as SonarQube, PMD, Checkstyle, and FindBugs to perform the comparison.
The study results show that SonarQube performs considerably better than the other tools in terms of defect detection.
arXiv Detail & Related papers (2024-05-20T19:05:32Z) - Automated User Story Generation with Test Case Specification Using Large Language Model [0.0]
We developed a tool "GeneUS" to automatically create user stories from requirements documents.
The output is provided in a format that leaves open possibilities for downstream integration with popular project management tools.
arXiv Detail & Related papers (2024-04-02T01:45:57Z) - What Are Tools Anyway? A Survey from the Language Model Perspective [67.18843218893416]
Language models (LMs) are powerful, yet mostly limited to text generation tasks.
We provide a unified definition of tools as external programs used by LMs.
We empirically study the efficiency of various tooling methods.
arXiv Detail & Related papers (2024-03-18T17:20:07Z) - TOOLVERIFIER: Generalization to New Tools via Self-Verification [69.85190990517184]
We introduce a self-verification method which distinguishes between close candidates by self-asking contrastive questions during tool selection.
Experiments on 4 tasks from the ToolBench benchmark, consisting of 17 unseen tools, demonstrate an average improvement of 22% over few-shot baselines.
arXiv Detail & Related papers (2024-02-21T22:41:38Z) - Automated Grading and Feedback Tools for Programming Education: A
Systematic Review [7.776434991976473]
Most papers assess the correctness of assignments in object-oriented languages.
Few tools assess the maintainability, readability or documentation of the source code.
Most tools offered fully automated assessment to allow for near-instantaneous feedback.
arXiv Detail & Related papers (2023-06-20T17:54:50Z) - LLM-based Interaction for Content Generation: A Case Study on the
Perception of Employees in an IT department [85.1523466539595]
This paper presents a questionnaire survey to identify the intention to use generative tools by employees of an IT company.
Our results indicate a rather average acceptability of generative tools, although the more useful a tool is perceived to be, the higher the intention to use it appears to be.
Our analyses suggest that the frequency of use of generative tools is likely to be a key factor in understanding how employees perceive these tools in the context of their work.
arXiv Detail & Related papers (2023-04-18T15:35:43Z) - Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or
Something Else? [93.91375268580806]
Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms.
Existing literature has focused primarily on the fully-automated case, but the resulting techniques cannot reliably detect disinformation on the varied topics, sources, and time scales required for military applications.
By leveraging an already-available analyst as a human-in-the-loop, canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods to use for a partially-automated disinformation detection system.
arXiv Detail & Related papers (2021-11-09T13:30:34Z) - AI Explainability 360: Impact and Design [120.95633114160688]
In 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods.
This paper examines the impact of the toolkit with several case studies, statistics, and community feedback.
The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.
arXiv Detail & Related papers (2021-09-24T19:17:09Z) - Designing Tools for Semi-Automated Detection of Machine Learning Biases:
An Interview Study [18.05880738470364]
We report on an interview study with 11 machine learning practitioners for investigating the needs surrounding semi-automated bias detection tools.
Based on the findings, we highlight four design considerations to guide system designers who aim to create future tools for bias detection.
arXiv Detail & Related papers (2020-03-13T00:18:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.