Exploring Moral Principles Exhibited in OSS: A Case Study on GitHub Heated Issues
- URL: http://arxiv.org/abs/2307.15631v1
- Date: Fri, 28 Jul 2023 15:42:10 GMT
- Title: Exploring Moral Principles Exhibited in OSS: A Case Study on GitHub Heated Issues
- Authors: Ramtin Ehsani, Rezvaneh Rezapour, Preetha Chatterjee
- Abstract summary: We analyze toxic communications in GitHub issue threads to identify and understand five types of moral principles exhibited in text.
Preliminary findings suggest a possible link between moral principles and toxic comments in OSS communications.
- Score: 5.659436621527968
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To foster collaboration and inclusivity in Open Source Software (OSS)
projects, it is crucial to understand and detect patterns of toxic language
that may drive contributors away, especially those from underrepresented
communities. Although machine learning-based toxicity detection tools trained
on domain-specific data have shown promise, their design lacks an understanding
of the unique nature and triggers of toxicity in OSS discussions, highlighting
the need for further investigation. In this study, we employ Moral Foundations
Theory (MFT) to examine the relationship between moral principles and toxicity in
OSS. Specifically, we analyze toxic communications in GitHub issue threads to
identify and understand five types of moral principles exhibited in text, and
explore their potential association with toxic behavior. Our preliminary
findings suggest a possible link between moral principles and toxic comments in
OSS communications, with each moral principle associated with at least one type
of toxicity. The potential of MFT in toxicity detection warrants further
investigation.
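The exploratory analysis described in the abstract, tagging issue comments with the moral foundations they exhibit and checking how those foundations co-occur with toxicity labels, can be illustrated with a minimal sketch. The lexicon keywords, the foundations_in and cooccurrence helpers, and the sample comments below are illustrative assumptions rather than the authors' data or code; published MFT studies rely on validated dictionaries and human-labeled toxicity.

```python
# Minimal sketch (not the authors' pipeline): score GitHub issue comments
# against a tiny, illustrative Moral Foundations lexicon and tabulate how
# often each foundation co-occurs with comments already labeled as toxic.
from collections import Counter, defaultdict
import re

# Illustrative keywords only; real studies use validated MFT dictionaries.
MFT_LEXICON = {
    "care/harm": {"hurt", "harm", "abuse", "protect", "suffer"},
    "fairness/cheating": {"unfair", "cheat", "fraud", "justice", "equal"},
    "loyalty/betrayal": {"betray", "loyal", "community", "traitor"},
    "authority/subversion": {"obey", "disrespect", "rules", "authority"},
    "sanctity/degradation": {"disgusting", "filthy", "gross", "pure"},
}

def foundations_in(text: str) -> set:
    """Return the moral foundations whose keywords appear in the comment."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return {f for f, words in MFT_LEXICON.items() if tokens & words}

def cooccurrence(comments):
    """Count toxic vs. non-toxic comments for each exhibited foundation."""
    table = defaultdict(Counter)
    for text, is_toxic in comments:
        for foundation in foundations_in(text):
            table[foundation]["toxic" if is_toxic else "non_toxic"] += 1
    return table

if __name__ == "__main__":
    # Hypothetical (comment, toxicity-label) pairs standing in for mined issue threads.
    sample = [
        ("this patch will hurt every downstream user, disgusting work", True),
        ("thanks, the fix looks fair and follows the contribution rules", False),
    ]
    for foundation, counts in cooccurrence(sample).items():
        print(foundation, dict(counts))
```

On real mined threads, the toy labels would be replaced by the output of a toxicity detector or manual annotation, and the association would be tested statistically; the sketch only shows the tagging and tabulation step.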
Related papers
- The Landscape of Toxicity: An Empirical Investigation of Toxicity on GitHub [3.0586855806896054]
Profanity is the most frequent form of toxicity on GitHub, followed by trolling and insults.
Corporate-sponsored projects are less toxic, but gaming projects are seven times more toxic than non-gaming ones.
OSS contributors who have authored toxic comments in the past are significantly more likely to repeat such behavior.
arXiv Detail & Related papers (2025-02-12T09:24:59Z)
- CogMorph: Cognitive Morphing Attacks for Text-to-Image Models [65.38747950692752]
This paper reveals a significant and previously unrecognized ethical risk inherent in text-to-image (T2I) generative models.
We introduce a novel method, termed the Cognitive Morphing Attack (CogMorph), which manipulates T2I models to generate images that retain the original core subjects but embed toxic or harmful contextual elements.
arXiv Detail & Related papers (2025-01-21T01:45:56Z)
- Analyzing Toxicity in Open Source Software Communications Using Psycholinguistics and Moral Foundations Theory [5.03553492616371]
This paper investigates a machine learning-based approach for the automatic detection of toxic communications in Open Source Software (OSS).
We leverage psycholinguistic lexicons and Moral Foundations Theory to analyze toxicity in two types of OSS communication channels: issue comments and code reviews.
Using moral values as features is more effective than using linguistic cues, resulting in a 67.50% F1-measure in identifying toxic instances in code review data and 64.83% in issue comments (see the feature-based sketch after this list).
arXiv Detail & Related papers (2024-12-17T17:52:00Z)
- Enhancing LLM-based Hatred and Toxicity Detection with Meta-Toxic Knowledge Graph [36.07351851458233]
The absence of domain-specific toxic knowledge leads to false negatives.
The excessive sensitivity of Large Language Models to toxic speech results in false positives.
We propose a novel method called MetaTox, leveraging graph search on a meta-toxic knowledge graph to enhance hatred and toxicity detection.
arXiv Detail & Related papers (2024-12-17T06:28:28Z)
- Exploring ChatGPT for Toxicity Detection in GitHub [5.003898791753481]
The prevalence of negative discourse, often manifested as toxic comments, poses significant challenges to developer well-being and productivity.
To identify such negativity in project communications, automated toxicity detection models are necessary.
To train these models effectively, we need large software engineering-specific toxicity datasets.
arXiv Detail & Related papers (2023-12-20T15:23:00Z)
- Unveiling the Implicit Toxicity in Large Language Models [77.90933074675543]
The open-endedness of large language models (LLMs), combined with their impressive capabilities, may lead to new safety issues when they are exploited for malicious use.
We show that LLMs can generate diverse implicit toxic outputs that are exceptionally difficult to detect with zero-shot prompting alone.
We propose a reinforcement learning (RL) based attacking method to further induce the implicit toxicity in LLMs.
arXiv Detail & Related papers (2023-11-29T06:42:36Z)
- Comprehensive Assessment of Toxicity in ChatGPT [49.71090497696024]
We evaluate toxicity in ChatGPT using instruction-tuning datasets.
Prompts in creative writing tasks can be 2x more likely to elicit toxic responses.
Certain deliberately toxic prompts, designed in earlier studies, no longer yield harmful responses.
arXiv Detail & Related papers (2023-11-03T14:37:53Z)
- ToxiSpanSE: An Explainable Toxicity Detection in Code Review Comments [4.949881799107062]
ToxiSpanSE is the first tool to detect toxic spans in the Software Engineering (SE) domain.
Our model achieved the best score with 0.88 F1, 0.87 precision, and 0.93 recall for toxic class tokens.
arXiv Detail & Related papers (2023-07-07T04:55:11Z)
- Toxicity Detection can be Sensitive to the Conversational Context [64.28043776806213]
We construct and publicly release a dataset of 10,000 posts with two kinds of toxicity labels.
We introduce a new task, context sensitivity estimation, which aims to identify posts whose perceived toxicity changes if the context is also considered.
arXiv Detail & Related papers (2021-11-19T13:57:26Z)
- Mitigating Biases in Toxic Language Detection through Invariant Rationalization [70.36701068616367]
Biases toward some attributes, including gender, race, and dialect, exist in most training datasets for toxicity detection.
We propose to use invariant rationalization (InvRat), a game-theoretic framework consisting of a rationale generator and a predictor, to rule out the spurious correlation of certain syntactic patterns.
Our method yields a lower false positive rate on both lexical and dialectal attributes than previous debiasing methods.
arXiv Detail & Related papers (2021-06-14T08:49:52Z)
- RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models [93.151822563361]
Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language which hinders their safe deployment.
We investigate the extent to which pretrained LMs can be prompted to generate toxic language, and the effectiveness of controllable text generation algorithms at preventing such toxic degeneration.
arXiv Detail & Related papers (2020-09-24T03:17:19Z)
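Related to the "Analyzing Toxicity in Open Source Software Communications Using Psycholinguistics and Moral Foundations Theory" entry above, the following is a minimal sketch of the moral-values-as-features idea: per-foundation keyword counts feed a standard classifier. It assumes scikit-learn is available; the lexicon, the moral_features helper, and the labeled comments are toy stand-ins, not that paper's features, data, or reported setup.

```python
# Minimal sketch (assumptions: scikit-learn installed; toy lexicon and labels)
# of using per-foundation keyword counts as features for a toxicity classifier,
# in the spirit of the moral-values-as-features approach summarized above.
import re
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity"]
# Tiny illustrative lexicon; real work would use validated MFT dictionaries.
LEXICON = {
    "care": {"hurt", "harm", "abuse", "protect"},
    "fairness": {"unfair", "cheat", "justice", "equal"},
    "loyalty": {"betray", "loyal", "traitor", "community"},
    "authority": {"obey", "disrespect", "rules", "authority"},
    "sanctity": {"disgusting", "filthy", "gross", "pure"},
}

def moral_features(text: str):
    """Count lexicon hits per foundation to form a 5-dimensional feature vector."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return [sum(t in LEXICON[f] for t in tokens) for f in FOUNDATIONS]

# Hypothetical labeled comments standing in for code-review / issue data.
comments = [
    ("this change will harm users, disgusting and unfair", 1),
    ("you always disrespect the project rules, traitor", 1),
    ("nice work, the tests protect against regressions", 0),
    ("looks fair to me, thanks for following the community guidelines", 0),
] * 10  # repeat so the toy split has enough samples

X = np.array([moral_features(text) for text, _ in comments])
y = np.array([label for _, label in comments])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
print("F1 on toy data:", round(f1_score(y_te, clf.predict(X_te)), 2))
```

The 67.50% and 64.83% F1 figures reported above come from that paper's own lexicons and datasets; this toy example only shows the plumbing of turning foundation scores into a feature vector for a classifier.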
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.