Determination of toxic comments and unintended model bias minimization
using Deep learning approach
- URL: http://arxiv.org/abs/2311.04789v1
- Date: Wed, 8 Nov 2023 16:10:28 GMT
- Title: Determination of toxic comments and unintended model bias minimization
using Deep learning approach
- Authors: Md Azim Khan
- Abstract summary: In this research, our aim is to detect toxic comments and reduce the unintended bias concerning identity features such as race, gender, sex, and religion by fine-tuning an attention-based model called BERT (Bidirectional Encoder Representations from Transformers).
We apply weighted loss to address the issue of unbalanced data and compare the performance of a fine-tuned BERT model with a traditional Logistic Regression model in terms of classification and bias minimization.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online conversations can be toxic and subject to threats, abuse, or
harassment. To identify toxic text comments, several deep learning and machine
learning models have been proposed over the years. However, recent studies
demonstrate that, because of imbalances in the training data, some models are
more likely to exhibit unintended biases, including gender bias and identity
bias. In this research, our aim is to detect toxic comments and reduce the
unintended bias concerning identity features such as race, gender, sex, and
religion by fine-tuning an attention-based model called BERT (Bidirectional
Encoder Representations from Transformers). We apply a weighted loss to address
the issue of unbalanced data and compare the performance of a fine-tuned BERT
model with a traditional Logistic Regression model in terms of classification
and bias minimization. The Logistic Regression model with the TF-IDF vectorizer
achieves 57.1% accuracy, while the fine-tuned BERT model achieves 89%. Code is
available at
https://github.com/zim10/Determine_Toxic_comment_and_identity_bias.git
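For readers who want a concrete picture of the two pipelines the abstract compares, the sketch below pairs a class-weighted cross-entropy loss (one common way to implement a weighted loss for unbalanced data) during BERT fine-tuning with a TF-IDF plus Logistic Regression baseline. The checkpoint name, hyperparameters, and helper functions are illustrative assumptions, not the configuration from the linked repository.

```python
# Minimal sketch of the two approaches compared in the abstract.
# Hyperparameters, checkpoint, and function names are assumptions, not the paper's setup.
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight
from transformers import BertTokenizerFast, BertForSequenceClassification


def weighted_loss_fn(labels: np.ndarray) -> torch.nn.CrossEntropyLoss:
    """Cross-entropy whose per-class weights offset the toxic/non-toxic imbalance."""
    weights = compute_class_weight("balanced", classes=np.unique(labels), y=labels)
    return torch.nn.CrossEntropyLoss(weight=torch.tensor(weights, dtype=torch.float))


def fine_tune_bert(texts, labels, epochs=1, lr=2e-5, max_len=128):
    """Fine-tune BERT for binary toxicity classification with a weighted loss."""
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
    enc = tokenizer(texts, truncation=True, padding=True, max_length=max_len, return_tensors="pt")
    ds = TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor(labels))
    loader = DataLoader(ds, batch_size=16, shuffle=True)
    loss_fn = weighted_loss_fn(np.asarray(labels))
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, y in loader:
            optim.zero_grad()
            logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
            loss = loss_fn(logits, y)  # weighted loss replaces the default unweighted one
            loss.backward()
            optim.step()
    return tokenizer, model


def tfidf_logreg_baseline(texts, labels):
    """Traditional baseline: TF-IDF features + class-weighted Logistic Regression."""
    vec = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
    X = vec.fit_transform(texts)
    clf = LogisticRegression(max_iter=1000, class_weight="balanced")
    clf.fit(X, labels)
    return vec, clf
```

The class weighting shown here is only one way to realize a weighted loss; the paper's repository may compute weights differently.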
Related papers
- Crowdsourcing with Difficulty: A Bayesian Rating Model for Heterogeneous Items [0.716879432974126]
In applied statistics and machine learning, the "gold standards" used for training are often biased and almost always noisy.
Dawid and Skene's justifiably popular crowdsourcing model adjusts for rater (coder, annotator) sensitivity and specificity, but fails to capture distributional properties of rating data gathered for training.
We introduce a general purpose measurement-error model with which we can infer consensus categories by adding item-level effects for difficulty, discriminativeness, and guessability.
arXiv Detail & Related papers (2024-05-29T20:59:28Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Detecting and Mitigating Algorithmic Bias in Binary Classification using Causal Modeling [0.0]
We show that gender bias in the prediction model is statistically significant at the 0.05 level.
We demonstrate the effectiveness of the causal model in mitigating gender bias by cross-validation.
Our novel approach is intuitive, easy-to-use, and can be implemented using existing statistical software tools such as "lavaan" in R.
arXiv Detail & Related papers (2023-10-19T02:21:04Z)
- Stubborn Lexical Bias in Data and Models [50.79738900885665]
We use a new statistical method to examine whether spurious patterns in data appear in models trained on the data.
We apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations.
Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models.
arXiv Detail & Related papers (2023-06-03T20:12:27Z)
- Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models.
arXiv Detail & Related papers (2023-01-31T20:09:33Z)
- Simultaneous Improvement of ML Model Fairness and Performance by Identifying Bias in Data [1.76179873429447]
We propose a data preprocessing technique that can detect instances ascribing a specific kind of bias that should be removed from the dataset before training.
In particular, we claim that in problem settings where instances exist with similar features but different labels caused by variation in protected attributes, an inherent bias gets induced in the dataset.
arXiv Detail & Related papers (2022-10-24T13:04:07Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Investigating Bias In Automatic Toxic Comment Detection: An Empirical Study [1.5609988622100528]
With the surge in online platforms, there has been an upsurge in user engagement on these platforms via comments and reactions.
A large portion of such textual comments are abusive, rude, and offensive to the audience.
With machine learning systems in place to check such comments coming onto the platform, biases present in the training data get passed on to the classifier, leading to discrimination against certain classes such as religion and gender.
arXiv Detail & Related papers (2021-08-14T08:24:13Z)
- Learning from others' mistakes: Avoiding dataset biases without modeling them [111.17078939377313]
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended task.
Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available.
We show a method for training models that learn to ignore these problematic correlations.
arXiv Detail & Related papers (2020-12-02T16:10:54Z)
- Probing Model Signal-Awareness via Prediction-Preserving Input Minimization [67.62847721118142]
We evaluate models' ability to capture the correct vulnerability signals to produce their predictions.
We measure the signal awareness of models using a new metric we propose, Signal-aware Recall (SAR).
The results show a sharp drop in the model's Recall from the high 90s to sub-60s with the new metric.
arXiv Detail & Related papers (2020-11-25T20:05:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.