PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction
- URL: http://arxiv.org/abs/2010.13816v1
- Date: Mon, 26 Oct 2020 18:05:48 GMT
- Title: PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction
- Authors: Xinyao Ma, Maarten Sap, Hannah Rashkin, Yejin Choi
- Abstract summary: We formulate a new revision task that aims to rewrite a given text to correct the implicit and potentially undesirable bias in character portrayals.
We introduce PowerTransformer as an approach that debiases text through the lens of connotation frames.
We demonstrate that our approach outperforms ablations and existing methods from related tasks.
- Score: 62.46299488465803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unconscious biases continue to be prevalent in modern text and media, calling
for algorithms that can assist writers with bias correction. For example, a
female character in a story is often portrayed as passive and powerless ("She
daydreams about being a doctor") while a man is portrayed as more proactive and
powerful ("He pursues his dream of being a doctor").
We formulate *Controllable Debiasing*, a new revision task that aims to
rewrite a given text to correct the implicit and potentially undesirable bias
in character portrayals. We then introduce PowerTransformer as an approach that
debiases text through the lens of connotation frames (Sap et al., 2017), which
encode pragmatic knowledge of implied power dynamics with respect to verb
predicates. One key challenge of our task is the lack of parallel corpora. To
address this challenge, we adopt an unsupervised approach using auxiliary
supervision with related tasks such as paraphrasing and self-supervision based
on a reconstruction loss, building on pretrained language models.
Through comprehensive experiments based on automatic and human evaluations,
we demonstrate that our approach outperforms ablations and existing methods
from related tasks. Furthermore, we demonstrate the use of PowerTransformer as
a step toward mitigating the well-documented gender bias in character portrayal
in movie scripts.
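Since the abstract only names the ingredients, here is a toy Python sketch of the connotation-frame idea. The lexicon entries, the rewrite table, and the function names below are all invented for illustration; the actual PowerTransformer masks agency-revealing verbs and regenerates them with a pretrained language model fine-tuned on joint reconstruction and paraphrase objectives, not a lookup table.

```python
# Toy sketch of revision through connotation frames. Everything here is
# illustrative: PowerTransformer masks agency-revealing verbs and infills
# them with a pretrained LM; the rewrite table below stands in for that step.

# Hypothetical fragment of a connotation-frame lexicon (Sap et al., 2017):
# verb -> agency connotation implied for the agent.
AGENCY_LEXICON = {
    "daydreams": "negative",
    "waits": "negative",
    "pursues": "positive",
    "demands": "positive",
}

# Stand-in for the LM's controlled infilling step (purely illustrative).
HIGH_AGENCY_REWRITES = {
    "daydreams about": "pursues",
    "waits for": "seeks out",
}

def mask_low_agency_verbs(sentence: str) -> str:
    """Step 1: mask verbs whose connotation frame implies low agency."""
    return " ".join(
        "[MASK]" if AGENCY_LEXICON.get(tok.lower()) == "negative" else tok
        for tok in sentence.split()
    )

def revise(sentence: str) -> str:
    """Step 2: 'infill' the low-agency spans with higher-agency phrasing."""
    for weak, strong in HIGH_AGENCY_REWRITES.items():
        sentence = sentence.replace(weak, strong)
    return sentence

print(mask_low_agency_verbs("She daydreams about being a doctor"))
# -> She [MASK] about being a doctor
print(revise("She daydreams about being a doctor"))
# -> She pursues being a doctor
```

The two-step shape (identify power-revealing verbs via the lexicon, then rewrite them under a control signal) is the part that carries over to the real model.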
Related papers
- Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts [87.62403265382734]
Recent studies show that traditional fairytales are rife with harmful gender biases.
This work aims to assess learned biases of language models by evaluating their robustness against gender perturbations.
arXiv Detail & Related papers (2023-10-16T22:25:09Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- TVTSv2: Learning Out-of-the-box Spatiotemporal Visual Representations at Scale [59.01246141215051]
We analyze the factor that leads to degradation from the perspective of language supervision.
We propose a tunable-free pre-training strategy to retain the generalization ability of the text encoder.
We produce a series of models, dubbed TVTSv2, with up to one billion parameters.
arXiv Detail & Related papers (2023-05-23T15:44:56Z)
- Think Before You Act: Unified Policy for Interleaving Language Reasoning with Actions [21.72567982148215]
We show how to train transformers with a similar next-step prediction objective on offline data.
We propose a novel method for unifying language reasoning with actions in a single policy.
Specifically, we augment a transformer policy with word outputs, so it can generate textual captions interleaved with actions.
arXiv Detail & Related papers (2023-04-18T16:12:38Z)
- Future Sight: Dynamic Story Generation with Large Pretrained Language Models [11.23192733149335]
Transformer decoders can only generate new text conditioned on previously generated text.
Future Sight enables a decoder to attend to an encoded future plot event.
During inference, the future plot event can be written by a human author to steer the narrative being generated in a certain direction.
arXiv Detail & Related papers (2022-12-20T01:53:26Z)
- Hollywood Identity Bias Dataset: A Context Oriented Bias Analysis of Movie Dialogues [20.222820874864748]
Social biases and stereotypes present in movies can cause extensive damage due to their reach.
We introduce a new dataset of movie scripts that are annotated for identity bias.
The dataset contains dialogue turns annotated with bias labels for seven categories: gender, race/ethnicity, religion, age, occupation, LGBTQ, and other.
arXiv Detail & Related papers (2022-05-31T16:49:51Z)
- TVShowGuess: Character Comprehension in Stories as Speaker Guessing [23.21452223968301]
We propose a new task for assessing machines' skills of understanding fictional characters in narrative stories.
The task, TVShowGuess, builds on the scripts of TV series and takes the form of guessing the anonymous main characters based on the backgrounds of the scenes and the dialogues.
Our human study confirms that this form of task covers the comprehension of multiple types of character persona, including characters' personalities, facts, and memories of personal experience.
arXiv Detail & Related papers (2022-04-16T05:15:04Z)
- Probing as Quantifying the Inductive Bias of Pre-trained Representations [99.93552997506438]
We present a novel framework for probing where the goal is to evaluate the inductive bias of representations for a particular task.
We apply our framework to a series of token-, arc-, and sentence-level tasks.
arXiv Detail & Related papers (2021-10-15T22:01:16Z)
- Uncovering Implicit Gender Bias in Narratives through Commonsense Inference [21.18458377708873]
We study gender biases associated with the protagonist in model-generated stories.
We focus on implicit biases, and use a commonsense reasoning engine to uncover them.
arXiv Detail & Related papers (2021-09-14T04:57:45Z)
- Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning [79.48769764508006]
Generative language models (LMs) are typically trained to condition only on past context or to perform narrowly scoped text infilling.
We propose DeLorean, a new unsupervised decoding algorithm that can flexibly incorporate both past and future contexts (a toy sketch follows this list).
We demonstrate that our approach is general and applicable to two nonmonotonic reasoning tasks: abductive text generation and counterfactual story revision.
arXiv Detail & Related papers (2020-10-12T17:58:43Z)
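As promised above, a toy numpy sketch of the backprop-based decoding idea behind DeLorean. The vocabulary, the random "LM" logits, the one-hot future target, and the mixing weights are all invented here; the real algorithm alternates a forward pass of a pretrained LM over the past context with a backward pass that nudges the LM's soft logits toward compatibility with the future context.

```python
import numpy as np

# Toy forward/backward decoding: a fixed random logit vector stands in for
# the LM's next-token scores, and a one-hot target stands in for the loss
# computed against the future context.
VOCAB = ["she", "studied", "hard", "became", "a", "doctor"]
rng = np.random.default_rng(0)
lm_logits = rng.normal(size=len(VOCAB))   # forward pass: "LM" next-token scores

# Suppose the future context is compatible with the token "became".
future_target = np.zeros(len(VOCAB))
future_target[VOCAB.index("became")] = 1.0

logits = lm_logits.copy()
for _ in range(50):
    probs = np.exp(logits) / np.exp(logits).sum()
    grad = probs - future_target              # backward: d(cross-entropy)/d(logits)
    updated = logits - grad                   # nudge logits toward the future context
    logits = 0.9 * updated + 0.1 * lm_logits  # mix back toward LM fluency

print(VOCAB[int(np.argmax(logits))])          # the future-compatible token wins
```

The alternation is the point of the illustration: the backward step pulls the soft output toward the future constraint, while the mixing step keeps it anchored to what the LM itself would say.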
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.