Assessing Multilingual Fairness in Pre-trained Multimodal
Representations
- URL: http://arxiv.org/abs/2106.06683v1
- Date: Sat, 12 Jun 2021 03:57:05 GMT
- Title: Assessing Multilingual Fairness in Pre-trained Multimodal
Representations
- Authors: Jialu Wang, Yang Liu, Xin Eric Wang
- Abstract summary: We argue that pre-trained vision-and-language representations are individually fair across languages, but group fairness is not guaranteed.
We conduct experiments to explore the prevalent group disparity across languages and protected groups, including race, gender, and age.
- Score: 8.730027941735804
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently pre-trained multimodal models, such as CLIP, have received a surge
of attention for their exceptional capabilities towards connecting images and
natural language. The textual representations learned in English can be
desirably transferred to other languages and support promising downstream
multimodal tasks for different languages. Nevertheless, previous fairness
discourse in
vision-and-language learning mainly focuses on monolingual representational
biases, and rarely scrutinizes the principles of multilingual fairness in this
multimodal setting, where one language is equated to a group of individuals and
images provide the universal grounding for bridging different languages.
In this paper, we provide a nuanced understanding of individual fairness and
group fairness by viewing language as the recipient of fairness notions. We
define new fairness notions within multilingual context and analytically
articulate that pre-trained vision-and-language representations are
individually fair across languages, but group fairness is not guaranteed.
Furthermore, we conduct extensive experiments to explore the prevalent group
disparity across languages and protected groups, including race, gender, and age.
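The kind of group disparity probed in these experiments can be illustrated with a short sketch (not the authors' released code), assuming the sentence-transformers multilingual CLIP checkpoints 'clip-ViT-B-32' (image encoder) and 'clip-ViT-B-32-multilingual-v1' (text encoder); the caption translations and image paths below are hypothetical placeholders. The same caption is encoded in several languages and compared against images grouped by a protected attribute; per-language gaps in average similarity would indicate group disparity.

    # Minimal sketch of measuring cross-lingual group disparity with a
    # multilingual CLIP model (assumed checkpoints; placeholder data).
    from PIL import Image
    from sentence_transformers import SentenceTransformer, util

    img_model = SentenceTransformer("clip-ViT-B-32")                  # image encoder
    txt_model = SentenceTransformer("clip-ViT-B-32-multilingual-v1")  # aligned multilingual text encoder

    # The same caption rendered in several languages (hypothetical prompts).
    captions = {
        "en": "a photo of a doctor",
        "de": "ein Foto von einem Arzt",
        "zh": "一张医生的照片",
    }

    # Hypothetical image paths grouped by a protected attribute (e.g. gender).
    groups = {
        "group_a": ["images/a_0.jpg", "images/a_1.jpg"],
        "group_b": ["images/b_0.jpg", "images/b_1.jpg"],
    }

    # Encode all caption translations once.
    txt_emb = txt_model.encode(
        list(captions.values()), convert_to_tensor=True, normalize_embeddings=True
    )

    for group, paths in groups.items():
        imgs = [Image.open(p) for p in paths]
        img_emb = img_model.encode(imgs, convert_to_tensor=True, normalize_embeddings=True)
        # Cosine similarity between each language's caption and each image,
        # averaged over the group; per-language gaps across groups signal disparity.
        sims = util.cos_sim(txt_emb, img_emb).mean(dim=1)
        for lang, score in zip(captions, sims):
            print(f"{group:8s} {lang}: {score.item():.3f}")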
Related papers
- Exploring Multilingual Concepts of Human Value in Large Language Models: Is Value Alignment Consistent, Transferable and Controllable across Languages? [34.38469832305664]
This paper focuses on human values-related concepts (i.e., value concepts) due to their significance for AI safety.
We first empirically confirm the presence of value concepts within LLMs in a multilingual format.
Further analysis on the cross-lingual characteristics of these concepts reveals 3 traits arising from language resource disparities.
arXiv Detail & Related papers (2024-02-28T07:18:39Z) - RC3: Regularized Contrastive Cross-lingual Cross-modal Pre-training [84.23022072347821]
We propose a regularized cross-lingual visio-textual contrastive learning objective that constrains the representation proximity of weakly-aligned visio-textual inputs.
Experiments on 5 downstream multi-modal tasks across 6 languages demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2023-05-13T14:41:05Z) - Discovering Representation Sprachbund For Multilingual Pre-Training [139.05668687865688]
We generate language representation from multilingual pre-trained models and conduct linguistic analysis.
We cluster all the target languages into multiple groups and name each group as a representation sprachbund.
Experiments are conducted on cross-lingual benchmarks and significant improvements are achieved compared to strong baselines.
arXiv Detail & Related papers (2021-09-01T09:32:06Z) - UC2: Universal Cross-lingual Cross-modal Vision-and-Language
Pre-training [52.852163987208826]
UC2 is the first machine translation-augmented framework for cross-lingual cross-modal representation learning.
We propose two novel pre-training tasks, namely Masked Region-to-Token Modeling (MRTM) and Visual Translation Language Modeling (VTLM)
Our proposed framework achieves new state-of-the-art on diverse non-English benchmarks while maintaining comparable performance to monolingual pre-trained models on English tasks.
arXiv Detail & Related papers (2021-04-01T08:30:53Z) - M3P: Learning Universal Representations via Multitask Multilingual
Multimodal Pre-training [119.16007395162431]
M3P is a Multilingual Multimodal Pre-trained model that combines multilingual pre-training and multimodal pre-training.
We show that M3P can achieve comparable results for English and new state-of-the-art results for non-English languages.
arXiv Detail & Related papers (2020-06-04T03:54:29Z) - Gender Bias in Multilingual Embeddings and Cross-Lingual Transfer [101.58431011820755]
We study gender bias in multilingual embeddings and how it affects transfer learning for NLP applications.
We create a multilingual dataset for bias analysis and propose several ways for quantifying bias in multilingual representations.
arXiv Detail & Related papers (2020-05-02T04:34:37Z) - Identifying Distributional Perspective Differences from Colingual Groups [41.58939666949895]
A lack of mutual understanding among different groups about their perspectives on specific values or events may lead to uninformed decisions or biased opinions.
We study colingual groups and use language corpora as a proxy to identify their distributional perspectives.
We present a novel computational approach to learn shared understandings, and benchmark our method by building culturally-aware models for the English, Chinese, and Japanese languages.
arXiv Detail & Related papers (2020-04-10T08:13:07Z) - On the Language Neutrality of Pre-trained Multilingual Representations [70.93503607755055]
We investigate the language-neutrality of multilingual contextual embeddings directly and with respect to lexical semantics.
Our results show that contextual embeddings are more language-neutral and, in general, more informative than aligned static word-type embeddings.
We show how to reach state-of-the-art accuracy on language identification and match the performance of statistical methods for word alignment of parallel sentences.
arXiv Detail & Related papers (2020-04-09T19:50:32Z)