Visually Grounded Reasoning across Languages and Cultures
- URL: http://arxiv.org/abs/2109.13238v1
- Date: Tue, 28 Sep 2021 16:51:38 GMT
- Title: Visually Grounded Reasoning across Languages and Cultures
- Authors: Fangyu Liu, Emanuele Bugliarello, Edoardo Maria Ponti, Siva Reddy,
Nigel Collier, Desmond Elliott
- Abstract summary: We develop a new protocol to construct an ImageNet-style hierarchy representative of more languages and cultures.
We focus on a typologically diverse set of languages, namely, Indonesian, Mandarin Chinese, Swahili, Tamil, and Turkish.
We create a multilingual dataset for Multicultural Reasoning over Vision and Language (MaRVL) by eliciting statements from native speaker annotators about pairs of images.
- Score: 27.31020761908739
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The design of widespread vision-and-language datasets and pre-trained
encoders directly adopts, or draws inspiration from, the concepts and images of
ImageNet. While one can hardly overestimate how much this benchmark contributed
to progress in computer vision, it is mostly derived from lexical databases and
image queries in English, resulting in source material with a North American or
Western European bias. Therefore, we devise a new protocol to construct an
ImageNet-style hierarchy representative of more languages and cultures. In
particular, we let the selection of both concepts and images be entirely driven
by native speakers, rather than scraping them automatically. Specifically, we
focus on a typologically diverse set of languages, namely, Indonesian, Mandarin
Chinese, Swahili, Tamil, and Turkish. On top of the concepts and images
obtained through this new protocol, we create a multilingual dataset for
Multicultural Reasoning over Vision and Language (MaRVL) by eliciting
statements from native speaker annotators about pairs of images. The task
consists of discriminating whether each grounded statement is true or false. We
establish a series of baselines using state-of-the-art models and find that
their cross-lingual transfer performance lags dramatically behind supervised
performance in English. These results not only invite us to reassess the
robustness and accuracy of current state-of-the-art models beyond a narrow
domain, but also open up exciting new challenges for the development of truly
multilingual and multicultural systems.
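The MaRVL task described in the abstract reduces to a binary judgment over a (statement, image pair) triple, scored by accuracy. A minimal Python sketch of that interface follows; the field names and the predict() callable are illustrative placeholders, not the official data schema.

```python
# Minimal sketch of the MaRVL example format and its accuracy metric.
# Field names and the predict() callable are illustrative, not the official schema.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class MarvlExample:
    statement: str    # native-speaker statement, e.g. in Swahili or Tamil
    left_image: str   # path to the first image of the pair
    right_image: str  # path to the second image of the pair
    label: bool       # True if the statement holds for the image pair

def accuracy(examples: Iterable[MarvlExample],
             predict: Callable[[str, str, str], bool]) -> float:
    """Score a model that maps (statement, left_image, right_image) to True/False."""
    examples = list(examples)
    correct = sum(predict(ex.statement, ex.left_image, ex.right_image) == ex.label
                  for ex in examples)
    return correct / len(examples)
```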
Related papers
- Multilingual Diversity Improves Vision-Language Representations [66.41030381363244] (2024-05-27)
Pre-training on their multilingually diverse dataset outperforms pre-training on English-only or English-dominated datasets on ImageNet.
On a geographically diverse task like GeoDE, we also observe improvements across all regions, with the biggest gain coming from Africa.
arXiv Detail & Related papers (2024-05-27T08:08:51Z) - An image speaks a thousand words, but can everyone listen? On image transcreation for cultural relevance [53.974497865647336]
We take a first step towards translating images to make them culturally relevant.
We build three pipelines comprising state-of-the-art generative models for this task.
We conduct a human evaluation of the translated images to assess cultural relevance and meaning preservation.
arXiv Detail & Related papers (2024-04-01T17:08:50Z) - Babel-ImageNet: Massively Multilingual Evaluation of Vision-and-Language Representations [53.89380284760555]
We introduce Babel-ImageNet, a massively multilingual benchmark that offers partial translations of ImageNet labels to 100 languages.
We evaluate 11 public multilingual CLIP models on our benchmark, demonstrating a significant gap between English ImageNet performance and that of high-resource languages.
We show that the performance of multilingual CLIP can be drastically improved for low-resource languages with parameter-efficient language-specific training.
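The evaluation summarised above amounts to zero-shot classification with class prompts in each target language. The sketch below shows that scoring step, assuming image and translated-label embeddings have already been computed with some multilingual CLIP-style encoder; the arrays and their provenance are placeholders rather than the benchmark's code.

```python
# Sketch of the zero-shot scoring step behind a Babel-ImageNet-style evaluation.
# Assumes image and translated-label embeddings were precomputed with some
# multilingual CLIP-style encoder; arrays and their provenance are placeholders.
import numpy as np

def zero_shot_accuracy(image_embs: np.ndarray,  # (n_images, d), L2-normalised
                       label_embs: np.ndarray,  # (n_classes, d), one row per translated label prompt
                       targets: np.ndarray) -> float:  # (n_images,) gold class indices
    """Assign each image to the translated label with the highest cosine similarity."""
    sims = image_embs @ label_embs.T  # cosine similarity for normalised rows
    preds = sims.argmax(axis=1)
    return float((preds == targets).mean())
```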
arXiv Detail & Related papers (2023-06-14T17:53:06Z) - Multilingual Conceptual Coverage in Text-to-Image Models [98.80343331645626]
"Conceptual Coverage Across Languages" (CoCo-CroLa) is a technique for benchmarking the degree to which any generative text-to-image system provides multilingual parity to its training language in terms of tangible nouns.
For each model we can assess "conceptual coverage" of a given target language relative to a source language by comparing the population of images generated for a series of tangible nouns in the source language to the population of images generated for each noun under translation in the target language.
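As a rough illustration of the population comparison described above, the sketch below scores a single noun by comparing the mean embedding of images generated from the source-language prompt against the mean embedding of images generated from its translation. This is a hypothetical proxy, not the metric used by CoCo-CroLa.

```python
# Hypothetical per-noun coverage proxy: compare the image population generated
# from a source-language noun with the population generated from its translation.
# This is an illustrative stand-in, not CoCo-CroLa's actual metric.
import numpy as np

def coverage_proxy(src_embs: np.ndarray,   # (n, d) embeddings of source-language generations
                   tgt_embs: np.ndarray) -> float:  # (m, d) embeddings of target-language generations
    """Cosine similarity between the mean embeddings of the two image populations."""
    src_mean = src_embs.mean(axis=0)
    tgt_mean = tgt_embs.mean(axis=0)
    denom = np.linalg.norm(src_mean) * np.linalg.norm(tgt_mean) + 1e-8
    return float(src_mean @ tgt_mean / denom)
```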
arXiv Detail & Related papers (2023-06-02T17:59:09Z) - UC2: Universal Cross-lingual Cross-modal Vision-and-Language
Pre-training [52.852163987208826]
UC2 is the first machine translation-augmented framework for cross-lingual cross-modal representation learning.
We propose two novel pre-training tasks, namely Masked Region-to-Token Modeling (MRTM) and Visual Translation Language Modeling (VTLM).
Our proposed framework achieves new state-of-the-art on diverse non-English benchmarks while maintaining comparable performance to monolingual pre-trained models on English tasks.
arXiv Detail & Related papers (2021-04-01T08:30:53Z) - Vokenization: Improving Language Understanding with Contextualized,
Visual-Grounded Supervision [110.66085917826648]
We develop a technique that extrapolates multimodal alignments to language-only data by contextually mapping language tokens to their related images.
"vokenization" is trained on relatively small image captioning datasets and we then apply it to generate vokens for large language corpora.
Trained with these contextually generated vokens, our visually-supervised language models show consistent improvements over self-supervised alternatives on multiple pure-language tasks.
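The token-to-image mapping summarised above can be pictured as nearest-neighbour retrieval over a bank of image embeddings. The sketch below shows that retrieval step under the assumption that contextual token and image embeddings share an embedding space; shapes and provenance are illustrative rather than the paper's exact setup.

```python
# Sketch of the token-to-image retrieval step behind "vokenization": every token
# is assigned the index of its most similar image ("voken") as auxiliary supervision.
# Shapes and embedding provenance are illustrative, not the paper's exact setup.
import numpy as np

def assign_vokens(token_embs: np.ndarray,   # (n_tokens, d) contextual token embeddings
                  image_embs: np.ndarray) -> np.ndarray:  # (n_images, d) image embeddings
    """Return, for each token, the index of the nearest image by cosine similarity."""
    tok = token_embs / (np.linalg.norm(token_embs, axis=1, keepdims=True) + 1e-8)
    img = image_embs / (np.linalg.norm(image_embs, axis=1, keepdims=True) + 1e-8)
    return (tok @ img.T).argmax(axis=1)
```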
arXiv Detail & Related papers (2020-10-14T02:11:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.