Modeling Sequential Sentence Relation to Improve Cross-lingual Dense
Retrieval
- URL: http://arxiv.org/abs/2302.01626v1
- Date: Fri, 3 Feb 2023 09:54:27 GMT
- Title: Modeling Sequential Sentence Relation to Improve Cross-lingual Dense
Retrieval
- Authors: Shunyu Zhang, Yaobo Liang, Ming Gong, Daxin Jiang, Nan Duan
- Abstract summary: We propose a multilingual pre-trained language model called masked sentence model (MSM).
MSM consists of a sentence encoder to generate the sentence representations, and a document encoder applied to a sequence of sentence vectors from a document.
To train the model, we propose a masked sentence prediction task, which masks and predicts the sentence vector via a hierarchical contrastive loss with sampled negatives.
- Score: 87.11836738011007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, multilingual pre-trained language models (PLMs) such as mBERT and
XLM-R have made impressive strides in cross-lingual dense retrieval.
Despite these successes, they are general-purpose PLMs, while multilingual PLMs
tailored for cross-lingual retrieval remain unexplored. Motivated by an
observation that the sentences in parallel documents are approximately in the
same order, which is universal across languages, we propose to model this
sequential sentence relation to facilitate cross-lingual representation
learning. Specifically, we propose a multilingual PLM called masked sentence
model (MSM), which consists of a sentence encoder to generate the sentence
representations, and a document encoder applied to a sequence of sentence
vectors from a document. The document encoder is shared for all languages to
model the universal sequential sentence relation across languages. To train the
model, we propose a masked sentence prediction task, which masks and predicts
the sentence vector via a hierarchical contrastive loss with sampled negatives.
Comprehensive experiments on four cross-lingual retrieval tasks show MSM
significantly outperforms existing advanced pre-training models, demonstrating
the effectiveness and stronger cross-lingual retrieval capabilities of our
approach. Code and model will be available.
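Since the code had not been released at the time of this abstract, the sketch below illustrates the masked sentence prediction objective described above in PyTorch. The module layout, dimensions, and the single-level InfoNCE-style loss (standing in for the paper's hierarchical contrastive loss) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of MSM-style masked sentence prediction (not the authors' code).
# Assumptions: a sentence encoder produces one vector per sentence; a Transformer
# document encoder (shared across languages) reads the sequence of sentence
# vectors with one position replaced by a learned [MASK] vector; the prediction
# at the masked position is scored against the true sentence vector and sampled
# negatives with a single-level contrastive loss (the paper uses a hierarchical
# contrastive loss).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedSentenceModel(nn.Module):
    def __init__(self, dim=768, n_layers=2, n_heads=12):
        super().__init__()
        self.mask_vec = nn.Parameter(torch.randn(dim))  # learned [MASK] sentence vector
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.doc_encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, sent_vecs, mask_pos):
        # sent_vecs: (batch, n_sents, dim) sentence vectors from the sentence encoder
        # mask_pos:  (batch,) index of the sentence masked in each document
        x = sent_vecs.clone()
        x[torch.arange(x.size(0)), mask_pos] = self.mask_vec
        h = self.doc_encoder(x)
        return h[torch.arange(h.size(0)), mask_pos]  # predicted vector at the masked slot

def contrastive_loss(pred, target, negatives, temperature=0.05):
    # pred, target: (batch, dim); negatives: (batch, n_neg, dim) sampled negative sentences
    pred, target, negatives = map(lambda t: F.normalize(t, dim=-1), (pred, target, negatives))
    pos = (pred * target).sum(-1, keepdim=True)           # (batch, 1) positive score
    neg = torch.einsum("bd,bnd->bn", pred, negatives)     # (batch, n_neg) negative scores
    logits = torch.cat([pos, neg], dim=1) / temperature
    labels = torch.zeros(pred.size(0), dtype=torch.long, device=pred.device)  # positive at index 0
    return F.cross_entropy(logits, labels)
```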
Related papers
- Soft Language Clustering for Multilingual Model Pre-training [57.18058739931463]
We propose XLM-P, which contextually retrieves prompts as flexible guidance for encoding instances conditionally.
Our XLM-P enables (1) lightweight modeling of language-invariant and language-specific knowledge across languages, and (2) easy integration with other multilingual pre-training methods.
arXiv Detail & Related papers (2023-06-13T08:08:08Z) - Bridging Cross-Lingual Gaps During Leveraging the Multilingual
Sequence-to-Sequence Pretraining for Text Generation [80.16548523140025]
We extend the vanilla pretrain-finetune pipeline with an extra code-switching restore task to bridge the gap between the pretrain and finetune stages.
Our approach could narrow the cross-lingual sentence representation distance and improve low-frequency word translation with trivial computational cost.
arXiv Detail & Related papers (2022-04-16T16:08:38Z) - On Cross-Lingual Retrieval with Multilingual Text Encoders [51.60862829942932]
We study the suitability of state-of-the-art multilingual encoders for cross-lingual document and sentence retrieval tasks.
We benchmark their performance in unsupervised ad-hoc sentence- and document-level CLIR experiments.
We evaluate multilingual encoders fine-tuned in a supervised fashion (i.e., learning to rank) on English relevance data in a series of zero-shot language and domain transfer CLIR experiments.
arXiv Detail & Related papers (2021-12-21T08:10:27Z) - Mixed Attention Transformer for Leveraging Word-Level Knowledge to Neural
Cross-Lingual Information Retrieval [15.902630454568811]
We propose a novel Mixed Attention Transformer (MAT) that incorporates external word-level knowledge, such as a dictionary or translation table.
By encoding the translation knowledge into an attention matrix, the model with MAT is able to focus on the mutually translated words in the input sequence.
arXiv Detail & Related papers (2021-09-07T00:33:14Z) - Universal Sentence Representation Learning with Conditional Masked
Language Model [7.334766841801749]
We present Conditional Masked Language Modeling (CMLM) to effectively learn sentence representations.
Our English CMLM model achieves state-of-the-art performance on SentEval.
As a fully unsupervised learning method, CMLM can be conveniently extended to a broad range of languages and domains.
arXiv Detail & Related papers (2020-12-28T18:06:37Z) - FILTER: An Enhanced Fusion Method for Cross-lingual Language
Understanding [85.29270319872597]
We propose an enhanced fusion method that takes cross-lingual data as input for XLM finetuning.
During inference, the model makes predictions based on the text input in the target language and its translation in the source language.
Since the labels for the translated text in the target language may not match those of the source, we propose an additional KL-divergence self-teaching loss for model training, based on auto-generated soft pseudo-labels for translated text in the target language.
arXiv Detail & Related papers (2020-09-10T22:42:15Z) - Pre-training via Paraphrasing [96.79972492585112]
We introduce MARGE, a pre-trained sequence-to-sequence model learned with an unsupervised multi-lingual paraphrasing objective.
We show it is possible to jointly learn to do retrieval and reconstruction, given only a random initialization.
For example, with no additional task-specific training we achieve BLEU scores of up to 35.8 for document translation.
arXiv Detail & Related papers (2020-06-26T14:43:43Z)