Auto-MLM: Improved Contrastive Learning for Self-supervised
Multi-lingual Knowledge Retrieval
- URL: http://arxiv.org/abs/2203.16187v1
- Date: Wed, 30 Mar 2022 10:13:57 GMT
- Title: Auto-MLM: Improved Contrastive Learning for Self-supervised
Multi-lingual Knowledge Retrieval
- Authors: Wenshen Xu, Mieradilijiang Maimaiti, Yuanhang Zheng, Xin Tang and Ji
Zhang
- Abstract summary: We introduce a joint training method by combining CL and Auto-MLM for self-supervised multi-lingual knowledge retrieval.
Experimental results show that the proposed approach consistently outperforms all previous SOTA methods on both the AliExpress & LAZADA service corpus and openly available corpora in 8 languages.
- Score: 7.73633850933515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning (CL) has become a ubiquitous approach for several
natural language processing (NLP) downstream tasks, especially question
answering (QA). However, a major challenge remains unresolved: how to
efficiently train a knowledge retrieval model in an unsupervised manner.
Recently, the commonly used methods combine CL with a masked language model
(MLM). Yet MLM ignores sentence-level training, while CL neglects the
extraction of internal information from the query. Because CL alone can hardly
obtain internal information from the original query, we introduce a joint
training method that combines CL and Auto-MLM for self-supervised
multi-lingual knowledge retrieval. First, we acquire a fixed-dimensional
sentence vector. Then, we mask some words in the original sentences with a
random strategy. Finally, we generate new token representations for predicting
the masked tokens. Experimental results show that our proposed approach
consistently outperforms all previous SOTA methods on both the AliExpress &
LAZADA service corpus and openly available corpora in 8 languages.
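The abstract names the two branches of the objective but gives no implementation details. Below is a minimal Python sketch of how such a joint CL + Auto-MLM loss could be wired together; the toy Encoder, the mean-pooled sentence vector, the 15% masking ratio, the InfoNCE temperature, and the equal loss weighting are all illustrative assumptions, not the authors' released implementation.

# Minimal sketch of a joint CL + Auto-MLM objective (assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE, HIDDEN, PAD_ID, MASK_ID = 30000, 256, 0, 1

class Encoder(nn.Module):
    """Toy token encoder standing in for a multilingual pretrained model."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB_SIZE, HIDDEN, padding_idx=PAD_ID)
        layer = nn.TransformerEncoderLayer(HIDDEN, nhead=4, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=2)
        self.mlm_head = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, ids):
        h = self.body(self.emb(ids))           # per-token representations
        return h, h.mean(dim=1)                # fixed-dimensional sentence vector

def mask_tokens(ids, ratio=0.15):
    """Randomly replace a fraction of tokens with [MASK] (random strategy)."""
    labels = ids.clone()
    chosen = (torch.rand(ids.shape) < ratio) & (ids != PAD_ID)
    labels[~chosen] = -100                     # only score masked positions
    return ids.masked_fill(chosen, MASK_ID), labels

def info_nce(q, k, temperature=0.05):
    """In-batch contrastive loss pulling matching sentence vectors together."""
    q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
    return F.cross_entropy(q @ k.t() / temperature, torch.arange(q.size(0)))

encoder = Encoder()
query = torch.randint(2, VOCAB_SIZE, (8, 32))      # toy query batch
positive = torch.randint(2, VOCAB_SIZE, (8, 32))   # toy matching passages

# CL branch: contrast query / positive sentence vectors against in-batch negatives.
_, q_vec = encoder(query)
_, p_vec = encoder(positive)
cl_loss = info_nce(q_vec, p_vec)

# Auto-MLM branch: predict the randomly masked tokens of the original query.
masked_ids, labels = mask_tokens(query)
token_repr, _ = encoder(masked_ids)
mlm_loss = F.cross_entropy(encoder.mlm_head(token_repr).reshape(-1, VOCAB_SIZE),
                           labels.reshape(-1), ignore_index=-100)

(cl_loss + mlm_loss).backward()                    # joint objective

In this sketch the two losses are simply summed; how the real system weights them, pools sentence vectors, or selects negatives is not specified in the abstract.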
Related papers
- Code-mixed LLM: Improve Large Language Models' Capability to Handle Code-Mixing through Reinforcement Learning from AI Feedback [11.223762031003671]
Code-mixing introduces unique challenges in daily life, such as syntactic mismatches and semantic blending.
Large language models (LLMs) have revolutionized the field of natural language processing (NLP) by offering unprecedented capabilities in understanding human languages.
We propose to improve the multilingual LLMs' ability to understand code-mixing through reinforcement learning from human feedback (RLHF) and code-mixed machine translation tasks.
arXiv Detail & Related papers (2024-11-13T22:56:00Z) - A Simple yet Effective Training-free Prompt-free Approach to Chinese Spelling Correction Based on Large Language Models [39.35525969831397]
This work proposes a simple training-free prompt-free approach to leverage large language models (LLMs) for the Chinese spelling correction (CSC) task.
Experiments on five public datasets demonstrate that our approach significantly improves LLM performance.
arXiv Detail & Related papers (2024-10-05T04:06:56Z) - Few-Shot Cross-Lingual Transfer for Prompting Large Language Models in
Low-Resource Languages [0.0]
"prompting" is where a user provides a description of a task and some completed examples of the task to a PLM as context before prompting the PLM to perform the task on a new example.
We consider three methods: few-shot prompting (prompt), language-adaptive fine-tuning (LAFT), and neural machine translation (translate)
We find that translate and prompt settings are a compute-efficient and cost-effective method of few-shot prompting for the selected low-resource languages.
arXiv Detail & Related papers (2024-03-09T21:36:13Z) - Learning to Prompt with Text Only Supervision for Vision-Language Models [107.282881515667]
One branch of methods adapts CLIP by learning prompts using visual information.
An alternative approach resorts to training-free methods by generating class descriptions from large language models.
We propose to combine the strengths of both streams by learning prompts using only text data.
arXiv Detail & Related papers (2024-01-04T18:59:49Z) - Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z) - LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language
Models [56.25156596019168]
This paper introduces the LMRL-Gym benchmark for evaluating multi-turn RL for large language models (LLMs)
Our benchmark consists of 8 different language tasks, which require multiple rounds of language interaction and cover a range of tasks in open-ended dialogue and text games.
arXiv Detail & Related papers (2023-11-30T03:59:31Z) - Automatic Smart Contract Comment Generation via Large Language Models
and In-Context Learning [11.52122354673779]
In this study, we propose an approach SCCLLM based on large language models (LLMs) and in-context learning.
Specifically, in the demonstration selection phase, SCCLLM retrieves the top-k code snippets from the historical corpus.
In the in-context learning phase, SCCLLM utilizes the retrieved code snippets as demonstrations.
arXiv Detail & Related papers (2023-11-17T08:31:09Z) - Modeling Sequential Sentence Relation to Improve Cross-lingual Dense
Retrieval [87.11836738011007]
We propose a multilingual language model called the masked sentence model (MSM).
MSM consists of a sentence encoder to generate the sentence representations, and a document encoder applied to a sequence of sentence vectors from a document.
To train the model, we propose a masked sentence prediction task, which masks and predicts the sentence vector via a hierarchical contrastive loss with sampled negatives.
arXiv Detail & Related papers (2023-02-03T09:54:27Z) - Bridging the Gap between Language Models and Cross-Lingual Sequence
Labeling [101.74165219364264]
Large-scale cross-lingual pre-trained language models (xPLMs) have shown effectiveness in cross-lingual sequence labeling tasks.
Despite the great success, we draw an empirical observation that there is a training objective gap between pre-training and fine-tuning stages.
In this paper, we first design a pre-training task tailored for xSL named Cross-lingual Language Informative Span Masking (CLISM) to eliminate the objective gap.
Second, we present ContrAstive-Consistency Regularization (CACR), which utilizes contrastive learning to encourage consistency between the representations of input parallel sequences.
arXiv Detail & Related papers (2022-04-11T15:55:20Z) - Universal Sentence Representation Learning with Conditional Masked
Language Model [7.334766841801749]
We present Conditional Masked Language Modeling (CMLM) to effectively learn sentence representations.
Our English CMLM model achieves state-of-the-art performance on SentEval.
As a fully unsupervised learning method, CMLM can be conveniently extended to a broad range of languages and domains.
arXiv Detail & Related papers (2020-12-28T18:06:37Z) - Reusing a Pretrained Language Model on Languages with Limited Corpora
for Unsupervised NMT [129.99918589405675]
We present an effective approach that reuses an LM that is pretrained only on the high-resource language.
The monolingual LM is fine-tuned on both languages and is then used to initialize a UNMT model.
Our approach, RE-LM, outperforms a competitive cross-lingual pretraining model (XLM) in English-Macedonian (En-Mk) and English-Albanian (En-Sq)
arXiv Detail & Related papers (2020-09-16T11:37:10Z)