Improving Domain-Specific Retrieval by NLI Fine-Tuning
- URL: http://arxiv.org/abs/2308.03103v1
- Date: Sun, 6 Aug 2023 12:40:58 GMT
- Title: Improving Domain-Specific Retrieval by NLI Fine-Tuning
- Authors: Roman Du\v{s}ek, Aleksander Wawer, Christopher Galias, Lidia
Wojciechowska
- Abstract summary: This article investigates the fine-tuning potential of natural language inference (NLI) data to improve information retrieval and ranking.
We employ both monolingual and multilingual sentence encoders fine-tuned by a supervised method utilizing contrastive loss and NLI data.
Our results show that NLI fine-tuning increases model performance in both tasks and both languages, with the potential to improve both mono- and multilingual models.
- Score: 64.79760042717822
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The aim of this article is to investigate the fine-tuning potential of
natural language inference (NLI) data to improve information retrieval and
ranking. We demonstrate this for both English and Polish languages, using data
from one of the largest Polish e-commerce sites and selected open-domain
datasets. We employ both monolingual and multilingual sentence encoders
fine-tuned by a supervised method utilizing contrastive loss and NLI data. Our
results show that NLI fine-tuning increases the performance of the models in
both tasks and both languages, with the potential to improve mono- and
multilingual models. Finally, we investigate the uniformity and alignment of
the embeddings to explain the effect of NLI-based fine-tuning for an
out-of-domain use case.
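As a concrete illustration of the approach described in the abstract, the sketch below fine-tunes a sentence encoder on NLI triples with a contrastive (in-batch negatives) loss and then computes the alignment and uniformity metrics of Wang & Isola (2020). This is a minimal sketch, not the authors' code: the checkpoint name, the toy English/Polish triples, and all hyperparameters are illustrative assumptions, and the objective shown is sentence-transformers' MultipleNegativesRankingLoss rather than necessarily the exact loss used in the paper.

```python
# Minimal sketch of contrastive NLI fine-tuning of a sentence encoder, plus the
# alignment/uniformity metrics mentioned in the abstract. Checkpoint, data, and
# hyperparameters below are assumptions, not values from the paper.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Any mono- or multilingual sentence encoder could stand in here.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# NLI triples: (premise, entailed hypothesis, contradicted hypothesis).
# The entailment pair is the positive; the contradiction and other in-batch
# sentences serve as negatives for the contrastive loss.
train_examples = [
    InputExample(texts=["A man is playing a guitar.",
                        "Someone is making music.",
                        "The room is completely silent."]),
    InputExample(texts=["Kobieta czyta książkę.",  # "A woman is reading a book."
                        "Ktoś czyta.",             # "Someone is reading."
                        "Nikt nic nie czyta."]),   # "No one is reading anything."
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model)  # InfoNCE-style contrastive loss

model.fit(train_objectives=[(train_dataloader, train_loss)],
          epochs=1, warmup_steps=10, show_progress_bar=False)

# Alignment: mean squared distance between embeddings of positive pairs.
def alignment(x, y):
    return (x - y).norm(dim=1).pow(2).mean()

# Uniformity: log of the mean Gaussian potential over all embedding pairs.
def uniformity(x, t=2):
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

premises = ["A man is playing a guitar.", "Kobieta czyta książkę."]
hypotheses = ["Someone is making music.", "Ktoś czyta."]
emb_p = F.normalize(model.encode(premises, convert_to_tensor=True), dim=1)
emb_h = F.normalize(model.encode(hypotheses, convert_to_tensor=True), dim=1)
print("alignment:", alignment(emb_p, emb_h).item())
print("uniformity:", uniformity(torch.cat([emb_p, emb_h])).item())
```

Both quantities are losses in Wang & Isola's formulation, so lower values are better; the paper inspects them to explain the out-of-domain effect of NLI-based fine-tuning.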
Related papers
- sPhinX: Sample Efficient Multilingual Instruction Fine-Tuning Through N-shot Guided Prompting [29.63634707674839]
We introduce a novel recipe for creating a multilingual synthetic instruction tuning dataset, sPhinX.
sPhinX is created by selectively translating instruction response pairs from English into 50 languages.
We test the effectiveness of sPhinX by using it to fine-tune two state-of-the-art models, Mistral-7B and Phi-Small.
arXiv Detail & Related papers (2024-07-13T13:03:45Z)
- Improving Multilingual Instruction Finetuning via Linguistically Natural and Diverse Datasets [38.867815476721894]
Most Instruction Fine-Tuning (IFT) datasets are predominantly in English, limiting model performance in other languages.
Traditional methods for creating multilingual IFT datasets struggle to capture linguistic nuances and ensure prompt (instruction) diversity.
We propose a novel method for collecting multilingual IFT datasets that preserves linguistic naturalness and ensures prompt diversity.
arXiv Detail & Related papers (2024-07-01T23:47:09Z)
- MSciNLI: A Diverse Benchmark for Scientific Natural Language Inference [65.37685198688538]
This paper presents MSciNLI, a dataset containing 132,320 sentence pairs extracted from five new scientific domains.
We establish strong baselines on MSciNLI by fine-tuning Pre-trained Language Models (PLMs) and prompting Large Language Models (LLMs).
We show that domain shift degrades the performance of scientific NLI models, which demonstrates the diverse characteristics of the different domains in our dataset.
arXiv Detail & Related papers (2024-04-11T18:12:12Z)
- Cross-lingual Transfer or Machine Translation? On Data Augmentation for Monolingual Semantic Textual Similarity [2.422759879602353]
Cross-lingual transfer of Wikipedia data improves performance on monolingual STS.
We find that the Wikipedia domain outperforms the NLI domain for these languages, in contrast to prior studies that focused on NLI as training data.
arXiv Detail & Related papers (2024-03-08T12:28:15Z)
- Improving Polish to English Neural Machine Translation with Transfer Learning: Effects of Data Volume and Language Similarity [2.4674086273775035]
We investigate the impact of data volume and the use of similar languages on transfer learning in a machine translation task.
We fine-tune an mBART model for a Polish-English translation task using the OPUS-100 dataset.
Our experiments show that a combination of related languages and larger amounts of data outperforms the model trained on related languages or larger amounts of data alone.
arXiv Detail & Related papers (2023-06-01T13:34:21Z)
- WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation [101.00109827301235]
We introduce a novel paradigm for dataset creation based on human and machine collaboration.
We use dataset cartography to automatically identify examples that demonstrate challenging reasoning patterns, and instruct GPT-3 to compose new examples with similar patterns.
The resulting dataset, WANLI, consists of 108,357 natural language inference (NLI) examples that present unique empirical strengths.
arXiv Detail & Related papers (2022-01-16T03:13:49Z)
- Efficient Nearest Neighbor Language Models [114.40866461741795]
Non-parametric neural language models (NLMs) learn predictive distributions of text utilizing an external datastore.
We show how to achieve up to a 6x inference speed-up while retaining comparable performance (a minimal sketch of the underlying kNN-LM interpolation appears after this list).
arXiv Detail & Related papers (2021-09-09T12:32:28Z)
- Mixed-Lingual Pre-training for Cross-lingual Summarization [54.4823498438831]
Cross-lingual Summarization aims at producing a summary in the target language for an article in the source language.
We propose a solution based on mixed-lingual pre-training that leverages both cross-lingual tasks like translation and monolingual tasks like masked language models.
Our model achieves an improvement of 2.82 (English to Chinese) and 1.15 (Chinese to English) ROUGE-1 scores over state-of-the-art results.
arXiv Detail & Related papers (2020-10-18T00:21:53Z)
- OCNLI: Original Chinese Natural Language Inference [21.540733910984006]
We present the first large-scale NLI dataset (consisting of 56,000 annotated sentence pairs) for Chinese, called the Original Chinese Natural Language Inference dataset (OCNLI).
Unlike recent attempts at extending NLI to other languages, our dataset does not rely on any automatic translation or non-expert annotation.
We establish several baseline results on our dataset using state-of-the-art pre-trained models for Chinese, and find even the best performing models to be far outpaced by human performance.
arXiv Detail & Related papers (2020-10-12T04:25:48Z)
- Dynamic Data Selection and Weighting for Iterative Back-Translation [116.14378571769045]
We propose a curriculum learning strategy for iterative back-translation models.
We evaluate our models on domain adaptation, low-resource, and high-resource MT settings.
Experimental results demonstrate that our methods achieve improvements of up to 1.8 BLEU points over competitive baselines.
arXiv Detail & Related papers (2020-04-07T19:49:58Z)
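For the nearest-neighbor language-model entry above, the following is a minimal sketch of the basic kNN-LM interpolation that such non-parametric models build on: a next-token distribution induced by neighbors retrieved from an external datastore is mixed with the parametric LM's distribution. The shapes, the toy random datastore, and the interpolation weight are assumptions for illustration; the cited paper's contribution (making this retrieval step much faster) is not reproduced here.

```python
# Sketch of the kNN-LM interpolation: mix a neighbor-induced next-token
# distribution from an external datastore with the parametric LM's distribution.
import torch
import torch.nn.functional as F

def knn_lm_probs(query, keys, values, lm_probs, vocab_size, k=4, lam=0.25, temp=1.0):
    """query: (d,) context vector; keys: (N, d) datastore keys; values: (N,) next-token ids."""
    dists = ((keys - query) ** 2).sum(dim=1)       # squared L2 distance to every datastore key
    neg_dists, idx = torch.topk(-dists, k)         # k nearest neighbors
    weights = F.softmax(neg_dists / temp, dim=0)   # softmax over negative distances
    p_knn = torch.zeros(vocab_size)
    p_knn.index_add_(0, values[idx], weights)      # accumulate neighbor mass per vocabulary item
    return lam * p_knn + (1.0 - lam) * lm_probs    # interpolate with the parametric LM

# Toy usage with random tensors.
d, N, V = 8, 100, 50
keys, values = torch.randn(N, d), torch.randint(0, V, (N,))
query = torch.randn(d)
lm_probs = F.softmax(torch.randn(V), dim=0)
mixed = knn_lm_probs(query, keys, values, lm_probs, V)
print(mixed.sum())  # sums to ~1.0
```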
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.