End-to-End Open Vocabulary Keyword Search With Multilingual Neural Representations
- URL: http://arxiv.org/abs/2308.08027v1
- Date: Tue, 15 Aug 2023 20:33:25 GMT
- Title: End-to-End Open Vocabulary Keyword Search With Multilingual Neural Representations
- Authors: Bolaji Yusuf, Jan Cernocky, Murat Saraclar
- Abstract summary: We propose a neural ASR-free keyword search model which achieves competitive performance.
We extend this work with multilingual pretraining and detailed analysis of the model.
Our experiments show that the proposed multilingual training significantly improves model performance.
- Score: 7.780766187171571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional keyword search systems operate on automatic speech recognition
(ASR) outputs, which causes them to have a complex indexing and search
pipeline. This has led to interest in ASR-free approaches to simplify the
search procedure. We recently proposed a neural ASR-free keyword search model
which achieves competitive performance while maintaining an efficient and
simplified pipeline, where queries and documents are encoded with a pair of
recurrent neural network encoders and the encodings are combined with a
dot-product. In this article, we extend this work with multilingual pretraining
and detailed analysis of the model. Our experiments show that the proposed
multilingual training significantly improves the model performance and that
despite not matching a strong ASR-based conventional keyword search system for
short queries and queries comprising in-vocabulary words, the proposed model
outperforms the ASR-based system for long queries and queries that do not
appear in the training data.
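As a rough illustration of the encoder pair and dot-product scoring described in the abstract, here is a minimal PyTorch sketch; the layer sizes, feature choices, and class name are illustrative assumptions, not the authors' configuration.
```python
import torch
import torch.nn as nn

class DualEncoderKWS(nn.Module):
    """Query and document encoders whose outputs are combined by dot product."""

    def __init__(self, n_symbols=64, n_feats=80, dim=256):
        super().__init__()
        # Query encoder: consumes the written query as a symbol sequence.
        self.query_emb = nn.Embedding(n_symbols, dim)
        self.query_rnn = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        # Document encoder: consumes acoustic features frame by frame.
        self.doc_rnn = nn.GRU(n_feats, dim, batch_first=True, bidirectional=True)

    def forward(self, query_ids, doc_feats):
        _, q_state = self.query_rnn(self.query_emb(query_ids))
        q_vec = torch.cat([q_state[0], q_state[1]], dim=-1)  # (B, 2*dim)
        d_frames, _ = self.doc_rnn(doc_feats)                # (B, T, 2*dim)
        # Dot product of the query vector with every document frame gives
        # per-frame occurrence scores; sigmoid maps them to probabilities.
        return torch.sigmoid(torch.einsum("bd,btd->bt", q_vec, d_frames))

model = DualEncoderKWS()
scores = model(torch.randint(0, 64, (1, 5)), torch.randn(1, 300, 80))
print(scores.shape)  # torch.Size([1, 300])
```
The dot-product combination is what keeps the pipeline simple: document encodings can be precomputed and stored, so a new query needs only one encoder pass plus inner products.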
Related papers
- Retrieval-Enhanced Named Entity Recognition [1.2187048691454239]
RENER is a named entity recognition technique that uses autoregressive language models with in-context learning and information retrieval.
Experimental results show that the proposed technique achieves state-of-the-art performance on the CrossNER collection.
arXiv Detail & Related papers (2024-10-17T01:12:48Z)
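A hypothetical sketch of the retrieval-plus-in-context-learning recipe described above: retrieve similar annotated sentences and use them as few-shot demonstrations. The toy word-overlap retriever and the prompt wording are assumptions, not the paper's components.
```python
def similarity(a: str, b: str) -> float:
    """Toy retriever: word-overlap similarity (a real system would use
    dense embeddings from a sentence encoder)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))

def build_ner_prompt(text: str, train_set: list[dict], k: int = 2) -> str:
    """Select the k most similar annotated examples as demonstrations."""
    demos = sorted(train_set, key=lambda ex: -similarity(text, ex["text"]))[:k]
    lines = ["Tag the named entities in the sentence."]
    for ex in demos:
        lines.append(f"Sentence: {ex['text']}\nEntities: {ex['entities']}")
    lines.append(f"Sentence: {text}\nEntities:")
    return "\n\n".join(lines)

train_set = [
    {"text": "Ada Lovelace worked with Charles Babbage.",
     "entities": "Ada Lovelace (PER), Charles Babbage (PER)"},
    {"text": "Zurich is in Switzerland.",
     "entities": "Zurich (LOC), Switzerland (LOC)"},
]
# The prompt is then sent to an autoregressive LM and the completion parsed.
print(build_ner_prompt("Alan Turing was born in London.", train_set))
```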
- Hybrid Semantic Search: Unveiling User Intent Beyond Keywords [0.0]
This paper addresses the limitations of traditional keyword-based search in understanding user intent.
It introduces a novel hybrid search approach that leverages the strengths of non-semantic search engines, Large Language Models (LLMs), and embedding models.
arXiv Detail & Related papers (2024-08-17T16:04:31Z)
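A minimal sketch of one common way to realize such a hybrid: normalize the lexical and embedding scores and blend them. The fusion weight and toy scores are assumptions; the paper's exact combination scheme may differ.
```python
def min_max(scores: dict[str, float]) -> dict[str, float]:
    """Rescale a ranker's scores to [0, 1] so the two lists are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {d: (s - lo) / span for d, s in scores.items()}

def hybrid_rank(keyword_scores, semantic_scores, alpha=0.5):
    """Blend normalized keyword (e.g. BM25) and embedding-similarity scores."""
    kw, sem = min_max(keyword_scores), min_max(semantic_scores)
    docs = kw.keys() | sem.keys()
    fused = {d: alpha * kw.get(d, 0.0) + (1 - alpha) * sem.get(d, 0.0)
             for d in docs}
    return sorted(fused.items(), key=lambda kv: -kv[1])

# d2 is mediocre lexically but strong semantically; fusion surfaces it.
print(hybrid_rank({"d1": 12.0, "d2": 3.0}, {"d2": 0.91, "d3": 0.75}))
```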
- Improving Retrieval in Sponsored Search by Leveraging Query Context Signals [6.152499434499752]
We propose an approach to enhance query understanding by augmenting queries with rich contextual signals.
We use web search titles and snippets to ground queries in real-world information and utilize GPT-4 to generate query rewrites and explanations.
Our context-aware approach substantially outperforms context-free models.
arXiv Detail & Related papers (2024-07-19T14:28:53Z)
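A hypothetical illustration of the query-augmentation idea above: concatenate the query with web-result titles/snippets and LLM-generated rewrites before retrieval. The serialization format and the stub components are assumptions standing in for web search and GPT-4.
```python
def augment_query(query: str, fetch_web_context, llm_rewrite) -> str:
    """Serialize the query together with its contextual signals."""
    titles_and_snippets = fetch_web_context(query)  # grounding in web results
    rewrites = llm_rewrite(query)                   # paraphrases / intent
    parts = [f"query: {query}"]
    parts += [f"web: {t}" for t in titles_and_snippets]
    parts += [f"rewrite: {r}" for r in rewrites]
    return " | ".join(parts)

augmented = augment_query(
    "running shoes flat feet",
    fetch_web_context=lambda q: ["Best shoes for flat feet (2024 guide)"],
    llm_rewrite=lambda q: ["stability running shoes for overpronation"],
)
print(augmented)  # fed to the retriever in place of the raw query
```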
- Large Search Model: Redefining Search Stack in the Era of LLMs [63.503320030117145]
We introduce a novel conceptual framework called large search model, which redefines the conventional search stack by unifying search tasks with one large language model (LLM).
All tasks are formulated as autoregressive text generation problems, allowing for the customization of tasks through the use of natural language prompts.
This proposed framework capitalizes on the strong language understanding and reasoning capabilities of LLMs, offering the potential to enhance search result quality while simultaneously simplifying the existing cumbersome search stack.
arXiv Detail & Related papers (2023-10-23T05:52:09Z)
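A toy illustration of the unified formulation above, where each search-stack task is just a prompt template plus text generation. The templates and the generation stub are assumptions, not the paper's prompts.
```python
# Every task is the same operation: fill a template, generate text.
TEMPLATES = {
    "rewrite": "Rewrite the query to be clearer.\nQuery: {query}\nRewrite:",
    "rerank": "Query: {query}\nPassage: {passage}\nRelevant (yes/no):",
    "summarize": "Summarize for the query '{query}':\n{passage}\nSummary:",
}

def run_search_task(task: str, generate, **fields) -> str:
    """Customize the task through a natural-language prompt, then decode."""
    return generate(TEMPLATES[task].format(**fields))

echo = lambda prompt: f"<LLM completion for: {prompt[:40]}...>"
print(run_search_task("rerank", echo, query="jaguar speed",
                      passage="The jaguar can sprint at 80 km/h."))
```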
- Improved Contextual Recognition In Automatic Speech Recognition Systems By Semantic Lattice Rescoring [4.819085609772069]
We propose a novel approach for enhancing contextual recognition within ASR systems via semantic lattice processing.
Our solution uses Hidden Markov Models and Gaussian Mixture Models (HMM-GMM) along with deep neural network (DNN) models for better accuracy.
We demonstrate the effectiveness of our proposed framework on the LibriSpeech dataset with empirical analyses.
arXiv Detail & Related papers (2023-10-14T23:16:05Z)
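A simplified sketch of the rescoring step above: each lattice edge's ASR score is combined with a weighted semantic score, and dynamic programming recovers the best path. The toy lattice, the weight, and the semantic model are assumptions; the acoustic modeling (HMM-GMM, DNN) is not shown.
```python
def best_path(lattice, start, end, semantic_score, lam=0.5):
    """lattice: {node: [(next_node, word, asr_logprob), ...]}, nodes in
    topological order. Edge score = ASR log-prob + lam * semantic score
    of the word given the words on the path so far."""
    best = {start: (0.0, [])}
    for node in sorted(lattice):
        if node not in best:
            continue
        score, words = best[node]
        for nxt, word, asr_lp in lattice[node]:
            s = score + asr_lp + lam * semantic_score(words, word)
            if nxt not in best or s > best[nxt][0]:
                best[nxt] = (s, words + [word])
    return best[end]

lattice = {0: [(1, "recognize", -0.9), (1, "wreck a nice", -1.0)],
           1: [(2, "speech", -0.6), (2, "beach", -0.4)]}
# Toy semantic model: reward "speech" when it follows "recognize".
sem = lambda ctx, w: 1.0 if ("recognize" in ctx and w == "speech") else 0.0
# Acoustics alone prefer "...beach"; the semantic term flips the decision.
print(best_path(lattice, 0, 2, sem))  # (-1.0, ['recognize', 'speech'])
```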
- Large Language Models for Information Retrieval: A Survey [58.30439850203101]
Information retrieval has evolved from term-based methods to its integration with advanced neural models.
Recent research has sought to leverage large language models (LLMs) to improve IR systems.
We delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers.
arXiv Detail & Related papers (2023-08-14T12:47:22Z)
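A skeleton of the four LLM-augmented IR roles the survey names, with trivial stand-ins for every component:
```python
def search(query, rewrite, retrieve, rerank, read):
    q = rewrite(query)        # query rewriter: clarify or expand the query
    docs = retrieve(q)        # retriever: fetch candidate documents
    docs = rerank(q, docs)    # reranker: reorder candidates by relevance
    return read(q, docs)      # reader: synthesize an answer from the docs

answer = search(
    "who proposed lattice rescoring?",
    rewrite=lambda q: q.rstrip("?"),
    retrieve=lambda q: ["doc-a", "doc-b"],
    rerank=lambda q, ds: sorted(ds),
    read=lambda q, ds: f"answer grounded in {ds[0]}",
)
print(answer)
```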
- Synergistic Interplay between Search and Large Language Models for Information Retrieval [141.18083677333848]
InteR allows retrieval models (RMs) to expand queries using LLM-generated knowledge collections.
InteR achieves overall superior zero-shot retrieval performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-12T11:58:15Z)
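A hypothetical sketch of the interplay above: the retrieval model expands the query with LLM-generated knowledge, while the LLM is conditioned on the retrieved documents. Both components are stubs, and the number of rounds is an assumption.
```python
def inter_loop(query, retrieve, llm_knowledge, rounds=2):
    """Alternate between LLM knowledge generation and retrieval."""
    docs, knowledge = [], ""
    for _ in range(rounds):
        knowledge = llm_knowledge(query, docs)   # LLM reads retrieved docs
        docs = retrieve(f"{query} {knowledge}")  # RM uses the LLM knowledge
    return docs

docs = inter_loop(
    "keyword search",
    retrieve=lambda q: [f"doc matching '{q[:40]}'"],
    llm_knowledge=lambda q, d: "open-vocabulary spoken term detection",
)
print(docs)
```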
- Enriching Relation Extraction with OpenIE [70.52564277675056]
Relation extraction (RE) is a sub-discipline of information extraction (IE).
In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE.
Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models.
arXiv Detail & Related papers (2022-12-19T11:26:23Z)
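One plausible way to use OpenIE output for RE, sketched with a stub extractor: append extracted (subject, relation, object) triples to the input before classification. This is an assumption about the mechanism, not the paper's exact model.
```python
def enrich_for_re(sentence, openie_extract):
    """Append OpenIE triples as textual hints for the RE classifier."""
    triples = openie_extract(sentence)
    hints = " ".join(f"[{s}|{r}|{o}]" for s, r, o in triples)
    # The enriched string is what the RE model actually sees as input.
    return f"{sentence} {hints}"

stub_openie = lambda s: [("Marie Curie", "won", "the Nobel Prize")]
print(enrich_for_re("Marie Curie won the Nobel Prize in 1903.", stub_openie))
```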
- KRLS: Improving End-to-End Response Generation in Task Oriented Dialog with Reinforced Keywords Learning [25.421649004269373]
In task-oriented dialogs (TOD), reinforcement learning algorithms train a model to directly optimize response for task-related metrics.
We investigate an approach to create a more efficient RL-based algorithm to improve TOD performance in an offline setting.
Experiments on the MultiWoZ dataset show our new training algorithm, Keywords Reinforcement Learning with Next-word Sampling (KRLS), achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-11-30T06:27:46Z)
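A heavily simplified sketch in the spirit of keyword-reinforced training: per-token rewards upweight task-critical keywords in the generation loss. The reward scheme here is an assumption and omits the paper's next-word sampling procedure.
```python
import torch
import torch.nn.functional as F

def keyword_weighted_loss(logits, targets, keyword_mask, kw_reward=2.0):
    """logits: (B, T, V); targets: (B, T); keyword_mask: 1 where the target
    token belongs to a task keyword (e.g. a slot value), else 0."""
    nll = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
    # Keyword tokens get a larger reward, hence a larger gradient.
    weights = 1.0 + (kw_reward - 1.0) * keyword_mask.float()
    return (weights * nll).mean()

logits = torch.randn(2, 6, 100, requires_grad=True)
targets = torch.randint(0, 100, (2, 6))
kw_mask = torch.tensor([[0, 0, 1, 1, 0, 0], [0, 1, 0, 0, 0, 0]])
loss = keyword_weighted_loss(logits, targets, kw_mask)
loss.backward()
print(float(loss))
```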
- CorpusBrain: Pre-train a Generative Retrieval Model for Knowledge-Intensive Language Tasks [62.22920673080208]
A single-step generative model can dramatically simplify the search process and be optimized in an end-to-end manner.
We name the pre-trained generative retrieval model CorpusBrain, since all information about the corpus is encoded in its parameters without the need to construct an additional index.
arXiv Detail & Related papers (2022-08-16T10:22:49Z)
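A toy illustration of index-free generative retrieval as described above: the "index" is the model itself, which maps a query directly to a document identifier. A word-level lookup stands in for the pre-trained generative model.
```python
class TinyGenerativeRetriever:
    def __init__(self, corpus: dict[str, str]):
        # "Pre-training": memorize which document each word comes from.
        self.word_to_doc = {w: doc_id for doc_id, text in corpus.items()
                            for w in text.lower().split()}

    def generate_identifier(self, query: str) -> str:
        """Emit a document identifier directly, with no separate index."""
        votes = [self.word_to_doc[w] for w in query.lower().split()
                 if w in self.word_to_doc]
        return max(set(votes), key=votes.count) if votes else "<none>"

corpus = {"D1": "open vocabulary keyword search",
          "D2": "named entity recognition with retrieval"}
print(TinyGenerativeRetriever(corpus).generate_identifier("keyword search"))
```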
- Autoregressive Search Engines: Generating Substrings as Document Identifiers [53.0729058170278]
Autoregressive language models are emerging as the de facto standard for generating answers.
Previous work has explored ways to partition the search space into hierarchical structures.
In this work we propose an alternative that doesn't force any structure in the search space: using all n-grams in a passage as its possible identifiers.
arXiv Detail & Related papers (2022-04-22T10:45:01Z)
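A minimal sketch of substring-as-identifier retrieval: any n-gram occurring in a passage can identify it. A real system constrains LM decoding (e.g. with an FM-index) so only n-grams present in the corpus can be generated; a plain substring scan stands in for that machinery here.
```python
def ngram_retrieve(generated_ngram: str, passages: dict[str, str]):
    """Return ids of all passages containing the generated n-gram."""
    needle = generated_ngram.lower()
    return [pid for pid, text in passages.items() if needle in text.lower()]

passages = {"P1": "Keyword search systems operate on ASR outputs.",
            "P2": "Autoregressive models generate answers token by token."}
# In this style of retrieval the LM would *generate* the n-gram; we fix it.
print(ngram_retrieve("operate on ASR", passages))  # ['P1']
```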
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.