LLMs for Test Input Generation for Semantic Caches
- URL: http://arxiv.org/abs/2401.08138v1
- Date: Tue, 16 Jan 2024 06:16:33 GMT
- Title: LLMs for Test Input Generation for Semantic Caches
- Authors: Zafaryab Rasool, Scott Barnett, David Willie, Stefanus Kurniawan,
Sherwin Balugo, Srikanth Thudumu, Mohamed Abdelrazek
- Abstract summary: Large language models (LLMs) enable state-of-the-art semantic capabilities to be added to software systems.
At scale, the cost of serving thousands of users increases massively and also degrades user experience.
We present VaryGen, an approach that uses LLMs for test input generation, producing similar questions from unstructured text documents.
- Score: 1.8628177380024746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) enable state-of-the-art semantic
capabilities to be added to software systems, such as semantic search over
unstructured documents and text generation. However, these models are
computationally expensive: at scale, the cost of serving thousands of users
increases massively and also degrades user experience. To address this
problem, semantic caches are used to answer similar queries (which may have
been phrased differently) without hitting the LLM service. Because these
semantic caching techniques rely on query embeddings, there is a high chance
of errors that undermine user confidence in the system. Adopting a semantic
cache therefore usually requires testing its effectiveness (accurate cache
hits and misses), which in turn requires a labelled test set of similar
queries and responses that is often unavailable. In this paper, we present
VaryGen, an approach that uses LLMs for test input generation, producing
similar questions from unstructured text documents. Our novel approach uses
the reasoning capabilities of LLMs to 1) adapt queries to the domain, 2)
synthesise subtle variations of queries, and 3) evaluate the synthesised test
dataset. We evaluated our approach in the domain of a student
question-and-answer system by qualitatively analysing 100 generated query and
result pairs and by conducting an empirical case study with an open-source
semantic cache. Our results show that the query pairs satisfy human
expectations of similarity and that our generated data demonstrates failure
cases of a semantic cache. We additionally evaluate our approach on the
Qasper dataset. This work is an important first step towards test input
generation for semantic applications and presents considerations for
practitioners when calibrating a semantic cache.
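The abstract's premise is that a semantic cache embeds each incoming query, compares it against previously answered queries, and returns a cached response when similarity crosses a threshold. The sketch below is a minimal illustration of that hit/miss decision, not the paper's implementation; the `embed` callable, the cosine similarity measure, the 0.9 threshold, and the in-memory list of entries are assumptions made for the example.

```python
# Minimal sketch of an embedding-based semantic cache (illustrative only).
# The embedding model, similarity threshold, and storage are assumptions,
# not the configuration studied in the paper.
import math
from dataclasses import dataclass, field


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0


@dataclass
class SemanticCache:
    embed: callable                               # query -> embedding vector
    threshold: float = 0.9                        # similarity needed for a hit
    entries: list = field(default_factory=list)   # (embedding, response) pairs

    def lookup(self, query: str):
        """Return a cached response if a semantically similar query was seen."""
        q_emb = self.embed(query)
        best_sim, best_resp = 0.0, None
        for emb, resp in self.entries:
            sim = cosine(q_emb, emb)
            if sim > best_sim:
                best_sim, best_resp = sim, resp
        if best_sim >= self.threshold:
            return best_resp                      # cache hit
        return None                               # cache miss: call the LLM

    def store(self, query: str, response: str) -> None:
        self.entries.append((self.embed(query), response))
```

False misses (equivalent queries falling below the threshold) and false hits (unrelated queries exceeding it) are exactly the errors the abstract warns about, and measuring them requires the labelled set of similar query pairs that VaryGen is designed to produce.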
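VaryGen's three steps are only named in the abstract: adapt queries to the domain, synthesise subtle variations, and evaluate the synthesised dataset. The code below is a hedged sketch of how such a pipeline could be driven by an LLM; the prompts, the `complete(prompt)` helper standing in for any completion API, and the yes/no judging step are illustrative assumptions, not the authors' actual method.

```python
# Hedged sketch of an LLM-driven pipeline for generating similar-query test
# inputs from source documents.  Prompts and the `complete(prompt)` helper
# (any text-completion API) are assumptions made for illustration.

def generate_seed_questions(document: str, complete, n: int = 5) -> list[str]:
    """Step 1: adapt queries to the domain by deriving questions from the text."""
    prompt = (
        f"Read the following document and write {n} questions a user of this "
        f"domain might ask, one per line:\n\n{document}"
    )
    return [q.strip() for q in complete(prompt).splitlines() if q.strip()]


def synthesise_variations(question: str, complete, k: int = 3) -> list[str]:
    """Step 2: synthesise subtle rephrasings that preserve the meaning."""
    prompt = (
        f"Rewrite the question below in {k} subtly different ways without "
        f"changing its meaning, one per line:\n\n{question}"
    )
    return [v.strip() for v in complete(prompt).splitlines() if v.strip()]


def judge_pair(original: str, variant: str, complete) -> bool:
    """Step 3: evaluate a synthesised pair, here with an LLM-as-judge check."""
    prompt = (
        "Do these two questions ask for the same information? Answer yes or no.\n"
        f"Q1: {original}\nQ2: {variant}"
    )
    return complete(prompt).strip().lower().startswith("yes")


def build_test_set(document: str, complete) -> list[tuple[str, str]]:
    pairs = []
    for seed in generate_seed_questions(document, complete):
        for variant in synthesise_variations(seed, complete):
            if judge_pair(seed, variant, complete):
                pairs.append((seed, variant))     # labelled "similar" pair
    return pairs
```

Replaying each (seed, variant) pair against a semantic cache, storing the response for the seed and checking whether the variant produces a hit, is one way to surface the failure cases the abstract reports.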
Related papers
- Likelihood as a Performance Gauge for Retrieval-Augmented Generation [78.28197013467157]
We show that likelihoods serve as an effective gauge for language model performance.
We propose two methods that use question likelihood as a gauge for selecting and constructing prompts that lead to better performance.
arXiv Detail & Related papers (2024-11-12T13:14:09Z)
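The likelihood-gauge entry above proposes using question likelihood to select among candidate prompts. Below is a minimal sketch of that selection step; `question_logprob(prompt, question)`, returning the model's per-token log-probabilities of the question given the prompt, is a hypothetical helper standing in for whatever scoring API is available, and the mean-logprob criterion is an assumption rather than the paper's exact method.

```python
# Hedged sketch: choose the prompt under which the model assigns the question
# the highest likelihood.  `question_logprob` is a hypothetical helper that
# returns the per-token log-probabilities of `question` conditioned on
# `prompt` (e.g. via an API that exposes token logprobs).
from typing import Callable

Scorer = Callable[[str, str], list[float]]


def mean_logprob(prompt: str, question: str, question_logprob: Scorer) -> float:
    logprobs = question_logprob(prompt, question)
    return sum(logprobs) / len(logprobs) if logprobs else float("-inf")


def select_prompt(prompts: list[str], question: str,
                  question_logprob: Scorer) -> str:
    """Return the candidate prompt that makes the question most likely."""
    return max(prompts, key=lambda p: mean_logprob(p, question, question_logprob))
```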
- GPT Semantic Cache: Reducing LLM Costs and Latency via Semantic Embedding Caching [0.0]
GPT Semantic Cache is a method that leverages semantic caching of query embeddings in in-memory storage (Redis).
Our approach efficiently identifies semantically similar questions, allowing for the retrieval of pre-generated responses without redundant API calls to the Large Language Models.
This technique reduces operational costs and improves response times, enhancing the efficiency of LLM-powered applications.
arXiv Detail & Related papers (2024-11-08T02:21:19Z)
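The GPT Semantic Cache entry above keeps query embeddings in in-memory storage (Redis) and reuses pre-generated responses for semantically similar questions. The sketch below extends the earlier in-memory example with Redis persistence; the key scheme, the 0.9 threshold, the `embed` callable, and the linear scan over stored entries are assumptions for illustration (a real deployment would use a vector index rather than a scan), not the paper's design.

```python
# Illustrative sketch of caching query embeddings in Redis using the standard
# `redis` Python client.  Key layout, threshold, and the linear scan are
# assumptions, not the design described in the paper.
import json
import math

import redis


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0


class RedisSemanticCache:
    def __init__(self, embed, host="localhost", threshold=0.9):
        self.embed = embed                            # query -> list[float]
        self.r = redis.Redis(host=host, decode_responses=True)
        self.threshold = threshold

    def get(self, query):
        """Return a cached response for a semantically similar query, if any."""
        q_emb = self.embed(query)
        for key in self.r.scan_iter("semcache:*"):
            entry = json.loads(self.r.get(key))
            if cosine(q_emb, entry["embedding"]) >= self.threshold:
                return entry["response"]              # hit: skip the LLM call
        return None                                   # miss: caller queries the LLM

    def put(self, query, response):
        key = f"semcache:{abs(hash(query))}"
        self.r.set(key, json.dumps({"embedding": self.embed(query),
                                    "response": response}))
```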
- Effective Instruction Parsing Plugin for Complex Logical Query Answering on Knowledge Graphs [51.33342412699939]
Knowledge Graph Query Embedding (KGQE) aims to embed First-Order Logic (FOL) queries in a low-dimensional KG space for complex reasoning over incomplete KGs.
Recent studies integrate various external information (such as entity types and relation context) to better capture the logical semantics of FOL queries.
We propose an effective Query Instruction Parsing (QIPP) that captures latent query patterns from code-like query instructions.
arXiv Detail & Related papers (2024-10-27T03:18:52Z)
- Synthetic Query Generation using Large Language Models for Virtual Assistants [7.446599238906526]
We explore the use of Large Language Models (LLMs) to generate synthetic queries that are complementary to template-based methods.
We find that LLMs generate more verbose queries, compared to template-based methods, and reference aspects specific to the entity.
arXiv Detail & Related papers (2024-06-10T18:50:57Z)
- User Intent Recognition and Semantic Cache Optimization-Based Query Processing Framework using CFLIS and MGR-LAU [0.0]
This work analyzes informational, navigational, and transactional intents in queries for enhanced query processing (QP).
For efficient QP, the data is structured using Epanechnikov Kernel-Ordering Points To Identify the Clustering Structure (EK-OPTICS).
The extracted features, detected intents, and structured data are fed into the Multi-head Gated Recurrent Learnable Attention Unit (MGR-LAU).
arXiv Detail & Related papers (2024-06-06T20:28:05Z)
- QLSC: A Query Latent Semantic Calibrator for Robust Extractive Question Answering [32.436530949623155]
We propose a unique scaling strategy to capture latent semantic center features of queries.
These features are seamlessly integrated into traditional query and passage embeddings.
Our approach diminishes sensitivity to variations in text format and boosts the model's capability in pinpointing accurate answers.
arXiv Detail & Related papers (2024-04-30T07:34:42Z)
- LIST: Learning to Index Spatio-Textual Data for Embedding based Spatial Keyword Queries [53.843367588870585]
Top-k kNN spatial keyword queries (TkQs) return a list of objects based on a ranking function that considers both spatial and textual relevance.
There are two key challenges in building an effective and efficient index, i.e., the absence of high-quality labels and the unbalanced results.
We develop a novel pseudolabel generation technique to address the two challenges.
arXiv Detail & Related papers (2024-03-12T05:32:33Z)
- MeanCache: User-Centric Semantic Cache for Large Language Model Based Web Services [8.350378532274405]
Caching is a natural solution to reduce inference costs on repeated queries.
This paper introduces MeanCache, a user-centric semantic cache for LLM-based services.
MeanCache identifies semantically similar queries to determine cache hit or miss.
arXiv Detail & Related papers (2024-03-05T06:23:50Z)
- Temporal-aware Hierarchical Mask Classification for Video Semantic Segmentation [62.275143240798236]
Video semantic segmentation datasets have limited categories per video.
Less than 10% of queries could be matched to receive meaningful gradient updates during VSS training.
Our method achieves state-of-the-art performance on the latest challenging VSS benchmark VSPW without bells and whistles.
arXiv Detail & Related papers (2023-09-14T20:31:06Z)
- CAPSTONE: Curriculum Sampling for Dense Retrieval with Document Expansion [68.19934563919192]
We propose a curriculum sampling strategy that utilizes pseudo queries during training and progressively enhances the relevance between the generated query and the real query.
Experimental results on both in-domain and out-of-domain datasets demonstrate that our approach outperforms previous dense retrieval models.
arXiv Detail & Related papers (2022-12-18T15:57:46Z)
- UnifieR: A Unified Retriever for Large-Scale Retrieval [84.61239936314597]
Large-scale retrieval aims to recall relevant documents from a huge collection given a query.
Recent retrieval methods based on pre-trained language models (PLM) can be coarsely categorized into either dense-vector or lexicon-based paradigms.
We propose a new learning framework, UnifieR which unifies dense-vector and lexicon-based retrieval in one model with a dual-representing capability.
arXiv Detail & Related papers (2022-05-23T11:01:59Z)
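The UnifieR summary above contrasts the dense-vector and lexicon-based retrieval paradigms and unifies them in one model. As a rough illustration of what combining the two signals looks like at scoring time, and not UnifieR's actual architecture, the snippet below interpolates a dense cosine score with a toy lexical-overlap score; the interpolation weight `alpha` and the overlap measure are assumptions.

```python
# Hedged illustration of fusing dense and lexical relevance signals.  This is
# not UnifieR's architecture; it only shows the two paradigms the summary
# contrasts, combined with an assumed interpolation weight `alpha`.
import math


def dense_score(q_emb: list[float], d_emb: list[float]) -> float:
    dot = sum(x * y for x, y in zip(q_emb, d_emb))
    norm = math.sqrt(sum(x * x for x in q_emb)) * math.sqrt(sum(x * x for x in d_emb))
    return dot / norm if norm else 0.0


def lexical_score(query: str, doc: str) -> float:
    """Toy lexicon-based signal: fraction of query terms present in the document."""
    q_terms, d_terms = set(query.lower().split()), set(doc.lower().split())
    return len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0


def hybrid_score(query: str, doc: str, q_emb: list[float], d_emb: list[float],
                 alpha: float = 0.5) -> float:
    """Interpolate the dense and lexical signals (alpha is an assumption)."""
    return alpha * dense_score(q_emb, d_emb) + (1 - alpha) * lexical_score(query, doc)
```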
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.