UKP-SQUARE: An Online Platform for Question Answering Research
- URL: http://arxiv.org/abs/2203.13693v2
- Date: Mon, 28 Mar 2022 16:14:15 GMT
- Title: UKP-SQUARE: An Online Platform for Question Answering Research
- Authors: Tim Baumgärtner, Kexin Wang, Rachneet Sachdeva, Max Eichler, Gregor
Geigle, Clifton Poth, Hannah Sterz, Haritz Puerto, Leonardo F. R. Ribeiro,
Jonas Pfeiffer, Nils Reimers, Gözde Gül Şahin, Iryna Gurevych
- Abstract summary: We present UKP-SQUARE, an extensible online QA platform that allows researchers to query and analyze a large collection of modern Skills via a user-friendly web interface and integrated behavioural tests.
- Score: 50.35348764297317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in NLP and information retrieval have given rise to a diverse
set of question answering tasks that are of different formats (e.g.,
extractive, abstractive), require different model architectures (e.g.,
generative, discriminative), and setups (e.g., with or without retrieval).
Despite having a large number of powerful, specialized QA pipelines (which we
refer to as Skills) that consider a single domain, model or setup, there exists
no framework where users can easily explore and compare such pipelines and can
extend them according to their needs. To address this issue, we present
UKP-SQUARE, an extensible online QA platform for researchers which allows users
to query and analyze a large collection of modern Skills via a user-friendly
web interface and integrated behavioural tests. In addition, QA researchers can
develop, manage, and share their custom Skills using our microservices that
support a wide range of models (Transformers, Adapters, ONNX), datastores and
retrieval techniques (e.g., sparse and dense). UKP-SQUARE is available on
https://square.ukp-lab.de.
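The abstract distinguishes sparse and dense retrieval among the techniques UKP-SQUARE's datastores support. As a minimal, self-contained sketch of that distinction (not the platform's actual code; the corpus, vectors, and function names below are illustrative assumptions), sparse scoring rewards exact term overlap while dense scoring compares embedding vectors:

```python
import math
from collections import Counter

# Toy corpus; a real Skill would back this with a datastore such as an
# inverted index (sparse) or a vector index (dense).
DOCS = [
    "the cat sat on the mat",
    "dogs are loyal animals",
    "the dog chased the cat",
]

def sparse_scores(query, docs):
    """TF-IDF-style sparse scoring: only overlapping terms contribute."""
    n = len(docs)
    tokenized = [d.split() for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        s = sum(tf[t] * math.log(1 + n / df[t]) for t in query.split() if t in tf)
        scores.append(s)
    return scores

def dense_scores(query_vec, doc_vecs):
    """Dense scoring: cosine similarity between embedding vectors."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    return [cos(query_vec, v) for v in doc_vecs]

# Sparse retrieval: the term "cat" matches only documents containing it;
# the second document scores exactly zero.
print(sparse_scores("cat", DOCS))

# Dense retrieval: hypothetical 3-d embeddings; similarity is graded even
# without lexical overlap (real systems use learned encoders).
doc_vecs = [[0.9, 0.1, 0.0], [0.0, 0.8, 0.6], [0.5, 0.7, 0.1]]
print(dense_scores([0.8, 0.2, 0.1], doc_vecs))
```

The contrast shown is why platforms expose both: sparse retrieval is precise for rare keywords but blind to paraphrase, while dense retrieval generalizes across wordings at the cost of exact-match guarantees.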
Related papers
- Multi-LLM QA with Embodied Exploration [55.581423861790945]
We investigate the use of Multi-Embodied LLM Explorers (MELE) for question-answering in an unknown environment.
Multiple LLM-based agents independently explore and then answer queries about a household environment.
We analyze different aggregation methods to generate a single, final answer for each query.
arXiv Detail & Related papers (2024-06-16T12:46:40Z)
- LocalRQA: From Generating Data to Locally Training, Testing, and Deploying Retrieval-Augmented QA Systems [22.90963783300522]
LocalRQA is an open-source toolkit that lets researchers and developers customize the model training, testing, and deployment process.
We build systems using online documentation obtained from Databricks and Faire's websites.
We find that 7B models trained and deployed using LocalRQA reach performance similar to using OpenAI's text-ada and GPT-4.
arXiv Detail & Related papers (2024-03-01T21:10:20Z)
- A Practical Toolkit for Multilingual Question and Answer Generation [79.31199020420827]
We introduce AutoQG, an online service for multilingual QAG, along with lmqg, an all-in-one Python package for model fine-tuning, generation, and evaluation.
We also release QAG models in eight languages fine-tuned on a few variants of pre-trained encoder-decoder language models, which can be used online via AutoQG or locally via lmqg.
arXiv Detail & Related papers (2023-05-27T08:42:37Z)
- Chain-of-Skills: A Configurable Model for Open-domain Question Answering [79.8644260578301]
The retrieval model is an indispensable component for real-world knowledge-intensive tasks.
Recent work focuses on customized methods, which limits model transferability and scalability.
We propose a modular retriever where individual modules correspond to key skills that can be reused across datasets.
arXiv Detail & Related papers (2023-05-04T20:19:39Z)
- UKP-SQuARE v3: A Platform for Multi-Agent QA Research [48.92308487624824]
We extend UKP-SQuARE, an online platform for Question Answering (QA) research, to support three families of multi-agent systems.
We conduct experiments to evaluate their inference speed and discuss the performance vs. speed trade-off compared to multi-dataset models.
arXiv Detail & Related papers (2023-03-31T15:07:36Z)
- PrimeQA: The Prime Repository for State-of-the-Art Multilingual Question Answering Research and Development [24.022050096797606]
PRIMEQA is a one-stop QA repository with an aim to democratize QA research and facilitate easy replication of state-of-the-art (SOTA) QA methods.
It supports core QA functionalities like retrieval and reading comprehension as well as auxiliary capabilities such as question generation.
It has been designed as an end-to-end toolkit for various use cases: building front-end applications, replicating SOTA methods on public benchmarks, and expanding pre-existing methods.
arXiv Detail & Related papers (2023-01-23T20:43:26Z)
- MetaQA: Combining Expert Agents for Multi-Skill Question Answering [49.35261724460689]
We argue that despite the promising results of multi-dataset models, some domains or QA formats might require specific architectures.
We propose to combine expert agents with a novel, flexible, and training-efficient architecture that considers questions, answer predictions, and answer-prediction confidence scores.
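The MetaQA summary above describes combining expert agents using questions, answer predictions, and answer-prediction confidence scores. MetaQA learns this combination with a trained architecture; the following is only an illustrative sketch of the underlying idea, with a trivial untrained selector and hypothetical agent names:

```python
# Each expert Skill returns an answer plus a confidence score; a selector
# then picks the final answer. Here the selector is deliberately trivial
# (highest confidence wins) to illustrate the interface, not MetaQA's
# learned selection model.

def select_answer(predictions):
    """Pick the prediction dict with the highest confidence score."""
    return max(predictions, key=lambda p: p["confidence"])

predictions = [
    {"agent": "extractive-squad", "answer": "Paris", "confidence": 0.91},
    {"agent": "generative-t5", "answer": "Paris, France", "confidence": 0.84},
    {"agent": "kbqa", "answer": "Lyon", "confidence": 0.12},
]

best = select_answer(predictions)
print(best["agent"], best["answer"])  # → extractive-squad Paris
```

A learned selector can additionally condition on the question text and the agents' answer strings, which is what lets such systems route questions to the expert whose format or domain fits best.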
arXiv Detail & Related papers (2021-12-03T14:05:52Z)
- Building a Legal Dialogue System: Development Process, Challenges and Opportunities [1.433758865948252]
This paper presents key principles and solutions to the challenges faced in designing a domain-specific conversational agent for the legal domain.
It provides functionality for answering user queries and recording user information, including contact details and case-related data.
arXiv Detail & Related papers (2021-09-01T13:35:42Z)
- NeuralQA: A Usable Library for Question Answering (Contextual Query Expansion + BERT) on Large Datasets [0.6091702876917281]
NeuralQA is a library for Question Answering (QA) on large datasets.
It integrates with existing infrastructure (e.g., ElasticSearch instances and reader models trained with the HuggingFace Transformers API) and offers helpful defaults for QA subtasks.
Code and documentation for NeuralQA are available as open source on GitHub.
arXiv Detail & Related papers (2020-07-30T03:38:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.