Enhancing Answer Selection in Community Question Answering with
Pre-trained and Large Language Models
- URL: http://arxiv.org/abs/2311.17502v1
- Date: Wed, 29 Nov 2023 10:24:50 GMT
- Title: Enhancing Answer Selection in Community Question Answering with
Pre-trained and Large Language Models
- Authors: Xinghang Hu
- Abstract summary: We first propose the Question-Answer cross attention networks (QAN) with pre-trained models for answer selection.
We then utilize a large language model (LLM) to perform answer selection with knowledge augmentation.
Experiments show that the QAN model achieves state-of-the-art performance on two datasets, SemEval2015 and SemEval2017.
- Score: 0.9065034043031668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Community Question Answering (CQA) has become increasingly
prevalent in recent years. However, questions often receive a large number of
answers, making it difficult for users to identify the relevant ones. Answer
selection is therefore a significant subtask of CQA. In this paper, we first
propose the Question-Answer cross attention networks (QAN) with pre-trained
models for answer selection and utilize a large language model (LLM) to perform
answer selection with knowledge augmentation. Specifically, we apply the BERT
model as the encoder layer to separately encode question subjects, question
bodies and answers; a cross attention mechanism then selects the most relevant
answer for each question. Experiments show that the QAN model achieves
state-of-the-art performance on two datasets, SemEval2015 and SemEval2017.
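As a concrete illustration of the described architecture (the abstract ships no code), here is a minimal PyTorch sketch: a shared BERT encoder for the question subject, question body, and candidate answer, followed by a cross attention layer that scores the answer against the question. All names (QANSketch, the pooling choice, the head count) are hypothetical, not taken from the paper.

```python
# Hedged sketch of a QAN-style scorer: BERT encodes subject, body and
# answer separately; answer tokens then cross-attend over the question.
# Attention masks are omitted for brevity.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class QANSketch(nn.Module):
    def __init__(self, model_name="bert-base-uncased", hidden=768):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)  # shared encoder
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=8,
                                                batch_first=True)
        self.scorer = nn.Linear(hidden, 1)  # relevance score per answer

    def encode(self, enc):
        return self.encoder(**enc).last_hidden_state  # (batch, seq, hidden)

    def forward(self, subject, body, answer):
        q = torch.cat([self.encode(subject), self.encode(body)], dim=1)
        a = self.encode(answer)
        # answer tokens attend over the combined question representation
        attended, _ = self.cross_attn(query=a, key=q, value=q)
        return self.scorer(attended.mean(dim=1)).squeeze(-1)

tok = BertTokenizer.from_pretrained("bert-base-uncased")
enc = lambda s: tok(s, return_tensors="pt", truncation=True)
model = QANSketch()
score = model(enc("How do I fix X?"), enc("More detail on X ..."),
              enc("Try doing Y first."))
```

At inference time, each candidate answer would be scored this way and the highest-scoring candidate selected.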
Moreover, we use the LLM to generate external knowledge from questions and
correct answers, achieving knowledge augmentation for LLM-based answer
selection, and we optimize the LLM's prompt in different aspects. The results
show that introducing external knowledge improves the LLM's correct-answer
selection rate on the SemEval2015 and SemEval2017 datasets. Meanwhile, with an
optimized prompt, the LLM also selects the correct answer on more questions.
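A minimal sketch of such a knowledge-augmented selection loop, assuming a generic chat-completion API: the LLM is first asked to produce background knowledge for the question, which is then prepended to the answer-selection prompt. The prompt wording and the ask_llm helper are hypothetical, not the paper's actual prompts.

```python
# Hypothetical two-step prompting sketch for knowledge-augmented answer
# selection; ask_llm stands in for any chat-completion API call.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM API here")

def select_answer(question: str, candidates: list[str]) -> str:
    # Step 1: generate external knowledge about the question.
    knowledge = ask_llm(
        f"Provide brief background knowledge useful for answering:\n{question}"
    )
    # Step 2: choose among candidates, conditioning on that knowledge.
    listing = "\n".join(f"{i}. {c}" for i, c in enumerate(candidates))
    choice = ask_llm(
        f"Background knowledge:\n{knowledge}\n\n"
        f"Question: {question}\n"
        f"Candidate answers:\n{listing}\n"
        "Reply with only the number of the most relevant answer."
    )
    return candidates[int(choice.strip())]
```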
Related papers
- Putting People in LLMs' Shoes: Generating Better Answers via Question Rewriter [17.736962215696366]
We introduce single-round, instance-level prompt optimization, referred to as the question rewriter.
By enhancing the intelligibility of human questions for black-box LLMs, our question rewriter improves the quality of generated answers.
arXiv Detail & Related papers (2024-08-20T06:24:47Z)
- Multimodal Reranking for Knowledge-Intensive Visual Question Answering [77.24401833951096]
We introduce a multi-modal reranker to improve the ranking quality of knowledge candidates for answer generation.
Experiments on OK-VQA and A-OKVQA show that the multi-modal reranker trained from distant supervision provides consistent improvements.
arXiv Detail & Related papers (2024-07-17T02:58:52Z)
- Multi-LLM QA with Embodied Exploration [55.581423861790945]
We investigate the use of Multi-Embodied LLM Explorers (MELE) for question-answering in an unknown environment.
Multiple LLM-based agents independently explore and then answer queries about a household environment.
We analyze different aggregation methods to generate a single, final answer for each query.
arXiv Detail & Related papers (2024-06-16T12:46:40Z)
- Crafting Interpretable Embeddings by Asking LLMs Questions [89.49960984640363]
Large language models (LLMs) have rapidly improved text embeddings for a growing array of natural-language processing tasks.
We introduce question-answering embeddings (QA-Emb), where each feature represents an answer to a yes/no question asked to an LLM (see the sketch after this list).
We use QA-Emb to flexibly generate interpretable models for predicting fMRI voxel responses to language stimuli.
arXiv Detail & Related papers (2024-05-26T22:30:29Z)
- CLARINET: Augmenting Language Models to Ask Clarification Questions for Retrieval [52.134133938779776]
We present CLARINET, a system that asks informative clarification questions by choosing questions whose answers would maximize certainty in the correct candidate.
Our approach works by augmenting a large language model (LLM) to condition on a retrieval distribution, finetuning end-to-end to generate the question that would have maximized the rank of the true candidate at each turn.
arXiv Detail & Related papers (2024-04-28T18:21:31Z)
- UnibucLLM: Harnessing LLMs for Automated Prediction of Item Difficulty and Response Time for Multiple-Choice Questions [25.877058354902953]
This work explores a novel data augmentation method based on Large Language Models (LLMs) for predicting item difficulty and response time of retired USMLE Multiple-Choice Questions (MCQs) in the BEA 2024 Shared Task.
Our approach is based on augmenting the dataset with answers from zero-shot LLMs and employing transformer-based models based on six alternative feature combinations.
arXiv Detail & Related papers (2024-04-20T10:41:02Z)
- Chain-of-Discussion: A Multi-Model Framework for Complex Evidence-Based Question Answering [55.295699268654545]
We propose a novel Chain-of-Discussion framework to leverage the synergy among open-source Large Language Models.
Our experiments show that discussions among multiple LLMs play a vital role in enhancing the quality of answers.
arXiv Detail & Related papers (2024-02-26T05:31:34Z)
- Improving Zero-shot Visual Question Answering via Large Language Models with Reasoning Question Prompts [22.669502403623166]
We present Reasoning Question Prompts for VQA tasks, which can further activate the potential of Large Language Models.
We generate self-contained questions as reasoning question prompts via an unsupervised question edition module.
Each reasoning question prompt clearly indicates the intent of the original question.
Then, the candidate answers, together with their confidence scores, are fed into the LLMs.
arXiv Detail & Related papers (2023-11-15T15:40:46Z)
- Leveraging Large Language Models for Multiple Choice Question Answering [6.198523595657983]
We show that a model with high MCSB (multiple choice symbol binding) ability performs much better with the natural approach than with the traditional approach.
arXiv Detail & Related papers (2022-10-22T05:04:54Z)
- MS-Ranker: Accumulating Evidence from Potentially Correct Candidates for Answer Selection [59.95429407899612]
We propose a novel reinforcement learning based multi-step ranking model, named MS-Ranker.
We explicitly consider the potential correctness of candidates and update the evidence with a gating mechanism.
Our model significantly outperforms existing methods that do not rely on external resources.
arXiv Detail & Related papers (2020-10-10T10:36:58Z)
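The QA-Emb entry above describes a concrete mechanism: each embedding dimension is an LLM's answer to a fixed yes/no question. A minimal sketch of that idea follows, with an illustrative question set and a hypothetical ask_yes_no helper, not the paper's implementation.

```python
# Hedged QA-Emb-style sketch: build an interpretable embedding by asking
# an LLM a fixed list of yes/no questions about a text; each dimension
# is 1.0 for "yes" and 0.0 for "no".
QUESTIONS = [  # illustrative probe questions, not from the paper
    "Does the text mention a person?",
    "Is the text about a place?",
    "Does the text describe a physical action?",
]

def ask_yes_no(question: str, text: str) -> bool:
    raise NotImplementedError("query your LLM with the question and text")

def qa_embed(text: str) -> list[float]:
    return [1.0 if ask_yes_no(q, text) else 0.0 for q in QUESTIONS]
```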
This list is automatically generated from the titles and abstracts of the papers on this site.