Simple or Complex? Complexity-Controllable Question Generation with Soft
Templates and Deep Mixture of Experts Model
- URL: http://arxiv.org/abs/2110.06560v1
- Date: Wed, 13 Oct 2021 08:16:52 GMT
- Title: Simple or Complex? Complexity-Controllable Question Generation with Soft
Templates and Deep Mixture of Experts Model
- Authors: Sheng Bi and Xiya Cheng and Yuan-Fang Li and Lizhen Qu and Shirong
Shen and Guilin Qi and Lu Pan and Yinlin Jiang
- Abstract summary: We propose an end-to-end neural complexity-controllable question generation model, which incorporates a mixture of experts (MoE) as the selector of soft templates.
Our method introduces a novel, cross-domain complexity estimator to assess the complexity of a question.
The experimental results on two benchmark QA datasets demonstrate that our QG model is superior to state-of-the-art methods in both automatic and manual evaluation.
- Score: 15.411214563867548
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to generate natural-language questions with controlled complexity
levels is highly desirable as it further expands the applicability of question
generation. In this paper, we propose an end-to-end neural
complexity-controllable question generation model, which incorporates a mixture
of experts (MoE) as the selector of soft templates to improve the accuracy of
complexity control and the quality of generated questions. The soft templates
capture question similarity while avoiding the expensive construction of actual
templates. Our method introduces a novel, cross-domain complexity estimator to
assess the complexity of a question, taking into account the passage, the
question, the answer and their interactions. The experimental results on two
benchmark QA datasets demonstrate that our QG model is superior to
state-of-the-art methods in both automatic and manual evaluation. Moreover, our
complexity estimator is significantly more accurate than the baselines in both
in-domain and out-of-domain settings.
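To make the mechanism concrete, here is a minimal sketch (not the authors' released code) of an MoE gate selecting among soft templates, where the module names, dimensions, and the way the requested complexity level conditions the gate are all assumptions:

```python
import torch
import torch.nn as nn

class SoftTemplateMoE(nn.Module):
    """Minimal sketch: an MoE gate mixes learned soft-template embeddings.

    Hypothetical re-implementation of the idea only; names, dimensions,
    and the complexity-level conditioning are assumptions.
    """
    def __init__(self, num_templates: int, hidden: int, num_levels: int = 2):
        super().__init__()
        # Each "expert" is a learned soft template: a vector in a bank,
        # rather than an expensively hand-written surface template.
        self.templates = nn.Parameter(torch.randn(num_templates, hidden))
        # The gate scores each template given the encoded passage/answer
        # and the requested complexity level.
        self.level_emb = nn.Embedding(num_levels, hidden)
        self.gate = nn.Linear(2 * hidden, num_templates)

    def forward(self, context: torch.Tensor, level: torch.Tensor) -> torch.Tensor:
        # context: (batch, hidden) pooled encoder state
        # level:   (batch,) 0 = simple, 1 = complex
        g = torch.cat([context, self.level_emb(level)], dim=-1)
        weights = torch.softmax(self.gate(g), dim=-1)   # (batch, num_templates)
        # The mixture of templates is fed to the decoder as a soft prompt.
        return weights @ self.templates                 # (batch, hidden)
```

The decoder would consume the returned mixture as a soft prompt alongside the encoded passage and answer.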
Related papers
- Understanding Complexity in VideoQA via Visual Program Generation [31.207902042321006]
We propose a data-driven approach to analyzing query complexity in Video Question Answering (VideoQA). We experimentally show that humans struggle to predict which questions are difficult for machine learning models. We extend the approach to automatically generate complex questions, constructing a new benchmark that is 1.9 times harder than the popular NExT-QA.
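As a rough illustration of measuring question complexity through the programs generated to answer it, here is a toy proxy; the metric and the program format are guesses, not the paper's definition:

```python
import ast

def program_complexity(program_src: str) -> int:
    """Hypothetical proxy: question complexity = number of call nodes
    in the visual program generated to answer it. Both the metric and
    the program representation are assumptions."""
    tree = ast.parse(program_src)
    return sum(isinstance(node, ast.Call) for node in ast.walk(tree))

# A chained three-step program scores 3:
print(program_complexity("count(filter_color(detect_objects(frame), 'red'))"))
```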
arXiv Detail & Related papers (2025-05-19T17:55:14Z)
- Unveiling Hybrid Cyclomatic Complexity: A Comprehensive Analysis and Evaluation as an Integral Feature in Automatic Defect Prediction Models [0.5461938536945723]
This paper analyses a novel complexity metric, Hybrid Cyclomatic Complexity (HCC), and its efficiency as a feature in a defect prediction model.
We present a comparative study between the HCC metric and its two components, the inherited complexity and the actual complexity of a class, in the object-oriented context.
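A minimal sketch of HCC as a defect-prediction feature, assuming (as the summary suggests) that it combines a class's own cyclomatic complexity with the complexity it inherits; the combination rule here is a guess:

```python
def hybrid_cyclomatic_complexity(own_cc: int, inherited_ccs: list[int]) -> int:
    """Assumed definition: a class's HCC is its actual cyclomatic
    complexity plus the complexity inherited from its ancestors.
    The paper's exact combination rule may differ."""
    return own_cc + sum(inherited_ccs)

# Sketch of a feature vector for a defect-prediction model:
features = {
    "cc": 7,                                      # actual complexity
    "inherited_cc": 5,                            # from superclasses
    "hcc": hybrid_cyclomatic_complexity(7, [5]),  # hybrid metric
}
```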
arXiv Detail & Related papers (2025-04-01T07:07:17Z)
- Flow-Lenia.png: Evolving Multi-Scale Complexity by Means of Compression [0.0]
We propose a fitness measure quantifying multi-scale complexity for cellular automaton states.
The use of compressibility is grounded in the concept of Kolmogorov complexity, which defines the complexity of an object by the size of its smallest representation.
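Because Kolmogorov complexity is uncomputable, compressed size serves as a practical proxy. A toy version of a multi-scale compressibility measure, with the downsampling scheme assumed (the paper compresses Flow-Lenia states as PNGs; zlib is used here for self-containment):

```python
import zlib
import numpy as np

def compressed_size(state: np.ndarray) -> int:
    """Computable proxy for Kolmogorov complexity: length of the
    compressed byte representation of a CA state."""
    return len(zlib.compress(state.tobytes()))

def multiscale_complexity(state: np.ndarray, scales=(1, 2, 4)) -> float:
    """Assumed multi-scale scheme: average per-cell compressed size of
    the state downsampled at several resolutions. Illustrative only."""
    sizes = []
    for s in scales:
        coarse = state[::s, ::s]  # naive downsampling of a 2D state
        sizes.append(compressed_size(coarse) / coarse.size)
    return float(np.mean(sizes))
```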
arXiv Detail & Related papers (2024-08-08T04:13:17Z)
- Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity [59.57065228857247]
Retrieval-augmented Large Language Models (LLMs) have emerged as a promising approach to enhancing response accuracy in several tasks, such as Question-Answering (QA).
We propose a novel adaptive QA framework that can dynamically select the most suitable strategy for (retrieval-augmented) LLMs based on the query complexity.
We validate our model on a set of open-domain QA datasets covering multiple query complexities, and show that our framework enhances the overall efficiency and accuracy of QA systems.
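The routing idea can be sketched as a three-way switch on predicted query complexity; the classifier labels, hop budget, and all interfaces below are assumptions consistent with the summary:

```python
def adaptive_qa(question: str, classifier, llm, retriever) -> str:
    """Sketch of Adaptive-RAG-style routing: a small classifier predicts
    query complexity and the answering strategy is chosen accordingly.
    Method names and the three-level scheme are hypothetical."""
    level = classifier.predict(question)  # "simple" | "moderate" | "complex"
    if level == "simple":
        # Easy question: answer directly, paying no retrieval cost.
        return llm.generate(question)
    elif level == "moderate":
        # Single-step retrieval-augmented answer.
        docs = retriever.search(question, k=5)
        return llm.generate(question, context=docs)
    else:
        # Complex question: iterative multi-step retrieval and reasoning.
        context, answer = [], ""
        for _ in range(3):  # fixed hop budget for the sketch
            context += retriever.search(question + " " + answer, k=3)
            answer = llm.generate(question, context=context)
        return answer
```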
arXiv Detail & Related papers (2024-03-21T13:52:30Z)
- ReST meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent [50.508669199496474]
We develop a ReAct-style LLM agent with the ability to reason and act upon external knowledge.
We refine the agent through a ReST-like method that iteratively trains on previous trajectories.
Starting from a prompted large model and after just two iterations of the algorithm, we can produce a fine-tuned small model.
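A sketch of that grow-filter-improve loop, with every interface (trajectory sampling, reward, fine-tuning) hypothetical:

```python
def rest_self_improvement(agent, tasks, reward_fn, iterations: int = 2):
    """Grow-and-improve sketch: sample reasoning/acting trajectories,
    keep the good ones, fine-tune on them, repeat. All interfaces here
    are assumptions, not the paper's API."""
    for _ in range(iterations):
        # Grow: collect ReAct-style trajectories from the current agent.
        trajectories = [agent.sample_trajectory(t) for t in tasks]
        # Filter: keep trajectories that score well (e.g. correct answers).
        good = [tr for tr in trajectories if reward_fn(tr) > 0.5]
        # Improve: fine-tune (possibly a smaller model) on the kept data.
        agent = agent.fine_tune(good)
    return agent
```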
arXiv Detail & Related papers (2023-12-15T18:20:15Z)
- An Empirical Comparison of LM-based Question and Answer Generation Methods [79.31199020420827]
Question and answer generation (QAG) consists of generating a set of question-answer pairs given a context.
In this paper, we establish baselines with three different QAG methodologies that leverage sequence-to-sequence language model (LM) fine-tuning.
Experiments show that an end-to-end QAG model, which is computationally light at both training and inference times, is generally robust and outperforms other more convoluted approaches.
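The end-to-end variant is the simplest to sketch: a single seq2seq LM maps a context to all question-answer pairs at once. A minimal illustration with Hugging Face transformers, where the checkpoint and output format are assumptions (the plain t5-small below would need QAG fine-tuning to behave as annotated):

```python
from transformers import pipeline

# End-to-end QAG sketch: one seq2seq model emits all QA pairs in one pass.
# The checkpoint is illustrative; any T5-style model fine-tuned on
# context -> "question: ... answer: ... | ..." pairs would do.
qag = pipeline("text2text-generation", model="t5-small")

context = "The Eiffel Tower, completed in 1889, is located in Paris."
output = qag("generate question and answer pairs: " + context)[0]["generated_text"]
# With a QAG-fine-tuned checkpoint the output would look like:
# "question: When was the Eiffel Tower completed? answer: 1889 | ..."
print(output)
```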
arXiv Detail & Related papers (2023-05-26T14:59:53Z)
- On the Benefits of Leveraging Structural Information in Planning Over the Learned Model [3.3512508970931236]
We investigate the benefits of leveraging structural information about the system to reduce sample complexity.
Our analysis shows that there can be a significant saving in sample complexity by leveraging structural information about the model.
arXiv Detail & Related papers (2023-03-15T18:18:01Z)
- Successive Prompting for Decomposing Complex Questions [50.00659445976735]
Recent works leverage the capabilities of large language models (LMs) to perform complex question answering in a few-shot setting.
We introduce "Successive Prompting", where we iteratively break down a complex task into a simple next task, solve it, and then repeat the process until we reach the final solution.
Our best model (with successive prompting) achieves an improvement of 5% absolute F1 on a few-shot version of the DROP dataset.
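The decomposition loop reads naturally as pseudocode; a hedged sketch in which the prompt wording and the DONE stopping convention are assumptions:

```python
def successive_prompting(llm, complex_question: str, max_steps: int = 8) -> str:
    """Sketch of successive prompting: alternately ask the LM for the
    next simple sub-question and its answer until it signals completion.
    The llm.generate interface is hypothetical."""
    history = []
    for _ in range(max_steps):
        sub_q = llm.generate(
            f"Question: {complex_question}\nSolved so far: {history}\n"
            "Next simple sub-question (or DONE):")
        if sub_q.strip() == "DONE":
            break
        sub_a = llm.generate(f"Answer this simple question: {sub_q}")
        history.append((sub_q, sub_a))
    # Final answer conditioned on all intermediate QA pairs.
    return llm.generate(
        f"Question: {complex_question}\nSub-steps: {history}\nFinal answer:")
```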
arXiv Detail & Related papers (2022-12-08T06:03:38Z)
- Quantum Parameterized Complexity [1.01129133945787]
We introduce the quantum analogues of a range of parameterized complexity classes.
This framework exposes a rich classification of the complexity of parameterized versions of QMA-hard problems.
arXiv Detail & Related papers (2022-03-15T15:34:38Z)
- Robust Question Answering Through Sub-part Alignment [53.94003466761305]
We model question answering as an alignment problem.
We train our model on SQuAD v1.1 and test it on several adversarial and out-of-domain datasets.
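One plausible reading of QA-as-alignment, sketched with assumed embeddings: score a candidate answer by how well each sub-part of the question aligns to some passage span:

```python
import numpy as np

def alignment_score(question_parts: np.ndarray, passage_spans: np.ndarray) -> float:
    """Sketch: each question sub-part (row vector) aligns to its best
    matching passage span; the candidate's score sums those alignments.
    The embedding source and scoring rule are assumptions."""
    # Cosine similarity between every sub-part and every span.
    q = question_parts / np.linalg.norm(question_parts, axis=1, keepdims=True)
    p = passage_spans / np.linalg.norm(passage_spans, axis=1, keepdims=True)
    sim = q @ p.T                        # (num_parts, num_spans)
    return float(sim.max(axis=1).sum())  # best span per sub-part
```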
arXiv Detail & Related papers (2020-04-30T09:10:57Z)
- Template-Based Question Generation from Retrieved Sentences for Improved Unsupervised Question Answering [98.48363619128108]
We propose an unsupervised approach to training QA models with generated pseudo-training data.
We show that generating questions for QA training by applying a simple template on a related, retrieved sentence rather than the original context sentence improves downstream QA performance.
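The template trick is easy to sketch: take a retrieved sentence containing the answer and rewrite it with a fixed pattern. The single pattern below is illustrative; the paper uses a small set of such templates:

```python
def template_question(retrieved_sentence: str, answer: str) -> str:
    """Sketch of template-based question generation: replace the answer
    span in a *retrieved* (not original) sentence with a wh-word."""
    if answer not in retrieved_sentence:
        raise ValueError("answer span must occur in the retrieved sentence")
    return retrieved_sentence.replace(answer, "what", 1).rstrip(".") + "?"

# e.g. a pseudo-training pair for unsupervised QA:
q = template_question("The treaty was signed in 1648.", "1648")
# -> "The treaty was signed in what?"
```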
arXiv Detail & Related papers (2020-04-24T17:57:45Z)