Automatic question generation for propositional logical equivalences
- URL: http://arxiv.org/abs/2405.05513v1
- Date: Thu, 9 May 2024 02:44:42 GMT
- Title: Automatic question generation for propositional logical equivalences
- Authors: Yicheng Yang, Xinyu Wang, Haoming Yu, Zhiyuan Li
- Abstract summary: We develop and implement a method capable of generating tailored questions for each student.
Previous studies have investigated AQG frameworks in education, which include validity, user-defined difficulty, and personalized problem generation.
Our new AQG approach produces logical equivalence problems for Discrete Mathematics, which is a core course for year-one computer science students.
- Score: 6.221146613622175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increase in academic dishonesty cases among college students has raised concern, particularly due to the shift towards online learning caused by the pandemic. We aim to develop and implement a method capable of generating tailored questions for each student. The use of Automatic Question Generation (AQG) is a possible solution. Previous studies have investigated AQG frameworks in education, which include validity, user-defined difficulty, and personalized problem generation. Our new AQG approach produces logical equivalence problems for Discrete Mathematics, which is a core course for year-one computer science students. This approach utilizes a syntactic grammar and a semantic attribute system through top-down parsing and syntax tree transformations. Our experiments show that the difficulty level of questions generated by our AQG approach is similar to the questions presented to students in the textbook [1]. These results confirm the practicality of our AQG approach for automated question generation in education, with the potential to significantly enhance learning experiences.
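To make the mechanism concrete, here is a minimal sketch of equivalence-question generation by syntax tree rewriting: truth-preserving rules (double negation, De Morgan's laws) are applied to a formula's tree, and the student is asked to prove the original and rewritten formulas equivalent. The tuple-based AST and the two rules are illustrative assumptions, not the authors' grammar or semantic attribute system.
```python
# Minimal sketch: generate a logical-equivalence question by applying
# truth-preserving rewrites to a propositional formula's syntax tree.
# The tuple-based AST and the two rules below are illustrative
# assumptions, not the paper's grammar or semantic attribute system.
import random

# Formulas as nested tuples: ('var', 'p'), ('not', f), ('and', f, g), ('or', f, g).
def rewrite(f):
    """Recursively apply randomly chosen truth-preserving rewrites."""
    op = f[0]
    if op == 'not':
        inner = f[1]
        if inner[0] == 'not' and random.random() < 0.5:
            return rewrite(inner[1])              # double negation: !!A => A
        if inner[0] in ('and', 'or'):             # De Morgan: !(A & B) => !A | !B
            dual = 'or' if inner[0] == 'and' else 'and'
            return (dual, rewrite(('not', inner[1])), rewrite(('not', inner[2])))
        return ('not', rewrite(inner))
    if op in ('and', 'or'):
        return (op, rewrite(f[1]), rewrite(f[2]))
    return f                                      # a bare variable is left as-is

def show(f):
    """Pretty-print a formula tuple."""
    if f[0] == 'var':
        return f[1]
    if f[0] == 'not':
        return '¬' + show(f[1])
    sym = ' ∧ ' if f[0] == 'and' else ' ∨ '
    return '(' + show(f[1]) + sym + show(f[2]) + ')'

original = ('not', ('and', ('var', 'p'), ('not', ('var', 'q'))))
print('Show that', show(original), '≡', show(rewrite(original)))
```
Because every rule preserves truth tables, the generated pair is equivalent by construction; difficulty could then be tuned by the number of rewrites applied, in the spirit of the user-defined difficulty the abstract mentions.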
Related papers
- How Teachers Can Use Large Language Models and Bloom's Taxonomy to Create Educational Quizzes [5.487297537295827]
This paper applies a large language model-based QG approach where questions are generated with learning goals derived from Bloom's taxonomy.
The results demonstrate that teachers prefer to write quizzes with automatically generated questions, and that such quizzes have no loss in quality compared to handwritten versions.
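As a rough illustration of taxonomy-conditioned generation, the sketch below folds a Bloom level into a QG prompt; the template and level verbs are assumptions for illustration, not the paper's actual prompts.
```python
# Sketch of Bloom's-taxonomy-conditioned question generation: the learning
# goal picks the cognitive verb the prompt asks the LLM to target.
# The template and verb choices are illustrative assumptions.
BLOOM_VERBS = {
    "remember":   "recall or state",
    "understand": "explain or summarize",
    "apply":      "use in a new situation",
    "analyze":    "break down or compare",
    "evaluate":   "justify or critique",
    "create":     "design or construct",
}

def build_prompt(passage: str, level: str) -> str:
    return (f"Write one quiz question about the passage below that asks the "
            f"student to {BLOOM_VERBS[level]} the material "
            f"(Bloom level: {level}).\n\nPassage:\n{passage}")

print(build_prompt("Photosynthesis converts light energy into chemical energy.",
                   "understand"))
```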
arXiv Detail & Related papers (2024-01-11T13:47:13Z)
- Retrieval-augmented Generation to Improve Math Question-Answering: Trade-offs Between Groundedness and Human Preference [0.0]
We design prompts that retrieve and use content from a high-quality open-source math textbook to generate responses to real student questions.
We evaluate the efficacy of this RAG system for middle-school algebra and geometry QA by administering a multi-condition survey.
We argue that while RAG is able to improve response quality, designers of math QA systems must consider trade-offs between generating responses preferred by students and responses closely matched to specific educational resources.
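A minimal sketch of the retrieve-then-generate pattern, assuming naive word-overlap retrieval in place of a production dense retriever; the textbook chunks and prompt wording are invented for illustration.
```python
# Minimal RAG sketch for math QA: pick the best-matching textbook chunk by
# word overlap, then ground the answering prompt in it. Real systems use
# dense embeddings; the chunks and prompt here are illustrative assumptions.
def retrieve(question: str, chunks: list[str]) -> str:
    """Return the chunk sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

chunks = [
    "The slope of a line through (x1, y1) and (x2, y2) is (y2 - y1) / (x2 - x1).",
    "The area of a triangle is one half the base times the height.",
]
question = "How do I find the slope of a line between two points?"
context = retrieve(question, chunks)
prompt = (f"Answer the student's question using only the textbook excerpt.\n"
          f"Excerpt: {context}\nQuestion: {question}")
print(prompt)  # this prompt would then be sent to the generator model
```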
arXiv Detail & Related papers (2023-10-04T22:09:28Z)
- UKP-SQuARE: An Interactive Tool for Teaching Question Answering [61.93372227117229]
The exponential growth of question answering (QA) has made it an indispensable topic in any Natural Language Processing (NLP) course.
We introduce UKP-SQuARE as a platform for QA education.
Students can run, compare, and analyze various QA models from different perspectives.
arXiv Detail & Related papers (2023-05-31T11:29:04Z)
- Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training [65.10741459705739]
We propose a novel contrastive pre-training approach for mathematical question representations, namely QuesCo.
We first design two-level question augmentations, including content-level and structure-level, which generate literally diverse question pairs with similar purposes.
Then, to fully exploit the hierarchical information of knowledge concepts, we propose a knowledge hierarchy-aware ranking strategy.
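The contrastive objective behind this kind of pre-training can be sketched with a standard InfoNCE loss, where two augmented views of the same question are positives and the other questions in the batch are negatives; QuesCo's hierarchy-aware ranking is omitted, and the tensors below are placeholders.
```python
# InfoNCE sketch for contrastive question pre-training: augmented views of
# the same question attract, other questions in the batch repel. QuesCo's
# knowledge-hierarchy-aware ranking is omitted; shapes are placeholders.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two augmented views per question."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # pairwise cosine similarities
    targets = torch.arange(z1.size(0))        # view i of z1 matches view i of z2
    return F.cross_entropy(logits, targets)

print(info_nce(torch.randn(8, 128), torch.randn(8, 128)).item())
```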
arXiv Detail & Related papers (2023-01-18T14:23:29Z)
- Automatic Generation of Socratic Subquestions for Teaching Math Word Problems [16.97827669744673]
We explore the ability of large language models (LMs) to generate sequential questions for guiding math word problem-solving.
On both automatic and human quality evaluations, we find that LMs constrained with desirable question properties generate superior questions.
Results suggest that the difficulty level of problems plays an important role in determining whether questioning improves or hinders human performance.
arXiv Detail & Related papers (2022-11-23T10:40:22Z)
- Deep Algorithmic Question Answering: Towards a Compositionally Hybrid AI for Algorithmic Reasoning [0.0]
We argue that the challenge of algorithmic reasoning in question answering can be effectively tackled with a "systems" approach to AI.
We propose an approach to algorithmic reasoning for QA, Deep Algorithmic Question Answering.
arXiv Detail & Related papers (2021-09-16T14:28:18Z)
- Improving Unsupervised Question Answering via Summarization-Informed Question Generation [47.96911338198302]
Question Generation (QG) is the task of generating a plausible question for a ⟨passage, answer⟩ pair.
We make use of freely available news summary data, transforming declarative sentences into appropriate questions using dependency parsing, named entity recognition and semantic role labeling.
The resulting questions are then combined with the original news articles to train an end-to-end neural QG model.
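One of the rule-based steps can be sketched as replacing a named entity in a declarative summary sentence with a wh-word, keeping the entity as the answer; a single NER rule stands in for the paper's fuller dependency-parsing and SRL pipeline, and the label-to-wh-word map is an assumption.
```python
# Sketch of one declarative-to-question rule: replace a named entity with a
# wh-word and keep the entity as the gold answer. The paper's pipeline also
# uses dependency parsing and semantic role labeling; this single NER rule
# and the label map are illustrative assumptions.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm
WH = {"PERSON": "Who", "GPE": "Where", "DATE": "When", "ORG": "What organization"}

def to_question(sentence: str):
    """Return a (question, answer) pair, or None if no usable entity is found."""
    for ent in nlp(sentence).ents:
        if ent.label_ in WH:
            question = sentence.replace(ent.text, WH[ent.label_], 1).rstrip(".") + "?"
            return question, ent.text
    return None

print(to_question("Marie Curie won the Nobel Prize in 1903."))
# expected: ('Who won the Nobel Prize in 1903?', 'Marie Curie')
```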
arXiv Detail & Related papers (2021-09-16T13:08:43Z)
- GeoQA: A Geometric Question Answering Benchmark Towards Multimodal Numerical Reasoning [172.36214872466707]
We focus on solving geometric problems, which requires a comprehensive understanding of textual descriptions, visual diagrams, and theorem knowledge.
We propose a Geometric Question Answering dataset GeoQA, containing 5,010 geometric problems with corresponding annotated programs.
arXiv Detail & Related papers (2021-05-30T12:34:17Z)
- Guiding the Growth: Difficulty-Controllable Question Generation through Step-by-Step Rewriting [30.722526598633912]
We argue that Question Generation (QG) systems should have stronger control over the logic of generated questions.
We propose a novel framework that progressively increases question difficulty through step-by-step rewriting.
arXiv Detail & Related papers (2021-05-25T06:43:13Z)
- Logically Consistent Loss for Visual Question Answering [66.83963844316561]
Current neural-network-based Visual Question Answering (VQA) models cannot ensure logical consistency across related questions because of the independent and identically distributed (i.i.d.) assumption.
We propose a new model-agnostic logic constraint to tackle this issue by formulating a logically consistent loss in the multi-task learning framework.
Experiments confirm that the proposed loss formulation and the introduction of hybrid batches lead to greater consistency as well as better performance.
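A hedged sketch of what such a constraint can look like: if question A logically entails question B, the model's confidence in the correct answer to B should not fall below its confidence on A; the hinge formulation below is an assumption, not the paper's exact loss.
```python
# Sketch of a model-agnostic consistency penalty: when question A logically
# entails question B, confidence in the correct answer to B should be at
# least the confidence on A. This hinge form is an illustrative assumption,
# not the paper's exact formulation; it would be added to the usual VQA loss.
import torch
import torch.nn.functional as F

def consistency_loss(logits_a, logits_b, gold_a, gold_b):
    p_a = F.softmax(logits_a, dim=1).gather(1, gold_a.unsqueeze(1)).squeeze(1)
    p_b = F.softmax(logits_b, dim=1).gather(1, gold_b.unsqueeze(1)).squeeze(1)
    return F.relu(p_a - p_b).mean()  # penalize only entailment violations

logits_a, logits_b = torch.randn(4, 10), torch.randn(4, 10)
gold_a, gold_b = torch.randint(0, 10, (4,)), torch.randint(0, 10, (4,))
print(consistency_loss(logits_a, logits_b, gold_a, gold_b).item())
```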
arXiv Detail & Related papers (2020-11-19T20:31:05Z)
- Reinforced Multi-task Approach for Multi-hop Question Generation [47.15108724294234]
We take up multi-hop question generation, which aims at generating relevant questions based on supporting facts in the context.
We employ multitask learning with the auxiliary task of answer-aware supporting fact prediction to guide the question generator.
We demonstrate the effectiveness of our approach through experiments on the multi-hop question answering dataset HotpotQA.
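The multitask setup can be sketched as a weighted sum of the question-generation loss and an auxiliary supporting-fact classification loss; the weight and the random tensors are placeholders, and the paper's reinforcement-learning component is omitted.
```python
# Sketch of the multitask objective: question-generation cross-entropy plus
# an auxiliary answer-aware supporting-fact prediction loss. The weight and
# random tensors are placeholders; the paper's RL reward shaping is omitted.
import torch
import torch.nn.functional as F

def multitask_loss(qg_logits, qg_targets, fact_logits, fact_labels, aux_weight=0.5):
    # Main task: token-level cross-entropy over the generated question.
    qg = F.cross_entropy(qg_logits.view(-1, qg_logits.size(-1)), qg_targets.view(-1))
    # Auxiliary task: is each context sentence a supporting fact? (binary)
    aux = F.binary_cross_entropy_with_logits(fact_logits, fact_labels)
    return qg + aux_weight * aux

loss = multitask_loss(torch.randn(2, 7, 100), torch.randint(0, 100, (2, 7)),
                      torch.randn(2, 5), torch.randint(0, 2, (2, 5)).float())
print(loss.item())
```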
arXiv Detail & Related papers (2020-04-05T10:16:59Z)