Logical Modelling in CS Education: Bridging the Natural Language Gap
- URL: http://arxiv.org/abs/2504.21384v1
- Date: Wed, 30 Apr 2025 07:34:41 GMT
- Title: Logical Modelling in CS Education: Bridging the Natural Language Gap
- Authors: Tristan Kneisel, Fabian Vehlken, Thomas Zeume
- Abstract summary: An important learning objective for computer science students is to learn how to formalize descriptions of real-world scenarios. We propose a conceptual framework for educational tasks where students choose a vocabulary. We implement educational tasks for designing propositional and first-order vocabularies within the Iltis educational system.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An important learning objective for computer science students is to learn how to formalize descriptions of real-world scenarios in order to subsequently solve real-world challenges using methods and algorithms from the formal foundations of computer science. Two key steps when formalizing with logical formalisms are to (a) choose a suitable vocabulary, that is, e.g., which propositional variables or first-order symbols to use, and with which intended meaning, and then to (b) construct actual formal descriptions, i.e., logical formulas over the chosen vocabulary. While (b) is addressed by several educational support systems for formal foundations of computer science, (a) is so far not addressed at all -- likely because it involves specifying the intended meaning of symbols in natural language. We propose a conceptual framework for educational tasks where students choose a vocabulary, including an enriched language for describing solution spaces as well as an NLP approach for checking student attempts and providing feedback. We implement educational tasks for designing propositional and first-order vocabularies within the Iltis educational system, and report on experiments with data from introductory logic courses for computer science students, comprising more than 25,000 data points.
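To make steps (a) and (b) concrete, here is a minimal sketch of what a chosen propositional vocabulary and a formula over it might look like in code. The scenario, the variable names, and the brute-force truth-table check are illustrative assumptions, not material from the paper or from Iltis.

```python
from itertools import product

# Step (a): choose a vocabulary -- propositional variables together
# with their intended natural-language meaning.
# (Illustrative scenario, not taken from the paper.)
vocabulary = {
    "r": "it is raining",
    "u": "Alice takes an umbrella",
    "w": "Alice gets wet",
}

# Step (b): construct a formal description over that vocabulary:
# "If it rains and Alice has no umbrella, she gets wet."
def formula(r: bool, u: bool, w: bool) -> bool:
    return (not (r and not u)) or w  # (r AND NOT u) -> w

# Brute-force truth table: enumerate all assignments to check, e.g.,
# that the formula is satisfiable but not a tautology.
assignments = list(product([False, True], repeat=len(vocabulary)))
satisfiable = any(formula(*a) for a in assignments)
tautology = all(formula(*a) for a in assignments)
print(f"satisfiable={satisfiable}, tautology={tautology}")
```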
Related papers
- From Prompts to Propositions: A Logic-Based Lens on Student-LLM Interactions [9.032718302451501]
We introduce Prompt2Constraints, a novel method that translates students' prompts into logical constraints.
We use this approach to analyze a dataset of 1,872 prompts from 203 students solving programming tasks.
We find that while successful and unsuccessful attempts tend to use a similar number of constraints overall, students who fail often modify their prompts more significantly. A toy sketch of this constraint-set view follows below.
arXiv Detail & Related papers (2025-04-25T20:58:16Z)
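As a toy illustration of the constraint-set view in the Prompt2Constraints entry above: if each prompt is reduced to a set of logical constraints, then both the constraint counts and the degree of modification between attempts become measurable. The representation and the Jaccard metric below are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch: model each prompt as a set of extracted
# logical constraints and quantify how much a revision changes it.
# (Representation and metric are illustrative assumptions.)

def jaccard_distance(a: set, b: set) -> float:
    """1 - |A intersect B| / |A union B|; 0 means identical sets."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

# Constraints a (hypothetical) extractor might produce for two
# successive prompts by the same student.
attempt_1 = {"output_is_sorted", "input_is_list", "ascending_order"}
attempt_2 = {"output_is_sorted", "input_is_list", "descending_order",
             "handles_empty_input"}

print(len(attempt_1), len(attempt_2))          # similar constraint counts
print(jaccard_distance(attempt_1, attempt_2))  # degree of modification
```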
- LogicLearner: A Tool for the Guided Practice of Propositional Logic Proofs [2.649019945607464]
We develop LogicLearner, a web application for guided logic proof practice. LogicLearner consists of an interface for attempting logic proofs step by step and an automated proof solver that generates solutions on the fly.
arXiv Detail & Related papers (2025-03-25T02:23:08Z)
- LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning [49.58786377307728]
This paper adopts an exploratory approach by introducing a controlled evaluation environment for analogical reasoning.
We analyze the comparative dynamics of inductive, abductive, and deductive inference pipelines.
We investigate advanced paradigms such as hypothesis selection, verification, and refinement, revealing their potential to scale up logical inference.
arXiv Detail & Related papers (2025-02-16T15:54:53Z)
- Data2Concept2Text: An Explainable Multilingual Framework for Data Analysis Narration [42.95840730800478]
This paper presents a complete explainable system that interprets a set of data, abstracts the underlying features, and describes them in a natural language of choice. The system relies on two crucial stages: (i) identifying emerging properties from data and transforming them into abstract concepts, and (ii) converting these concepts into natural language.
arXiv Detail & Related papers (2025-02-13T11:49:48Z)
- BoolQuestions: Does Dense Retrieval Understand Boolean Logic in Language? [88.29075896295357]
We first investigate whether current retrieval systems can comprehend the Boolean logic implied in language.
Through extensive experimental results, we draw the conclusion that current dense retrieval systems do not fully understand Boolean logic in language.
We propose a contrastive continual training method that serves as a strong baseline for the research community.
arXiv Detail & Related papers (2024-11-19T05:19:53Z)
- Exploring Error Types in Formal Languages Among Students of Upper Secondary Education [0.0]
We report on an exploratory study of errors in formal languages among upper secondary education students.
Our results suggest instances of non-functional understanding of concepts.
These findings can serve as a starting point for a broader understanding of how and why students struggle with this topic.
arXiv Detail & Related papers (2024-09-23T14:16:13Z)
- Learning Phonotactics from Linguistic Informants [54.086544221761486]
Our model iteratively selects or synthesizes a data-point according to one of a range of information-theoretic policies.
We find that the information-theoretic policies our model uses to select items to query the informant achieve sample efficiency comparable to, or greater than, fully supervised approaches. A generic sketch of such a query-selection policy follows below.
arXiv Detail & Related papers (2024-05-08T00:18:56Z)
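The information-theoretic policies mentioned in the phonotactics entry above can be illustrated by a generic expected-information-gain heuristic. The hypotheses, candidate items, and hard-label likelihood below are illustrative assumptions; this sketches the general idea, not the paper's model.

```python
import math

# Minimal sketch of an expected-information-gain query policy
# (generic active learning, not the paper's actual model).
# A "hypothesis" labels each candidate item grammatical (True) or not.
hypotheses = {
    "h1": {"ba": True,  "bna": False, "ab": True},
    "h2": {"ba": True,  "bna": True,  "ab": False},
    "h3": {"ba": False, "bna": True,  "ab": True},
}
posterior = {"h1": 0.5, "h2": 0.3, "h3": 0.2}  # current belief

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_info_gain(item):
    """Expected entropy reduction from asking the informant about item."""
    gain = 0.0
    for answer in (True, False):
        # Probability of this answer under the current posterior.
        p_answer = sum(p for h, p in posterior.items()
                       if hypotheses[h][item] == answer)
        if p_answer == 0:
            continue
        # Posterior after observing the answer (Bayes rule, hard labels).
        updated = {h: p / p_answer for h, p in posterior.items()
                   if hypotheses[h][item] == answer}
        gain += p_answer * (entropy(posterior) - entropy(updated))
    return gain

# Query the item whose answer is expected to be most informative.
best = max(hypotheses["h1"], key=expected_info_gain)
print(best, round(expected_info_gain(best), 3))
```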
- Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z)
- Language Models as Inductive Reasoners [125.99461874008703]
We propose a new paradigm (task) for inductive reasoning, which is to induce natural language rules from natural language facts.
We create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language.
We provide the first comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts.
arXiv Detail & Related papers (2022-12-21T11:12:14Z)
- APOLLO: A Simple Approach for Adaptive Pretraining of Language Models for Logical Reasoning [73.3035118224719]
We propose APOLLO, an adaptively pretrained language model that has improved logical reasoning abilities.
APOLLO performs comparably on ReClor and outperforms baselines on LogiQA.
arXiv Detail & Related papers (2022-12-19T07:40:02Z)
- Logic Tensor Networks [9.004005678155023]
We present Logic Tensor Networks (LTN), a neurosymbolic formalism and computational model that supports learning and reasoning.
We show that LTN provides a uniform language for the specification and the computation of several AI tasks. A minimal sketch of this style of fuzzy semantics follows below.
arXiv Detail & Related papers (2020-12-25T22:30:18Z)
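To give a flavor of how a logical formula becomes a differentiable computation in an LTN-style "Real Logic" setting, here is a minimal sketch using product t-norm connectives over truth values in [0, 1]. The connective choices and predicate values are illustrative assumptions and do not use the actual LTN library API.

```python
# Minimal sketch of differentiable fuzzy-logic connectives of the kind
# LTN-style "Real Logic" builds on: truth values live in [0, 1], so
# logical formulas become smooth functions amenable to gradient-based
# learning. (Connectives and predicate values are illustrative.)

def f_and(a: float, b: float) -> float:
    return a * b                      # product t-norm

def f_or(a: float, b: float) -> float:
    return a + b - a * b              # probabilistic sum (dual co-norm)

def f_not(a: float) -> float:
    return 1.0 - a

def f_implies(a: float, b: float) -> float:
    return f_or(f_not(a), b)          # fuzzified material implication

# Soft predicate values, e.g. outputs of small neural networks.
smokes_alice, friends_alice_bob, smokes_bob = 0.9, 0.8, 0.3

# Fuzzy truth of: (Smokes(alice) AND Friends(alice, bob)) -> Smokes(bob)
truth = f_implies(f_and(smokes_alice, friends_alice_bob), smokes_bob)
print(f"{truth:.3f}")  # a satisfaction degree one could maximize in training
```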