Using Large Language Models for (De-)Formalization and Natural Argumentation Exercises for Beginner's Students
- URL: http://arxiv.org/abs/2304.06186v3
- Date: Wed, 10 Apr 2024 14:19:26 GMT
- Title: Using Large Language Models for (De-)Formalization and Natural Argumentation Exercises for Beginner's Students
- Authors: Merlin Carl
- Abstract summary: We describe two systems currently being developed that use large language models for the automatized correction of (i) exercises in translating back and forth between natural language and the languages of propositional logic and first-order predicate logic.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We describe two systems currently being developed that use large language models for the automatized correction of (i) exercises in translating back and forth between natural language and the languages of propositional logic and first-order predicate logic and (ii) exercises in writing simple arguments in natural language in non-mathematical scenarios.
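The abstract describes the correction mechanism only at a high level. As a rough illustration of system (i), the following Python sketch shows how an exercise corrector might prompt a large language model to grade a student's translation of a natural-language sentence into propositional logic. The prompt wording, the OpenAI model name, and the JSON grading format are assumptions made for this example, not the prompts or interface of the systems described in the paper; system (ii) would follow the same pattern with an argumentation-specific prompt.

```python
# Minimal sketch of an LLM-based corrector for formalization exercises.
# Assumptions (not from the paper): the OpenAI chat API, the model name
# "gpt-4o", the prompt wording, and the JSON grading format.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def check_formalization(sentence: str, student_formula: str) -> dict:
    """Ask the model whether student_formula correctly formalizes sentence."""
    prompt = (
        "You are grading a beginner's logic exercise.\n"
        f"Natural-language sentence: {sentence}\n"
        f"Student's propositional-logic formalization: {student_formula}\n"
        'Answer as JSON with keys "correct" (true/false) and "feedback".'
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # request machine-readable output
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)


# Example exercise: "If it rains, then the street gets wet." with p = "it rains"
# and q = "the street gets wet"; a correct student answer would be "p -> q".
print(check_formalization("If it rains, then the street gets wet.", "p -> q"))
```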
Related papers
- Improving Arithmetic Reasoning Ability of Large Language Models through Relation Tuples, Verification and Dynamic Feedback [14.938401898546553]
We propose to use a semi-structured form to represent reasoning steps of large language models.
Specifically, we use relations, which are not only human-friendly but also machine-friendly and easier to verify than natural language.
arXiv Detail & Related papers (2024-06-25T18:21:00Z)
- Learning Phonotactics from Linguistic Informants [54.086544221761486]
Our model iteratively selects or synthesizes a data-point according to one of a range of information-theoretic policies.
We find that the information-theoretic policies that our model uses to select items to query the informant achieve sample efficiency comparable to, or greater than, fully supervised approaches.
arXiv Detail & Related papers (2024-05-08T00:18:56Z)
- NLAS-multi: A Multilingual Corpus of Automatically Generated Natural Language Argumentation Schemes [4.015890309289342]
We present an effective methodology for the automatic generation of natural language arguments in different topics and languages.
We also present a set of solid baselines and fine-tuned models for the automatic identification of argumentation schemes.
arXiv Detail & Related papers (2024-02-22T11:31:50Z)
- Planning with Logical Graph-based Language Model for Instruction Generation [9.70880913062245]
We propose a graph-based language model, Logical-GLM, to infuse logic into language models.
We generate logical skeletons to guide language model training, infusing domain knowledge into language models.
Our approach can generate instructional texts with more correct logic owing to the internalized domain knowledge.
arXiv Detail & Related papers (2023-08-26T06:28:14Z)
- Soft Language Clustering for Multilingual Model Pre-training [57.18058739931463]
We propose XLM-P, which contextually retrieves prompts as flexible guidance for encoding instances conditionally.
Our XLM-P enables (1) lightweight modeling of language-invariant and language-specific knowledge across languages, and (2) easy integration with other multilingual pre-training methods.
arXiv Detail & Related papers (2023-06-13T08:08:08Z)
- Improving the Diproche CNL through Autoformalization via Large Language Models [0.0]
The Diproche system is an automated proof checker for texts written in a controlled fragment of German.
In this paper, we explore the possibility of prompting large language models for autoformalization in the context of Diproche.
arXiv Detail & Related papers (2023-03-12T20:11:25Z)
- Benchmarking Language Models for Code Syntax Understanding [79.11525961219591]
Pre-trained language models have demonstrated impressive performance in both natural language processing and program understanding.
In this work, we perform the first thorough benchmarking of the state-of-the-art pre-trained models for identifying the syntactic structures of programs.
Our findings point out key limitations of existing pre-training methods for programming languages, and suggest the importance of modeling code syntactic structures.
arXiv Detail & Related papers (2022-10-26T04:47:18Z)
- Formal Specifications from Natural Language [3.1806743741013657]
We study the ability of language models to translate natural language into formal specifications with complex semantics.
In particular, we fine-tune off-the-shelf language models on three datasets consisting of structured English sentences.
arXiv Detail & Related papers (2022-06-04T10:49:30Z)
- Multilingual Generative Language Models for Zero-Shot Cross-Lingual Event Argument Extraction [80.61458287741131]
We present a study on leveraging multilingual pre-trained generative language models for zero-shot cross-lingual event argument extraction (EAE).
By formulating EAE as a language generation task, our method effectively encodes event structures and captures the dependencies between arguments.
Our proposed model finetunes multilingual pre-trained generative language models to generate sentences that fill in the language-agnostic template with arguments extracted from the input passage.
arXiv Detail & Related papers (2022-03-15T23:00:32Z)
- Constrained Language Models Yield Few-Shot Semantic Parsers [73.50960967598654]
We explore the use of large pretrained language models as few-shot semantic parsers.
The goal in semantic parsing is to generate a structured meaning representation given a natural language input.
We use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to a target meaning representation.
arXiv Detail & Related papers (2021-04-18T08:13:06Z)
- SLM: Learning a Discourse Language Representation with Sentence Unshuffling [53.42814722621715]
We introduce Sentence-level Language Modeling, a new pre-training objective for learning a discourse language representation.
We show that this pre-training objective improves the performance of the original BERT by large margins.
arXiv Detail & Related papers (2020-10-30T13:33:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.