COGENT: A Curriculum-oriented Framework for Generating Grade-appropriate Educational Content
- URL: http://arxiv.org/abs/2506.09367v1
- Date: Wed, 11 Jun 2025 03:27:50 GMT
- Title: COGENT: A Curriculum-oriented Framework for Generating Grade-appropriate Educational Content
- Authors: Zhengyuan Liu, Stella Xin Yin, Dion Hoe-Lian Goh, Nancy F. Chen
- Abstract summary: COGENT is a curriculum-oriented framework for generating grade-appropriate educational content. We incorporate three curriculum components (science concepts, core ideas, and learning objectives). We control readability through length, vocabulary, and sentence complexity.
- Score: 35.360208404408496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While Generative AI has demonstrated strong potential and versatility in content generation, its application to educational contexts presents several challenges. Models often fail to align with curriculum standards and maintain grade-appropriate reading levels consistently. Furthermore, STEM education poses additional challenges in balancing scientific explanations with everyday language when introducing complex and abstract ideas and phenomena to younger students. In this work, we propose COGENT, a curriculum-oriented framework for generating grade-appropriate educational content. We incorporate three curriculum components (science concepts, core ideas, and learning objectives), control readability through length, vocabulary, and sentence complexity, and adopt a "wonder-based" approach to increase student engagement and interest. We conduct a multi-dimensional evaluation via both LLM-as-a-judge and human expert analysis. Experimental results show that COGENT consistently produces grade-appropriate passages that are comparable to or superior to human references. Our work establishes a viable approach for scaling adaptive and high-quality learning resources.
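The abstract describes the generation pipeline only at a high level. As a rough illustration, a COGENT-style prompt that combines the three curriculum components with readability controls and a wonder-based opening might be assembled as follows; the function, field names, and word limit are assumptions for this sketch, not the authors' implementation.

```python
# Hypothetical sketch of a COGENT-style prompt builder. The curriculum fields
# and readability controls mirror the abstract's description, but every name
# and threshold below is an illustrative assumption, not the authors' code.
from dataclasses import dataclass


@dataclass
class Curriculum:
    science_concept: str     # e.g., "states of matter"
    core_idea: str           # e.g., "matter changes state when heated or cooled"
    learning_objective: str  # e.g., "explain why ice melts in the sun"


def build_cogent_prompt(curriculum: Curriculum, grade: int,
                        max_words: int = 250) -> str:
    """Compose a generation prompt grounded in curriculum components, with
    readability constraints and a wonder-based opening."""
    return "\n".join([
        f"Write a science passage for grade {grade} students.",
        f"Science concept: {curriculum.science_concept}",
        f"Core idea: {curriculum.core_idea}",
        f"Learning objective: {curriculum.learning_objective}",
        f"Keep the passage under {max_words} words, use vocabulary and "
        f"sentence structures appropriate for grade {grade}, and open "
        "with a wonder-evoking question to spark curiosity.",
    ])


if __name__ == "__main__":
    c = Curriculum("states of matter",
                   "matter changes state when heated or cooled",
                   "explain why ice melts in the sun")
    print(build_cogent_prompt(c, grade=3))
```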
Related papers
- Unveiling the Learning Mind of Language Models: A Cognitive Framework and Empirical Study [50.065744358362345]
Large language models (LLMs) have shown impressive capabilities across tasks such as mathematics, coding, and reasoning. Yet their learning ability, which is crucial for adapting to dynamic environments and acquiring new knowledge, remains underexplored.
arXiv Detail & Related papers (2025-06-16T13:24:50Z)
- A Structured Unplugged Approach for Foundational AI Literacy in Primary Education [7.495145157323768]
We propose a structured teaching approach that fosters foundational AI literacy in primary students. Our results indicate improvements in terminology understanding and usage, feature description, logical reasoning, and evaluative skills. The approach proved engaging, with students particularly enjoying activities that linked AI concepts to real-world reasoning.
arXiv Detail & Related papers (2025-05-27T16:23:57Z)
- Stay Hungry, Stay Foolish: On the Extended Reading Articles Generation with LLMs [3.2962799070467432]
This research explores the potential of Large Language Models (LLMs) to streamline the creation of educational materials. Using the TED-Ed Dig Deeper sections as an initial exploration, we investigate how supplementary articles can be enriched with contextual knowledge. Experimental evaluations demonstrate that our model produces high-quality content and accurate course suggestions.
arXiv Detail & Related papers (2025-04-21T10:35:48Z)
- Form-Substance Discrimination: Concept, Cognition, and Pedagogy [55.2480439325792]
This paper examines form-substance discrimination as an essential learning outcome for curriculum development in higher education. We propose practical strategies for fostering this ability through curriculum design, assessment practices, and explicit instruction.
arXiv Detail & Related papers (2025-04-01T04:15:56Z)
- Science Out of Its Ivory Tower: Improving Accessibility with Reinforcement Learning [15.03718014789799]
We introduce a reinforcement learning framework that fine-tunes a language model to rewrite scholarly abstracts into more comprehensible versions. Our best model adjusts the readability level of scholarly abstracts by approximately six U.S. grade levels. We envision this work as a step toward bridging the gap between scholarly research and the general public.
arXiv Detail & Related papers (2024-10-22T15:14:54Z)
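The summary above does not detail the reward used for fine-tuning. A minimal readability reward, assuming the Flesch-Kincaid grade estimate from the real textstat package plus a hand-rolled shaping term, could look like this sketch:

```python
# Hypothetical readability reward for RL fine-tuning toward a target U.S.
# grade level. textstat.flesch_kincaid_grade is a real estimator; the
# inverse-distance shaping is an illustrative assumption, not the paper's
# reward design.
import textstat


def readability_reward(rewrite: str, target_grade: float = 8.0) -> float:
    """Score a rewrite by how close its estimated grade level is to the
    target; returns values in (0, 1], peaking at 1.0 on an exact match."""
    grade = textstat.flesch_kincaid_grade(rewrite)
    return 1.0 / (1.0 + abs(grade - target_grade))


# Example: a dense abstract sentence vs. a plain-language rewrite.
dense = ("We elucidate the mechanistic underpinnings of thermally induced "
         "phase transitions in crystalline solids.")
plain = "We explain how heat makes solid crystals melt."
print(readability_reward(dense), readability_reward(plain))
```

An RL loop would score each sampled rewrite with such a function; the inverse-distance shaping keeps the reward bounded and peaks at the target grade.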
- Systematic Task Exploration with LLMs: A Study in Citation Text Generation [63.50597360948099]
Large language models (LLMs) bring unprecedented flexibility in defining and executing complex, creative natural language generation (NLG) tasks.
We propose a three-component research framework that consists of systematic input manipulation, reference data, and output measurement.
We use this framework to explore citation text generation -- a popular scholarly NLP task that lacks consensus on the task definition and evaluation metric.
arXiv Detail & Related papers (2024-07-04T16:41:08Z)
- Understanding the Progression of Educational Topics via Semantic Matching [0.9246281666115259]
Education systems are dynamically changing to accommodate technological advances, industrial and societal needs, and to enhance students' learning journeys.
Curriculum specialists and educators constantly revise taught subjects across educational grades to identify gaps, introduce new learning topics, and enhance learning outcomes.
Having nuanced data about subjects, topics, and learning outcomes structured within a dataset empowers us to leverage data science to better understand the progression of various learning topics.
arXiv Detail & Related papers (2024-02-10T08:24:29Z)
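The matching procedure is not spelled out in this summary. A minimal version of semantic matching between topics from two grade levels, assuming the real sentence-transformers library with an illustrative model choice and made-up topic lists, might look like:

```python
# Hypothetical sketch of matching topics across grade levels. The
# sentence-transformers library and util.cos_sim are real; the model
# choice and topic lists are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

grade3_topics = ["Properties of solids, liquids, and gases",
                 "Basic needs of living things"]
grade5_topics = ["Changes of state: melting, freezing, evaporation",
                 "Ecosystems and food chains"]

# Embed each grade's topics and compute pairwise cosine similarities.
emb3 = model.encode(grade3_topics, convert_to_tensor=True)
emb5 = model.encode(grade5_topics, convert_to_tensor=True)
similarity = util.cos_sim(emb3, emb5)

# For each lower-grade topic, report its closest higher-grade topic;
# a crude proxy for how the topic progresses across the curriculum.
for i, topic in enumerate(grade3_topics):
    j = int(similarity[i].argmax())
    print(f"{topic!r} -> {grade5_topics[j]!r} "
          f"(cosine={float(similarity[i][j]):.2f})")
```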
- Adapting Large Language Models for Education: Foundational Capabilities, Potentials, and Challenges [60.62904929065257]
Large language models (LLMs) offer the possibility of resolving this issue by comprehending individual requests.
This paper reviews the recently emerged LLM research related to educational capabilities, including mathematics, writing, programming, reasoning, and knowledge-based question answering.
arXiv Detail & Related papers (2023-12-27T14:37:32Z)
- Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
- Heterogeneous Representation Learning: A Review [66.12816399765296]
Heterogeneous Representation Learning (HRL) presents unique challenges.
We present a unified learning framework that is able to model most existing learning settings with heterogeneous inputs.
We highlight challenges that remain under-explored in HRL and present future research directions.
arXiv Detail & Related papers (2020-04-28T05:12:31Z)