FLARE v2: A Recursive Framework for Program Comprehension Across Common Teaching Languages and Levels of Abstraction
- URL: http://arxiv.org/abs/2512.09261v2
- Date: Mon, 15 Dec 2025 19:12:27 GMT
- Title: FLARE v2: A Recursive Framework for Program Comprehension Across Common Teaching Languages and Levels of Abstraction
- Authors: Justin Heath
- Abstract summary: FLARE v2 is an account of how program meaning can be described across abstraction scales in common teaching languages. It reframes FLARE v1's tiers as one cycle: identify bounded elements (Receives, Sends, Effects, Shares), analyse bindings along two dimensions (Causal-Temporal and Communicative), and treat the bound set as a new element at the next scale.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Building on the classroom framework in Heath et al. (2025), this paper proposes FLARE v2 as a recursive, semiotically informed account of how program meaning can be described across abstraction scales in common teaching languages. It reframes FLARE v1's tiers as one cycle: identify bounded elements (Receives, Sends, Effects, Shares), analyse bindings along two dimensions (Causal-Temporal and Communicative), and treat the bound set as a new element at the next scale. Causal-Temporal binding has three subtypes - Sequential, Branch, and Event - to distinguish user-authored control flow from event-driven control whose dispatch is hidden in the runtime. A Compositional Ladder visualises the same compositional move from blocks and statements through segments and systems. FLARE v2 is scoped to imperative and event-driven environments typical of primary and lower-secondary curricula. Above the system layer, behaviour is increasingly shaped by interaction between code and operating context (scheduling, infrastructure, permissions, contracts, failures, platform policy). Here, the element-and-binding vocabulary remains a structural baseline, but continuity of explanation typically requires overlays that make environmental constraints explicit. Event binding and overlays serve a common pedagogical role - preserving coherent structural reasoning where key causal mechanisms are not fully visible in the authored artefact. OOP design reasoning, explicit concurrency models, distributed systems, and functional paradigms are treated as future extensions; implementation and evaluation are left for future work.
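The recursive cycle the abstract describes can be sketched as a small data model: bounded elements with their four interface sets, bindings along the two analytic dimensions, and a compose step that treats a bound set as a new element at the next scale. This is a minimal illustrative sketch, not part of the paper; every name (`Element`, `Binding`, `compose`) and the boundary-merge rule are assumptions introduced here.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional

class CausalTemporal(Enum):
    """Subtypes of Causal-Temporal binding named in the abstract."""
    SEQUENTIAL = auto()
    BRANCH = auto()
    EVENT = auto()  # dispatch hidden in the runtime

@dataclass
class Element:
    """A bounded element at some abstraction scale (block, segment, system...)."""
    name: str
    scale: int = 0
    receives: List[str] = field(default_factory=list)
    sends: List[str] = field(default_factory=list)
    effects: List[str] = field(default_factory=list)
    shares: List[str] = field(default_factory=list)

@dataclass
class Binding:
    """A binding between two elements along the two dimensions."""
    source: str
    target: str
    causal_temporal: Optional[CausalTemporal] = None
    communicative: Optional[str] = None  # e.g. a shared variable or message

def compose(name: str, elements: List[Element], bindings: List[Binding]) -> Element:
    """One FLARE cycle: treat a bound set of elements as a single new
    element at the next scale.  As an assumed merge rule, only interactions
    that cross the new boundary survive; internal ones are absorbed."""
    internal = {e.name for e in elements}
    new = Element(name=name, scale=max(e.scale for e in elements) + 1)
    for e in elements:
        new.receives += [r for r in e.receives if r not in internal]
        new.sends += [s for s in e.sends if s not in internal]
        new.effects += e.effects  # effects remain visible at the next scale
        new.shares += [s for s in e.shares if s not in internal]
    return new
```

For example, two statements bound sequentially (`read_line` feeding `validate`) compose into a segment whose external interface keeps `stdin` and the outgoing result while the internal hand-off disappears; the same `compose` call can then be applied to segments to form a system, which is the recursive move of the Compositional Ladder.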
Related papers
- Multi-Agent Procedural Graph Extraction with Structural and Logical Refinement [66.51979814832332]
The model formulates procedural graph extraction as a multi-round reasoning process with dedicated structural and logical refinement. Experiments demonstrate that the model achieves substantial improvements in both structural correctness and logical consistency over strong baselines.
arXiv Detail & Related papers (2026-01-27T04:00:48Z) - LSRIF: Logic-Structured Reinforcement Learning for Instruction Following [56.517329105764475]
We propose LSRIF, a logic-structured training framework that explicitly models instruction logic. Experiments show LSRIF brings significant improvements in instruction following and general reasoning.
arXiv Detail & Related papers (2026-01-10T05:11:38Z) - Event Extraction in Large Language Model [99.94321497574805]
We argue that EE should be viewed as a system component that provides a cognitive scaffold for LLM-centered solutions. This survey covers EE in text and multimodal settings, organizing tasks and taxonomy, and tracing method evolution from rule-based and neural models to instruction-driven and generative frameworks.
arXiv Detail & Related papers (2025-12-22T16:22:14Z) - Systematically Thinking about the Complexity of Code Structuring Exercises at Introductory Level [1.2207782754338876]
We introduce a framework for assessing the complexity of code structuring tasks. Students must identify and separate meaningful abstractions within existing, unstructured code. The framework is designed to support the development of educational tasks that build students' skills with DA.
arXiv Detail & Related papers (2025-12-05T21:57:33Z) - Prompt Decorators: A Declarative and Composable Syntax for Reasoning, Formatting, and Control in LLMs [0.0]
This paper introduces Prompt Decorators, a declarative, composable syntax that governs behavior through compact control tokens. Each decorator modifies a behavioral dimension, such as verbose reasoning style, structure, or tone, without changing task content. It defines a unified syntax, scoping model, and deterministic processing pipeline enabling predictable and auditable behavior composition.
arXiv Detail & Related papers (2025-10-21T17:35:49Z) - CAM: A Constructivist View of Agentic Memory for LLM-Based Reading Comprehension [55.29309306566238]
Current Large Language Models (LLMs) are confronted with overwhelming information volume when comprehending long-form documents. This challenge raises the imperative of a cohesive memory module, which can elevate vanilla LLMs into autonomous reading agents. We draw inspiration from Jean Piaget's Constructivist Theory, illuminating three traits of the agentic memory: structured schemata, flexible assimilation, and dynamic accommodation.
arXiv Detail & Related papers (2025-10-07T02:16:30Z) - Talk in Pieces, See in Whole: Disentangling and Hierarchical Aggregating Representations for Language-based Object Detection [39.748035737067745]
We propose restructuring linguistic representations according to the hierarchical relations within sentences for language-based object detection. A key insight is the necessity of disentangling textual tokens into core components (objects, attributes, and relations; "talk in pieces") and subsequently aggregating them into hierarchically structured sentence-level representations. Experimental results under the OmniLabel benchmark show a 24% performance improvement, demonstrating the importance of linguistic compositionality.
arXiv Detail & Related papers (2025-09-29T02:14:26Z) - Data Dependency-Aware Code Generation from Enhanced UML Sequence Diagrams [54.528185120850274]
We propose a novel step-by-step code generation framework named API2Dep. First, we introduce an enhanced Unified Modeling Language (UML) API diagram tailored for service-oriented architectures. Second, recognizing the critical role of data flow, we introduce a dedicated data dependency inference task.
arXiv Detail & Related papers (2025-08-05T12:28:23Z) - $I^2G$: Generating Instructional Illustrations via Text-Conditioned Diffusion [31.2362624526101]
We propose a language-driven framework that decomposes procedural text into coherent visual instructions. Our approach models the linguistic structure of instructional content by organizing it into goal statements and sequential steps, then conditioning visual generation on these linguistic elements. This work contributes to the growing body of research on grounding procedural language in visual content, with applications spanning education, task guidance, and multimodal language understanding.
arXiv Detail & Related papers (2025-05-22T09:10:09Z) - Data2Concept2Text: An Explainable Multilingual Framework for Data Analysis Narration [42.95840730800478]
This paper presents a complete explainable system that interprets a set of data, abstracts the underlying features, and describes them in a natural language of choice. The system relies on two crucial stages: (i) identifying emerging properties from data and transforming them into abstract concepts, and (ii) converting these concepts into natural language.
arXiv Detail & Related papers (2025-02-13T11:49:48Z) - Seg2Act: Global Context-aware Action Generation for Document Logical Structuring [45.55145491566147]
We introduce Seg2Act, an end-to-end, generation-based method for document logical structuring.
Seg2Act iteratively generates the action sequence via a global context-aware generative model, and simultaneously updates its global context and current logical structure.
Experiments on ChCatExt and HierDoc datasets demonstrate the superior performance of Seg2Act in both supervised and transfer learning settings.
arXiv Detail & Related papers (2024-10-09T11:58:40Z) - Unsupervised Mutual Learning of Discourse Parsing and Topic Segmentation in Dialogue [37.618612723025784]
In dialogue systems, discourse plays a crucial role in managing conversational focus and coordinating interactions. It consists of two key structures: rhetorical structure and topic structure. We introduce a unified representation that integrates rhetorical and topic structures, ensuring semantic consistency between them. We propose an unsupervised mutual learning framework (UMLF) that jointly models rhetorical and topic structures, allowing them to mutually reinforce each other without requiring additional annotations.
arXiv Detail & Related papers (2024-05-30T08:10:50Z) - Learning Planning Abstractions from Language [28.855381137615275]
This paper presents a framework for learning state and action abstractions in sequential decision-making domains.
Our framework, planning abstraction from language (PARL), utilizes language-annotated demonstrations to automatically discover a symbolic and abstract action space.
arXiv Detail & Related papers (2024-05-06T21:24:22Z) - VLLMs Provide Better Context for Emotion Understanding Through Common Sense Reasoning [66.23296689828152]
We leverage the capabilities of Vision-and-Large-Language Models to enhance in-context emotion classification. First, we prompt VLLMs to generate natural language descriptions of the subject's apparent emotion in relation to the visual context. Second, the descriptions, along with the visual input, are used to train a transformer-based architecture.
arXiv Detail & Related papers (2024-04-10T15:09:15Z) - ViStruct: Visual Structural Knowledge Extraction via Curriculum Guided Code-Vision Representation [82.88378582161717]
State-of-the-art vision-language models (VLMs) still have limited performance in structural knowledge extraction.
We present ViStruct, a training framework to learn VLMs for effective visual structural knowledge extraction.
arXiv Detail & Related papers (2023-11-22T09:23:34Z) - Embodied Concept Learner: Self-supervised Learning of Concepts and Mapping through Instruction Following [101.55727845195969]
We propose the Embodied Concept Learner (ECL) in an interactive 3D environment.
A robot agent can ground visual concepts, build semantic maps and plan actions to complete tasks.
ECL is fully transparent and step-by-step interpretable in long-term planning.
arXiv Detail & Related papers (2023-04-07T17:59:34Z) - The Whole Truth and Nothing But the Truth: Faithful and Controllable
Dialogue Response Generation with Dataflow Transduction and Constrained
Decoding [65.34601470417967]
We describe a hybrid architecture for dialogue response generation that combines the strengths of neural language modeling and rule-based generation.
Our experiments show that this system outperforms both rule-based and learned approaches in human evaluations of fluency, relevance, and truthfulness.
arXiv Detail & Related papers (2022-09-16T09:00:49Z) - Nested and Balanced Entity Recognition using Multi-Task Learning [0.0]
This paper introduces a partly-layered network architecture that deals with the complexity of overlapping and nested cases.
We train and evaluate this architecture to recognise two kinds of entities: Concepts (CR) and Named Entities (NER).
Our approach achieves state-of-the-art NER performances, while it outperforms previous CR approaches.
arXiv Detail & Related papers (2021-06-11T07:52:32Z) - How could Neural Networks understand Programs? [67.4217527949013]
It is difficult to build a model to better understand programs by either directly applying off-the-shelf NLP pre-training techniques to the source code, or adding features to the model by heuristics.
We propose a novel program semantics learning paradigm, that the model should learn from information composed of (1) the representations which align well with the fundamental operations in operational semantics, and (2) the information of environment transition.
arXiv Detail & Related papers (2021-05-10T12:21:42Z) - Weakly Supervised Temporal Action Localization Through Learning Explicit Subspaces for Action and Context [151.23835595907596]
Weakly supervised temporal action localization (WS-TAL) methods learn to localize temporal starts and ends of action instances in a video under only video-level supervision.
We introduce a framework that learns two feature subspaces respectively for actions and their context.
The proposed approach outperforms state-of-the-art WS-TAL methods on three benchmarks.
arXiv Detail & Related papers (2021-03-30T08:26:53Z) - Bidirectional Graph Reasoning Network for Panoptic Segmentation [126.06251745669107]
We introduce a Bidirectional Graph Reasoning Network (BGRNet) to mine the intra-modular and inter-modular relations within and between foreground things and background stuff classes.
BGRNet first constructs image-specific graphs in both instance and semantic segmentation branches that enable flexible reasoning at the proposal level and class level.
arXiv Detail & Related papers (2020-04-14T02:32:10Z)