Systematic teaching of UML and behavioral diagrams
- URL: http://arxiv.org/abs/2410.17849v1
- Date: Wed, 23 Oct 2024 13:18:54 GMT
- Title: Systematic teaching of UML and behavioral diagrams
- Authors: Anja Metzner
- Abstract summary: This article discusses the systematic acquisition of the skills required for creating UML diagrams.
The more unusual question types are related to images, such as questions about image annotation.
All the demonstrated exercises are suitable for both digital and handwritten training or exams.
- Abstract: When studying software engineering, learning to create UML diagrams is crucial. Just as an architect would never build a house without a building plan, designing software architectures is essential for developing high-quality software. UML diagrams are a standardized notation for visualizing software architectures and software behavior. The research question that inspired this work was how to effectively evaluate hand-drawn diagrams without relying on model parsers. This paper presents the findings of that investigation. It discusses the systematic acquisition of the skills required for creating UML diagrams; well-formed activity diagrams are a particular highlight. In addition, the paper provides a variety of exercises built on recommended question types. The more unusual of these are image-related, such as annotating an image, finding hotspots on an image, and positioning a target on an image. All the demonstrated exercises are suitable for both digital and handwritten training or exams.
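The hotspot and target-positioning question types mentioned in the abstract can be auto-graded by checking whether a student's click coordinate falls inside a predefined region of the diagram image. The sketch below is a minimal illustration of that idea using a standard ray-casting point-in-polygon test; the function names, coordinates, and scoring scheme are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of auto-grading an image "hotspot" question:
# the student clicks a point on the diagram image, and we check whether
# the click lies inside a predefined polygonal hotspot region.

def point_in_polygon(x: float, y: float, polygon: list[tuple[float, float]]) -> bool:
    """Ray-casting test: is (x, y) inside the polygon given as a vertex list?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending rightward from (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def grade_hotspot(click: tuple[float, float],
                  hotspot: list[tuple[float, float]]) -> int:
    """Award 1 point if the student's click lies inside the hotspot, else 0."""
    return 1 if point_in_polygon(click[0], click[1], hotspot) else 0

# Example: a rectangular hotspot around a decision node in an activity diagram.
decision_node = [(100, 50), (180, 50), (180, 90), (100, 90)]
print(grade_hotspot((140, 70), decision_node))  # click inside the region -> 1
print(grade_hotspot((10, 10), decision_node))   # click outside the region -> 0
```

For handwritten exams the same check could be applied after scanning, provided the hotspot coordinates are registered against the scanned image.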
Related papers
- Can Large Language Models Understand Symbolic Graphics Programs? [136.5639211254501]
Symbolic graphics programs are popular in computer graphics.
We create a benchmark for the semantic visual understanding of symbolic graphics programs.
We find that LLMs considered stronger at reasoning generally perform better.
arXiv Detail & Related papers (2024-08-15T17:59:57Z) - Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Model [41.103167385290085]
We design a multi-modal self-instruct, utilizing large language models and their code capabilities to synthesize massive abstract images and visual reasoning instructions.
Our benchmark, constructed with simple lines and geometric elements, exposes the shortcomings of most advanced LMMs.
To verify the quality of our synthetic data, we fine-tune an LMM using 62,476 synthetic chart, table and road map instructions.
arXiv Detail & Related papers (2024-07-09T17:18:27Z) - Ask Questions with Double Hints: Visual Question Generation with Answer-awareness and Region-reference [107.53380946417003]
We propose a novel learning paradigm to generate visual questions with answer-awareness and region-reference.
We develop a simple methodology to self-learn the visual hints without introducing any additional human annotations.
arXiv Detail & Related papers (2024-07-06T15:07:32Z) - From Image to UML: First Results of Image Based UML Diagram Generation Using LLMs [1.961305559606562]
In software engineering processes, systems are first specified using a modeling language.
Large Language Models (LLMs) are used to generate the formal representation of (UML) models from a given drawing.
More specifically, we have evaluated the capabilities of different LLMs to convert images of class diagrams into the actual models represented in the images.
arXiv Detail & Related papers (2024-04-17T13:33:11Z) - A Picture Is Worth a Thousand Words: Exploring Diagram and Video-Based OOP Exercises to Counter LLM Over-Reliance [2.1490831374964587]
Large language models (LLMs) can effectively solve a range of more complex object-oriented programming (OOP) exercises with text-based specifications.
This raises concerns about academic integrity, as students might use these models to complete assignments unethically.
We propose an innovative approach to formulating OOP tasks using diagrams and videos, as a way to foster problem-solving and deter students from a copy-and-prompt approach in OOP courses.
arXiv Detail & Related papers (2024-03-13T10:21:29Z) - mPLUG-PaperOwl: Scientific Diagram Analysis with the Multimodal Large Language Model [73.38800189095173]
This work focuses on strengthening the multi-modal diagram analysis ability of Multimodal LLMs.
By parsing Latex source files of high-quality papers, we carefully build a multi-modal diagram understanding dataset M-Paper.
M-Paper is the first dataset to support joint comprehension of multiple scientific diagrams, including figures and tables in the format of images or Latex codes.
arXiv Detail & Related papers (2023-11-30T04:43:26Z) - DiagrammerGPT: Generating Open-Domain, Open-Platform Diagrams via LLM Planning [62.51232333352754]
Text-to-image (T2I) generation has seen significant growth over the past few years.
Despite this, there has been little work on generating diagrams with T2I models.
We present DiagrammerGPT, a novel two-stage text-to-diagram generation framework.
We show that our framework produces more accurate diagrams, outperforming existing T2I models.
arXiv Detail & Related papers (2023-10-18T17:37:10Z) - InstructCV: Instruction-Tuned Text-to-Image Diffusion Models as Vision Generalists [66.85125112199898]
We develop a unified language interface for computer vision tasks that abstracts away task-specific design choices.
Our model, dubbed InstructCV, performs competitively compared to other generalist and task-specific vision models.
arXiv Detail & Related papers (2023-09-30T14:26:43Z) - SGEITL: Scene Graph Enhanced Image-Text Learning for Visual Commonsense Reasoning [61.57887011165744]
Multimodal Transformers have made great progress in the task of Visual Commonsense Reasoning.
We propose a Scene Graph Enhanced Image-Text Learning framework to incorporate visual scene graphs in commonsense reasoning.
arXiv Detail & Related papers (2021-12-16T03:16:30Z) - Understanding the Role of Scene Graphs in Visual Question Answering [26.02889386248289]
We conduct experiments on the GQA dataset which presents a challenging set of questions requiring counting, compositionality and advanced reasoning capability.
We adapt image + question architectures for use with scene graphs, evaluate various scene graph generation techniques for unseen images, and propose a training curriculum to leverage both human-annotated and auto-generated scene graphs.
We present a multi-faceted study into the use of scene graphs for Visual Question Answering, making this work the first of its kind.
arXiv Detail & Related papers (2021-01-14T07:27:37Z) - Classification of Reverse-Engineered Class Diagram and Forward-Engineered Class Diagram using Machine Learning [0.0]
In the software industry, it is important to know which type a given class diagram is.
Knowing which diagram type was used in a particular project is an important factor.
We propose to solve this problem by using a supervised Machine Learning technique.
arXiv Detail & Related papers (2020-11-14T14:56:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.