Structure Editor for Building Software Models
- URL: http://arxiv.org/abs/2406.09524v1
- Date: Thu, 13 Jun 2024 18:21:02 GMT
- Title: Structure Editor for Building Software Models
- Authors: Mohammad Nurullah Patwary, Ana Jovanovic, Allison Sullivan
- Abstract summary: A recent study of over 93,000 new user models reveals that users have trouble from the very start: nearly a third of the models novices write fail to compile.
We believe the issue is that Alloy's grammar and type information are passively relayed to the user, even though this information outlines a narrow path for how to compose valid formulas.
In this paper, we outline a proof-of-concept structure editor for Alloy in which users build their models from block-based inputs rather than free typing, which by design prevents compilation errors.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Alloy is a well-known declarative modeling language. A key strength of Alloy is its scenario-finding toolset, the Analyzer, which allows users to explore all valid scenarios that adhere to the model's constraints up to a user-provided scope. Despite the Analyzer, Alloy is still difficult for novice users to learn and use. A recent empirical study of over 93,000 new user models reveals that users have trouble from the very start: nearly a third of the models novices write fail to compile. We believe the issue is that Alloy's grammar and type information are passively relayed to the user, even though this information outlines a narrow path for how to compose valid formulas. In this paper, we outline a proof-of-concept structure editor for Alloy in which users build their models from block-based inputs rather than free typing, which by design prevents compilation errors.
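The core idea of the paper, preventing compilation errors by construction, can be illustrated with a minimal sketch. The snippet below is a hypothetical miniature, not the authors' implementation: Alloy-like formulas are assembled from typed blocks, so each slot only accepts a block of the right kind and the result always corresponds to valid syntax.

```python
from dataclasses import dataclass

# Hypothetical sketch of the block-based idea: formulas are assembled from
# typed AST nodes, so an ill-typed combination cannot be constructed at all.

@dataclass(frozen=True)
class SigExpr:
    """A relational expression block, e.g. a signature name."""
    name: str

@dataclass(frozen=True)
class Formula:
    """A boolean formula block."""
    text: str

def some(expr: SigExpr) -> Formula:
    """'some e' block: its slot admits only a relational expression."""
    return Formula(f"some {expr.name}")

def conj(left: Formula, right: Formula) -> Formula:
    """'and' block: its two slots admit only formulas."""
    return Formula(f"({left.text}) and ({right.text})")

# Building a model fragment block by block; the type checker (standing in
# for the editor's UI) rejects any arrangement that would not compile.
f = conj(some(SigExpr("File")), some(SigExpr("Dir")))
print(f.text)  # → (some File) and (some Dir)
```

In an actual structure editor the same constraint is enforced by the interface itself: the palette only offers blocks that fit the currently selected hole, so free-typed syntax errors never arise.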
Related papers
- Detecting Edited Knowledge in Language Models [5.260519479124422]
Knowledge editing methods (KEs) can update language models' obsolete or inaccurate knowledge learned from pre-training.
Knowing whether a generated output is based on edited knowledge or first-hand knowledge from pre-training can increase users' trust in generative models.
We propose a novel task: detecting edited knowledge in language models.
arXiv Detail & Related papers (2024-05-04T22:02:24Z)
- Robust and Scalable Model Editing for Large Language Models [75.95623066605259]
We propose EREN (Edit models by REading Notes) to improve the scalability and robustness of LLM editing.
Unlike existing techniques, it can integrate knowledge from multiple edits, and correctly respond to syntactically similar but semantically unrelated inputs.
arXiv Detail & Related papers (2024-03-26T06:57:23Z)
- Right or Wrong -- Understanding How Novice Users Write Software Models [0.6445605125467574]
This paper presents an empirical study of over 97,000 models written by novice users trying to learn Alloy.
We investigate how users write both correct and incorrect models in order to produce a comprehensive benchmark for future use.
arXiv Detail & Related papers (2024-02-09T18:56:57Z)
- Clarify: Improving Model Robustness With Natural Language Corrections [59.041682704894555]
The standard way to teach models is by feeding them lots of data.
This approach often teaches models incorrect ideas because they pick up on misleading signals in the data.
We propose Clarify, a novel interface and method for interactively correcting model misconceptions.
arXiv Detail & Related papers (2024-02-06T05:11:38Z)
- RecExplainer: Aligning Large Language Models for Explaining Recommendation Models [50.74181089742969]
Large language models (LLMs) have demonstrated remarkable intelligence in understanding, reasoning, and instruction following.
This paper presents the initial exploration of using LLMs as surrogate models to explain black-box recommender models.
To facilitate an effective alignment, we introduce three methods: behavior alignment, intention alignment, and hybrid alignment.
arXiv Detail & Related papers (2023-11-18T03:05:43Z)
- Crucible: Graphical Test Cases for Alloy Models [0.76146285961466]
This paper introduces Crucible, which allows users to graphically create AUnit test cases.
Crucible provides automated guidance to users to ensure they are creating well-structured, valuable test cases.
arXiv Detail & Related papers (2023-07-13T17:43:12Z)
- Interactive Text Generation [75.23894005664533]
We introduce a new Interactive Text Generation task that allows training generation models interactively without the costs of involving real users.
We train our interactive models using Imitation Learning, and our experiments against competitive non-interactive generation models show that models trained interactively are superior to their non-interactive counterparts.
arXiv Detail & Related papers (2023-03-02T01:57:17Z)
- R-U-SURE? Uncertainty-Aware Code Suggestions By Maximizing Utility Across Random User Intents [14.455036827804541]
Large language models show impressive results at predicting structured text such as code, but also commonly introduce errors and hallucinations in their output.
We propose Randomized Utility-driven Synthesis of Uncertain REgions (R-U-SURE)
R-U-SURE is an approach for building uncertainty-aware suggestions based on a decision-theoretic model of goal-conditioned utility.
arXiv Detail & Related papers (2023-03-01T18:46:40Z)
- Memory-Based Model Editing at Scale [102.28475739907498]
Existing model editors struggle to accurately model an edit's intended scope.
We propose Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC)
SERAC stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed.
arXiv Detail & Related papers (2022-06-13T23:40:34Z)
- Intellige: A User-Facing Model Explainer for Narrative Explanations [0.0]
We propose Intellige, a user-facing model explainer that creates user-digestible interpretations and insights.
Intellige builds an end-to-end pipeline from machine learning platforms to end user platforms.
arXiv Detail & Related papers (2021-05-27T05:11:47Z)
- Comparison of Interactive Knowledge Base Spelling Correction Models for Low-Resource Languages [81.90356787324481]
Spelling normalization for low-resource languages is a challenging task because the patterns are hard to predict.
This work compares a neural model and character language models trained with varying amounts of target language data.
Our usage scenario is interactive correction with nearly zero amounts of training examples, improving models as more data is collected.
arXiv Detail & Related papers (2020-10-20T17:31:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.