Automatic code generation from sketches of mobile applications in
end-user development using Deep Learning
- URL: http://arxiv.org/abs/2103.05704v1
- Date: Tue, 9 Mar 2021 20:32:20 GMT
- Title: Automatic code generation from sketches of mobile applications in
end-user development using Deep Learning
- Authors: Daniel Baulé, Christiane Gresse von Wangenheim, Aldo von Wangenheim,
Jean C. R. Hauck, Edson C. Vargas Júnior
- Abstract summary: A common need for mobile application development is to transform a sketch of a user interface into wireframe code using App Inventor.
Sketch2aia employs deep learning to detect the most frequent user interface components and their position on a hand-drawn sketch.
- Score: 1.714936492787201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A common need for mobile application development by end-users or in computing
education is to transform a sketch of a user interface into wireframe code
using App Inventor, a popular block-based programming environment. As this task
is challenging and time-consuming, we present the Sketch2aia approach that
automates this process. Sketch2aia employs deep learning to detect the most
frequent user interface components and their positions on a hand-drawn sketch,
creating an intermediate representation of the user interface, and then
automatically generates the App Inventor code of the wireframe. The approach
achieves an average user interface component classification accuracy of 87.72%,
and results of a preliminary user evaluation indicate that it generates
wireframes that closely mirror the sketches in terms of visual similarity. The
approach has been implemented as a web tool and can be used to support the
end-user development of mobile applications effectively and efficiently as well
as the teaching of user interface design in K-12.
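The pipeline described in the abstract (detect components and their positions, build an intermediate representation, then emit wireframe code) can be illustrated with a minimal sketch. This is not the authors' implementation: the `DetectedComponent` class and `to_wireframe` function are hypothetical, the detector itself is omitted, and a plain textual layout stands in for actual App Inventor project code.

```python
from dataclasses import dataclass

# Hypothetical intermediate representation of one detected UI component:
# the class label predicted by the detector plus the top-left corner of
# its bounding box on the sketch (detection itself is out of scope here).
@dataclass
class DetectedComponent:
    label: str   # e.g. "Button", "Label", "TextBox"
    x: float     # left edge of the bounding box, in pixels
    y: float     # top edge of the bounding box, in pixels

def to_wireframe(components):
    """Order detected components top-to-bottom, left-to-right and emit a
    simple textual wireframe (a stand-in for generated App Inventor code)."""
    ordered = sorted(components, key=lambda c: (c.y, c.x))
    lines = ["Screen1:"]
    for i, c in enumerate(ordered, start=1):
        lines.append(f"  {c.label}{i} at ({c.x:.0f}, {c.y:.0f})")
    return "\n".join(lines)

if __name__ == "__main__":
    # Detections as a real model might return them, in arbitrary order.
    detections = [
        DetectedComponent("Button", x=40, y=300),
        DetectedComponent("Label", x=40, y=50),
        DetectedComponent("TextBox", x=40, y=150),
    ]
    print(to_wireframe(detections))
```

Sorting by vertical then horizontal position is one simple heuristic for recovering a layout order from bounding boxes; a real system would also need to handle nesting and component-specific properties.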
Related papers
- UX Heuristics and Checklist for Deep Learning powered Mobile
Applications with Image Classification [1.2437226707039446]
This study examines existing mobile applications with image classification and develops an initial set of AIX heuristics for Deep Learning powered mobile applications with image classification, decomposed into a checklist.
To facilitate the use of the checklist, we also developed an online course presenting the concepts and heuristics, as well as a web-based tool to support evaluations with the checklist.
arXiv Detail & Related papers (2023-07-05T20:23:34Z)
- Enhancing Virtual Assistant Intelligence: Precise Area Targeting for
Instance-level User Intents beyond Metadata [18.333599919653444]
We study virtual assistants capable of processing instance-level user intents based on pixels of application screens.
We propose a novel cross-modal deep learning pipeline, which understands the input vocal or textual instance-level user intents.
We conducted a user study with 10 participants to collect a testing dataset with instance-level user intents.
arXiv Detail & Related papers (2023-06-07T05:26:38Z)
- From Pixels to UI Actions: Learning to Follow Instructions via Graphical
User Interfaces [66.85108822706489]
This paper focuses on creating agents that interact with the digital world using the same conceptual interface that humans commonly use.
It is possible for such agents to outperform human crowdworkers on the MiniWob++ benchmark of GUI-based instruction following tasks.
arXiv Detail & Related papers (2023-05-31T23:39:18Z)
- Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on their interaction history on the platform.
Most sequential recommenders, however, lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z)
- Rules Of Engagement: Levelling Up To Combat Unethical CUI Design [23.01296770233131]
We propose a simplified methodology to assess interfaces based on five dimensions taken from prior research on so-called dark patterns.
Our approach offers a numeric score to its users representing the manipulative nature of evaluated interfaces.
arXiv Detail & Related papers (2022-07-19T14:02:24Z)
- First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual
Information Maximization [112.40598205054994]
We formalize this idea as a completely unsupervised objective for optimizing interfaces.
We conduct an observational study on 540K examples of users operating various keyboard and eye gaze interfaces for typing, controlling simulated robots, and playing video games.
The results show that our mutual information scores are predictive of the ground-truth task completion metrics in a variety of domains.
arXiv Detail & Related papers (2022-05-24T21:57:18Z)
- X2T: Training an X-to-Text Typing Interface with Online Learning from
User Feedback [83.95599156217945]
We focus on assistive typing applications in which a user cannot operate a keyboard, but can supply other inputs.
Standard methods train a model on a fixed dataset of user inputs, then deploy a static interface that does not learn from its mistakes.
We investigate a simple idea that would enable such interfaces to improve over time, with minimal additional effort from the user.
arXiv Detail & Related papers (2022-03-04T00:07:20Z)
- ASHA: Assistive Teleoperation via Human-in-the-Loop Reinforcement
Learning [91.58711082348293]
Reinforcement learning from online user feedback on the system's performance presents a natural solution to this problem.
This approach tends to require a large amount of human-in-the-loop training data, especially when feedback is sparse.
We propose a hierarchical solution that learns efficiently from sparse user feedback.
arXiv Detail & Related papers (2022-02-05T02:01:19Z)
- Intelligent Exploration for User Interface Modules of Mobile App with
Collective Learning [44.23872832648518]
FEELER is a framework to explore design solutions of user interface modules with a collective machine learning approach.
We conducted extensive experiments on two real-life datasets to demonstrate its applicability in real-life cases of user interface module design.
arXiv Detail & Related papers (2020-07-21T19:00:54Z)
- Federated Learning of User Authentication Models [69.93965074814292]
We propose Federated User Authentication (FedUA), a framework for privacy-preserving training of machine learning models.
FedUA adopts federated learning framework to enable a group of users to jointly train a model without sharing the raw inputs.
We show our method is privacy-preserving, scales with the number of users, and allows new users to be added to training without changing the output layer.
arXiv Detail & Related papers (2020-07-09T08:04:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.