Intelligent Exploration for User Interface Modules of Mobile App with Collective Learning
- URL: http://arxiv.org/abs/2007.14767v2
- Date: Mon, 31 Aug 2020 17:28:02 GMT
- Title: Intelligent Exploration for User Interface Modules of Mobile App with Collective Learning
- Authors: Jingbo Zhou, Zhenwei Tang, Min Zhao, Xiang Ge, Fuzhen Zhuang, Meng Zhou, Liming Zou, Chenglei Yang, Hui Xiong
- Abstract summary: FEELER is a framework to explore design solutions of user interface modules with a collective machine learning approach.
We conducted extensive experiments on two real-life datasets to demonstrate its applicability in real-life cases of user interface module design.
- Score: 44.23872832648518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A mobile app interface usually consists of a set of user interface
modules. How to properly design these user interface modules is vital to
achieving user satisfaction for a mobile app. However, there are few methods
to determine design variables for user interface modules other than relying
on the judgment of designers. Usually, a laborious post-processing step is
necessary to verify the key change of each design variable. Therefore, only a
very limited number of design solutions can be tested. It is time-consuming
and almost impossible to figure out the best design solutions as there are
many modules. To this end, we introduce FEELER, a framework to quickly and
intelligently explore design solutions of user interface modules with a
collective machine learning approach. FEELER can help designers
quantitatively measure the preference score of different design solutions,
aiming to help them conveniently and quickly adjust user interface modules.
We conducted extensive experimental evaluations on two real-life datasets to
demonstrate its applicability to real-life cases of user interface module
design in the Baidu App, one of the most popular mobile apps in China.
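FEELER's model is not specified in this summary, but the quoted idea of quantitatively scoring candidate design solutions with a learned preference model can be made concrete with a small sketch. Everything below (the three design variables, the synthetic ratings, and the gradient-boosted regressor) is an illustrative assumption, not the authors' actual pipeline.

```python
# Illustrative sketch only: assume a design solution is a vector of design
# variables (e.g., font size, margin, contrast) and a regressor learned from
# user ratings predicts a preference score, letting us rank candidates cheaply.
import itertools
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: design-variable vectors with user ratings.
X_rated = rng.uniform(0, 1, size=(200, 3))    # [font_size, margin, contrast]
y_rated = 1.0 - np.abs(X_rated - 0.6).sum(1)  # stand-in for real user ratings

model = GradientBoostingRegressor().fit(X_rated, y_rated)

# Enumerate a coarse grid of candidate solutions and rank by predicted score.
grid = np.array(list(itertools.product(np.linspace(0, 1, 5), repeat=3)))
scores = model.predict(grid)
best = grid[np.argsort(scores)[::-1][:5]]
print("top-5 candidate design solutions:\n", best)
```

In the paper's setting, the ratings would come from collective user feedback rather than a synthetic function, and the ranked candidates would go back to designers for adjustment.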
Related papers
- Leveraging Multimodal LLM for Inspirational User Interface Search [12.470067381902972]
Existing AI-based UI search methods often miss crucial semantics like target users or the mood of apps.
We used a multimodal large language model (MLLM) to extract and interpret semantics from mobile UI images.
Our approach significantly outperforms existing UI retrieval methods, offering UI designers a more enriched and contextually relevant search experience.
arXiv Detail & Related papers (2025-01-29T17:38:39Z)
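As a rough illustration of the retrieval idea above: extract structured semantics from a screenshot with an MLLM, then match queries against the extracted fields. `query_mllm`, the prompt, and the JSON schema are hypothetical placeholders, not the paper's actual interface.

```python
import json

PROMPT = ("Describe this mobile UI screenshot as JSON with keys: "
          "'target_users', 'mood', 'domain', 'visual_style'.")

def query_mllm(image_path: str, prompt: str) -> str:
    """Placeholder for a real multimodal-LLM call; returns a canned answer
    here so the sketch runs without a model."""
    return json.dumps({"target_users": "commuters", "mood": "calm",
                       "domain": "meditation", "visual_style": "minimal pastel"})

def extract_semantics(image_path: str) -> dict:
    return json.loads(query_mllm(image_path, PROMPT))

def matches(query: str, semantics: dict) -> bool:
    # Naive keyword match over the extracted fields; a real system would
    # embed both sides and rank by similarity instead.
    text = " ".join(str(v) for v in semantics.values()).lower()
    return all(tok in text for tok in query.lower().split())

print(matches("calm meditation", extract_semantics("screenshot.png")))  # True
```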
- Sketch2Code: Evaluating Vision-Language Models for Interactive Web Design Prototyping [55.98643055756135]
We introduce Sketch2Code, a benchmark that evaluates state-of-the-art Vision Language Models (VLMs) on automating the conversion of rudimentary sketches into webpage prototypes.
We analyze ten commercial and open-source models, showing that Sketch2Code is challenging for existing VLMs.
A user study with UI/UX experts reveals a significant preference for proactive question-asking over passive feedback reception.
arXiv Detail & Related papers (2024-10-21T17:39:49Z)
- Large Language User Interfaces: Voice Interactive User Interfaces powered by LLMs [5.06113628525842]
We present a framework that can serve as an intermediary between a user and their user interface (UI).
We employ a system that stands upon textual semantic mappings of UI components, in the form of annotations.
Our engine can classify the most appropriate application, extract relevant parameters, and subsequently execute precise predictions of the user's expected actions.
arXiv Detail & Related papers (2024-02-07T21:08:49Z)
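A minimal sketch of the annotation-based mapping described above, assuming each UI component carries a textual annotation. Plain token overlap stands in here for the LLM-based matching and parameter extraction the paper actually performs; the component names and annotations are invented for illustration.

```python
# Hypothetical registry: each UI component is annotated with a textual
# description of what it does.
ANNOTATED_COMPONENTS = {
    "search_bar": "search for products by name or category",
    "cart_button": "open the shopping cart and review items",
    "volume_slider": "adjust the playback volume level",
}

def route_utterance(utterance: str) -> str:
    """Route a spoken/typed request to the component whose annotation
    shares the most tokens with it (naive stand-in for an LLM classifier)."""
    tokens = set(utterance.lower().split())
    def overlap(item):
        return len(tokens & set(item[1].split()))
    name, _ = max(ANNOTATED_COMPONENTS.items(), key=overlap)
    return name

print(route_utterance("turn the volume up a bit"))  # -> "volume_slider"
```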
- Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
We show that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid the adversarial examples that direct objective optimization tends to exploit.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
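The inverse-design idea above can be illustrated with a toy gradient descent over a learned energy function. The quadratic `energy_net` below is a stand-in for a trained diffusion-model energy, and the four design variables are arbitrary; this is a sketch of the optimization pattern, not the paper's implementation.

```python
import torch

def energy_net(x: torch.Tensor) -> torch.Tensor:
    # Stand-in for a learned energy; low energy near the "good" design x* = 1.
    return ((x - 1.0) ** 2).sum()

x = torch.zeros(4, requires_grad=True)  # design variables (e.g., shape params)
opt = torch.optim.Adam([x], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    energy_net(x).backward()  # descend the learned energy, not the raw objective
    opt.step()

print(x.detach())  # converges toward the low-energy design
```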
- Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation [17.279875204729553]
Zero-Shot Object Navigation (ZSON) enables agents to navigate towards open-vocabulary objects in unknown environments.
We introduce ZIPON, where robots need to navigate to personalized goal objects while engaging in conversations with users.
We propose Open-woRld Interactive persOnalized Navigation (ORION) to make sequential decisions to manipulate different modules for perception, navigation and communication.
arXiv Detail & Related papers (2023-10-12T01:17:56Z)
- Rules Of Engagement: Levelling Up To Combat Unethical CUI Design [23.01296770233131]
We propose a simplified methodology to assess interfaces based on five dimensions taken from prior research on so-called dark patterns.
Our approach offers a numeric score to its users representing the manipulative nature of evaluated interfaces.
arXiv Detail & Related papers (2022-07-19T14:02:24Z)
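To make the scoring idea above concrete, here is a hedged sketch that averages five per-dimension ratings into a single 0-1 score. The dimension names (borrowed from common dark-pattern taxonomies) and the equal weighting are assumptions, not the paper's exact rubric.

```python
DIMENSIONS = ["obstruction", "sneaking", "interface_interference",
              "forced_action", "nagging"]  # assumed names, not the paper's

def manipulativeness_score(ratings: dict) -> float:
    """Average per-dimension ratings (0 = absent ... 4 = severe) into a 0-1 score."""
    return sum(ratings[d] for d in DIMENSIONS) / (4 * len(DIMENSIONS))

print(manipulativeness_score({"obstruction": 3, "sneaking": 1,
                              "interface_interference": 2,
                              "forced_action": 0, "nagging": 4}))  # 0.5
```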
- X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback [83.95599156217945]
We focus on assistive typing applications in which a user cannot operate a keyboard, but can supply other inputs.
Standard methods train a model on a fixed dataset of user inputs, then deploy a static interface that does not learn from its mistakes.
We investigate a simple idea that would enable such interfaces to improve over time, with minimal additional effort from the user.
arXiv Detail & Related papers (2022-03-04T00:07:20Z)
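The online-learning loop X2T describes can be sketched as follows, with a linear scorer and simulated accept/reject feedback standing in for the paper's actual model and interface signals.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(8)  # weights over input features (e.g., gaze or gesture features)
lr = 0.1

def predict(x: np.ndarray) -> int:
    return int(w @ x > 0)  # 1 = type character A, 0 = character B

for _ in range(1000):
    x = rng.normal(size=8)
    y_hat = predict(x)
    accepted = y_hat == int(x[0] > 0)  # simulated user accept/reject feedback
    # Perceptron-style update only when the user rejects the output, so the
    # deployed interface keeps improving instead of staying static.
    if not accepted:
        target = 1 - y_hat
        w += lr * (2 * target - 1) * x

print("learned weights:", np.round(w, 2))
```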
- Automatic code generation from sketches of mobile applications in end-user development using Deep Learning [1.714936492787201]
A common need for mobile application development is to transform a sketch of a user interface into a wireframe code-frame using App Inventor.
Sketch2aia employs deep learning to detect the most frequent user interface components and their position on a hand-drawn sketch.
arXiv Detail & Related papers (2021-03-09T20:32:20Z)
- VINS: Visual Search for Mobile User Interface Design [66.28088601689069]
This paper introduces VINS, a visual search framework that takes as input a UI image and retrieves visually similar design examples.
The framework achieves a mean Average Precision of 76.39% for the UI detection and high performance in querying similar UI designs.
arXiv Detail & Related papers (2021-02-10T01:46:33Z)
- Personalized Adaptive Meta Learning for Cold-start User Preference Prediction [46.65783845757707]
A common challenge in personalized user preference prediction is the cold-start problem.
We propose a novel personalized adaptive meta learning approach to consider both the major and the minor users.
Our method outperforms the state-of-the-art methods dramatically for both the minor and major users.
arXiv Detail & Related papers (2020-12-22T05:48:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.