AppBuddy: Learning to Accomplish Tasks in Mobile Apps via Reinforcement
Learning
- URL: http://arxiv.org/abs/2106.00133v1
- Date: Mon, 31 May 2021 23:02:38 GMT
- Title: AppBuddy: Learning to Accomplish Tasks in Mobile Apps via Reinforcement
Learning
- Authors: Maayan Shvo, Zhiming Hu, Rodrigo Toro Icarte, Iqbal Mohomed, Allan
Jepson, Sheila A. McIlraith
- Abstract summary: We introduce an RL-based framework for learning to accomplish tasks in mobile apps.
RL agents are provided with states derived from the underlying representation of on-screen elements.
We develop a platform which addresses several engineering challenges to enable an effective RL training environment.
- Score: 19.990946219992992
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human beings, even small children, quickly become adept at figuring out how
to use applications on their mobile devices. Learning to use a new app is often
achieved via trial-and-error, accelerated by transfer of knowledge from past
experiences with like apps. The prospect of building a smarter smartphone - one
that can learn how to achieve tasks using mobile apps - is tantalizing. In this
paper we explore the use of Reinforcement Learning (RL) with the goal of
advancing this aspiration. We introduce an RL-based framework for learning to
accomplish tasks in mobile apps. RL agents are provided with states derived
from the underlying representation of on-screen elements, and rewards that are
based on progress made in the task. Agents can interact with screen elements by
tapping or typing. Our experimental results, over a number of mobile apps, show
that RL agents can learn to accomplish multi-step tasks, as well as achieve
modest generalization across different apps. More generally, we develop a
platform which addresses several engineering challenges to enable an effective
RL training environment. Our AppBuddy platform is compatible with OpenAI Gym
and includes a suite of mobile apps and benchmark tasks that supports a
diversity of RL research in the mobile app setting.
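The interface described in the abstract (states derived from on-screen elements, tap/type actions, progress-based rewards, OpenAI Gym compatibility) can be illustrated with a minimal sketch. The environment below is an assumption-laden toy, not the actual AppBuddy API: the class name, the fixed-size element encoding, and the tap-only action space are all illustrative.
```python
# Hypothetical sketch of a Gym-compatible mobile-app task environment in the
# spirit of the abstract: observations come from on-screen UI elements,
# actions tap those elements, and rewards track progress through a
# multi-step task. All names here are illustrative, not the AppBuddy API.
import gym
from gym import spaces
import numpy as np


class MobileAppTaskEnv(gym.Env):
    """Toy task: tap a fixed sequence of UI elements in order."""

    MAX_ELEMENTS = 16          # fixed-size slate of on-screen elements
    FEATURES_PER_ELEMENT = 8   # e.g. element type, position, clickable flag

    def __init__(self, target_sequence=(3, 7, 1)):
        super().__init__()
        self.target_sequence = list(target_sequence)
        # One discrete action per element: "tap element i".
        # (A richer action space could add typing into text fields.)
        self.action_space = spaces.Discrete(self.MAX_ELEMENTS)
        # State: a flat feature vector derived from the on-screen elements.
        self.observation_space = spaces.Box(
            low=0.0, high=1.0,
            shape=(self.MAX_ELEMENTS * self.FEATURES_PER_ELEMENT,),
            dtype=np.float32,
        )
        self._progress = 0

    def _observe(self):
        # A real platform would extract this from the app's UI tree;
        # random features stand in for that here.
        return np.random.rand(
            self.MAX_ELEMENTS * self.FEATURES_PER_ELEMENT
        ).astype(np.float32)

    def reset(self):
        self._progress = 0
        return self._observe()

    def step(self, action):
        # Reward is based on progress through the multi-step task:
        # +1 for tapping the next required element, 0 otherwise.
        if action == self.target_sequence[self._progress]:
            self._progress += 1
            reward = 1.0
        else:
            reward = 0.0
        done = self._progress == len(self.target_sequence)
        return self._observe(), reward, done, {"progress": self._progress}


if __name__ == "__main__":
    env = MobileAppTaskEnv()
    obs = env.reset()
    done = False
    while not done:
        obs, reward, done, info = env.step(env.action_space.sample())
    print("task completed, progress:", info["progress"])
```
A real platform would populate the observation from the device's UI tree (for example, via accessibility services) and reward intermediate milestones of the target task rather than a hard-coded tap sequence.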
Related papers
- Foundations and Recent Trends in Multimodal Mobile Agents: A Survey [57.677161006710065]
Mobile agents are essential for automating tasks in complex and dynamic mobile environments.
Recent advancements enhance real-time adaptability and multimodal interaction.
We categorize these advancements into two main approaches: prompt-based methods and training-based methods.
arXiv Detail & Related papers (2024-11-04T11:50:58Z)
- MobileAgentBench: An Efficient and User-Friendly Benchmark for Mobile LLM Agents [7.4568642040547894]
Large language model (LLM)-based mobile agents are increasingly popular due to their capability to interact directly with mobile phone Graphical User Interfaces (GUIs).
Despite their promising prospects in both academic and industrial sectors, little research has focused on benchmarking the performance of existing mobile agents.
We propose an efficient and user-friendly benchmark, MobileAgentBench, designed to alleviate the burden of extensive manual testing.
arXiv Detail & Related papers (2024-06-12T13:14:50Z)
- Benchmarking Mobile Device Control Agents across Diverse Configurations [19.01954948183538]
B-MoCA is a benchmark for evaluating and developing mobile device control agents.
We benchmark diverse agents, including agents employing large language models (LLMs) or multi-modal LLMs.
While these agents demonstrate proficiency in executing straightforward tasks, their poor performance on complex tasks highlights significant opportunities for future research to improve effectiveness.
arXiv Detail & Related papers (2024-04-25T14:56:32Z)
- AppAgent: Multimodal Agents as Smartphone Users [23.318925173980446]
Our framework enables the agent to operate smartphone applications through a simplified action space.
The agent learns to navigate and use new apps either through autonomous exploration or by observing human demonstrations.
To demonstrate the practicality of our agent, we conducted extensive testing over 50 tasks in 10 different applications.
arXiv Detail & Related papers (2023-12-21T11:52:45Z)
- A systematic literature review on the development and use of mobile learning (web) apps by early adopters [0.0]
More and more teachers are developing their own apps to address issues not covered by existing m-learning apps.
Our results show that apps have been used both outside the classroom, to support autonomous learning or field trips, and in the classroom, mainly for collaborative activities.
arXiv Detail & Related papers (2022-12-27T13:19:58Z)
- Authoring Platform for Mobile Citizen Science Apps with Client-side ML [0.0]
A significant portion of citizen science projects depends on visual data, requiring photos or videos of different subjects.
In this article, we introduce an authoring platform for easily creating mobile apps for citizen science projects.
The apps created with our platform can help participants recognize the correct data and increase the efficiency of the data collection process.
arXiv Detail & Related papers (2022-12-11T05:10:23Z)
- Zero Experience Required: Plug & Play Modular Transfer Learning for Semantic Visual Navigation [97.17517060585875]
We present a unified approach to visual navigation using a novel modular transfer learning model.
Our model can effectively leverage its experience from one source task and apply it to multiple target tasks.
Our approach learns faster, generalizes better, and outperforms SoTA models by a significant margin.
arXiv Detail & Related papers (2022-02-05T00:07:21Z)
- PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training [94.87393610927812]
We present an off-policy, interactive reinforcement learning algorithm that capitalizes on the strengths of both feedback and off-policy learning.
We demonstrate that our approach is capable of learning tasks of higher complexity than previously considered by human-in-the-loop methods.
arXiv Detail & Related papers (2021-06-09T14:10:50Z)
- MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale [103.7609761511652]
We show how a large-scale collective robotic learning system can acquire a repertoire of behaviors simultaneously.
New tasks can be continuously instantiated from previously learned tasks.
We train and evaluate our system on a set of 12 real-world tasks with data collected from 7 robots.
arXiv Detail & Related papers (2021-04-16T16:38:02Z)
- PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning [102.36450942613091]
We propose an inverse reinforcement learning algorithm called inverse temporal difference learning (ITD).
We show how to seamlessly integrate ITD with learning from online environment interactions, arriving at a novel algorithm for reinforcement learning with demonstrations, called ΨΦ-learning.
arXiv Detail & Related papers (2021-02-24T21:12:09Z)
- SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for Autonomous Driving [96.50297622371457]
Multi-agent interaction is a fundamental aspect of autonomous driving in the real world.
Despite more than a decade of research and development, the problem of how to interact with diverse road users in diverse scenarios remains largely unsolved.
We develop a dedicated simulation platform called SMARTS that generates diverse and competent driving interactions.
arXiv Detail & Related papers (2020-10-19T18:26:10Z)