Android in the Wild: A Large-Scale Dataset for Android Device Control
- URL: http://arxiv.org/abs/2307.10088v2
- Date: Fri, 27 Oct 2023 14:24:31 GMT
- Title: Android in the Wild: A Large-Scale Dataset for Android Device Control
- Authors: Christopher Rawles, Alice Li, Daniel Rodriguez, Oriana Riva, Timothy
Lillicrap
- Abstract summary: We present a dataset for device-control research, Android in the Wild (AITW).
The dataset contains human demonstrations of device interactions, including the screens and actions, and corresponding natural language instructions.
It consists of 715k episodes spanning 30k unique instructions, four versions of Android (v10-13), and eight device types (Pixel 2 XL to Pixel 6) with varying screen resolutions.
- Score: 4.973591165982018
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is a growing interest in device-control systems that can interpret
human natural language instructions and execute them on a digital device by
directly controlling its user interface. We present a dataset for
device-control research, Android in the Wild (AITW), which is orders of
magnitude larger than current datasets. The dataset contains human
demonstrations of device interactions, including the screens and actions, and
corresponding natural language instructions. It consists of 715k episodes
spanning 30k unique instructions, four versions of Android (v10-13), and eight
device types (Pixel 2 XL to Pixel 6) with varying screen resolutions. It
contains multi-step tasks that require semantic understanding of language and
visual context. This dataset poses a new challenge: actions available through
the user interface must be inferred from their visual appearance. And, instead
of simple UI element-based actions, the action space consists of precise
gestures (e.g., horizontal scrolls to operate carousel widgets). We organize
our dataset to encourage robustness analysis of device-control systems, i.e.,
how well a system performs in the presence of new task descriptions, new
applications, or new platform versions. We develop two agents and report
performance across the dataset. The dataset is available at
https://github.com/google-research/google-research/tree/master/android_in_the_wild.
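The released episodes are distributed as TFRecord files alongside the repository linked above. As a rough illustration of what a demonstration step looks like (a screenshot plus a dual-point gesture with touch and lift coordinates), the sketch below decodes one step with TensorFlow. The feature keys are assumptions about the release's schema rather than authoritative names, so check the repository's documentation before relying on them.

```python
# Minimal sketch: decoding one AITW demonstration step from a TFRecord shard.
# All feature keys below are assumptions about the release's schema; consult
# the repository's example notebook for the authoritative names.
import tensorflow as tf

ASSUMED_FEATURES = {
    "goal_info": tf.io.FixedLenFeature([], tf.string),           # natural-language instruction (assumed key)
    "episode_id": tf.io.FixedLenFeature([], tf.string),          # groups steps into one episode (assumed key)
    "step_id": tf.io.FixedLenFeature([], tf.int64),              # position within the episode (assumed key)
    "image/encoded": tf.io.FixedLenFeature([], tf.string),       # screenshot bytes (assumed key)
    "results/action_type": tf.io.FixedLenFeature([], tf.int64),  # gesture vs. typing vs. button press (assumed)
    "results/yx_touch": tf.io.FixedLenFeature([2], tf.float32),  # normalized (y, x) where the gesture starts (assumed)
    "results/yx_lift": tf.io.FixedLenFeature([2], tf.float32),   # normalized (y, x) where the finger lifts (assumed)
}

def parse_step(serialized):
    """Parse a single demonstration step: one screenshot plus the action taken on it."""
    step = tf.io.parse_single_example(serialized, ASSUMED_FEATURES)
    step["screenshot"] = tf.io.decode_image(step.pop("image/encoded"), channels=3)
    return step

# Hypothetical shard name; the real shards live in the dataset's cloud storage.
steps = tf.data.TFRecordDataset(["aitw_general_000.tfrecord"]).map(parse_step)

for step in steps.take(1):
    # A tap is a dual-point gesture whose touch and lift points nearly coincide;
    # a horizontal carousel scroll keeps y roughly fixed while x changes.
    print(step["goal_info"].numpy(), step["results/yx_touch"].numpy(), step["results/yx_lift"].numpy())
```

Because every action is expressed in this gesture space, taps and scrolls differ only in how far apart the touch and lift points are, which is what makes the action space harder than clicking named UI elements.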
Related papers
- AutoGUI: Scaling GUI Grounding with Automatic Functionality Annotations from LLMs [54.58905728115257]
We propose the AutoGUI pipeline for automatically annotating UI elements with detailed functionality descriptions at scale.
Specifically, we leverage large language models (LLMs) to infer element functionality by comparing the UI content changes before and after simulated interactions with specific UI elements (see the sketch after this list).
We construct an AutoGUI-704k dataset using the proposed pipeline, featuring multi-resolution, multi-device screenshots, diverse data domains, and detailed functionality annotations not provided by previous datasets.
arXiv Detail & Related papers (2025-02-04T03:39:59Z)
- Falcon-UI: Understanding GUI Before Following User Instructions [57.67308498231232]
We introduce an instruction-free GUI navigation dataset, termed Insight-UI, to enhance model comprehension of GUI environments.
The Insight-UI dataset is automatically generated from the Common Crawl corpus, simulating various platforms.
We develop the GUI agent model Falcon-UI, which is first pretrained on the Insight-UI dataset and then fine-tuned on Android and Web GUI datasets.
arXiv Detail & Related papers (2024-12-12T15:29:36Z)
- AMEX: Android Multi-annotation Expo Dataset for Mobile GUI Agents [50.39555842254652]
We introduce the Android Multi-annotation EXpo (AMEX) to advance research on AI agents in mobile scenarios.
AMEX comprises over 104K high-resolution screenshots from 110 popular mobile applications, which are annotated at multiple levels.
AMEX includes three levels of annotations: GUI interactive element grounding, GUI screen and element functionality descriptions, and complex natural language instructions.
arXiv Detail & Related papers (2024-07-03T17:59:58Z)
- Nymeria: A Massive Collection of Multimodal Egocentric Daily Motion in the Wild [66.34146236875822]
The Nymeria dataset is a large-scale, diverse, richly annotated human motion dataset collected in the wild with multiple multimodal egocentric devices.
It contains 1200 recordings of 300 hours of daily activities from 264 participants across 50 locations, travelling a total of 399Km.
The motion-language descriptions provide 310.5K sentences in 8.64M words from a vocabulary size of 6545.
arXiv Detail & Related papers (2024-06-14T10:23:53Z)
- Training a Vision Language Model as Smartphone Assistant [1.3654846342364308]
We present a visual language model (VLM) that can fulfill diverse tasks on mobile devices.
Our model functions by interacting solely with the user interface (UI).
Unlike previous methods, our model operates not only on a single screen image but on vision-language sentences created from sequences of past screenshots.
arXiv Detail & Related papers (2024-04-12T18:28:44Z)
- A Pairwise Dataset for GUI Conversion and Retrieval between Android Phones and Tablets [24.208087862974033]
The Papt dataset is a pairwise dataset for GUI conversion and retrieval between Android phones and tablets.
The dataset contains 10,035 phone-tablet GUI page pairs from 5,593 phone-tablet app pairs.
arXiv Detail & Related papers (2023-07-25T03:25:56Z)
- Learning Language-Conditioned Robot Behavior from Offline Data and Crowd-Sourced Annotation [80.29069988090912]
We study the problem of learning a range of vision-based manipulation tasks from a large offline dataset of robot interaction.
We propose to leverage offline robot datasets with crowd-sourced natural language labels.
We find that our approach outperforms both goal-image specifications and language conditioned imitation techniques by more than 25%.
arXiv Detail & Related papers (2021-09-02T17:42:13Z)
- ArraMon: A Joint Navigation-Assembly Instruction Interpretation Task in Dynamic Environments [85.81157224163876]
We combine Vision-and-Language Navigation, assembling of collected objects, and object referring expression comprehension, to create a novel joint navigation-and-assembly task, named ArraMon.
During this task, the agent is asked to find and collect different target objects one-by-one by navigating based on natural language instructions in a complex, realistic outdoor environment.
We present results for several baseline models (integrated and biased) and metrics (nDTW, CTC, rPOD, and PTC), and the large model-human performance gap demonstrates that our task is challenging and presents a wide scope for future work.
arXiv Detail & Related papers (2020-11-15T23:30:36Z)
- Intent Detection with WikiHow [28.28719498563396]
Our models are able to predict a broad range of intended goals from many actions because they are trained on wikiHow.
Our models achieve state-of-the-art results on the Snips dataset, the Schema-Guided Dialogue dataset, and all 3 languages of the Facebook multilingual dialog datasets.
arXiv Detail & Related papers (2020-09-12T12:53:47Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
- The IKEA ASM Dataset: Understanding People Assembling Furniture through Actions, Objects and Pose [108.21037046507483]
IKEA ASM is a three million frame, multi-view, furniture assembly video dataset that includes depth, atomic actions, object segmentation, and human pose.
We benchmark prominent methods for video action recognition, object segmentation and human pose estimation tasks on this challenging dataset.
The dataset enables the development of holistic methods, which integrate multi-modal and multi-view data to better perform on these tasks.
arXiv Detail & Related papers (2020-07-01T11:34:46Z)
- Mapping Natural Language Instructions to Mobile UI Action Sequences [17.393816815196974]
We present a new problem: grounding natural language instructions to mobile user interface actions.
We create PIXELHELP, a corpus that pairs English instructions with actions performed by people on a mobile UI emulator.
To scale training, we decouple the language and action data by (a) annotating action phrase spans in HowTo instructions and (b) synthesizing grounded descriptions of actions for mobile user interfaces.
arXiv Detail & Related papers (2020-05-07T21:41:40Z)
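Several entries above infer UI semantics from observed state changes rather than static annotations; the AutoGUI entry, as flagged there, is the clearest example. Below is a minimal, hypothetical sketch of that before/after comparison loop. The `dump_ui_text`, `simulate_tap`, and `llm_describe` helpers are placeholder callables standing in for whatever device automation and LLM backend are actually used; they do not correspond to any specific API.

```python
# Minimal sketch of LLM-based functionality annotation via before/after UI
# comparison (the idea behind the AutoGUI entry above). The three helpers
# passed in are hypothetical placeholders, not a real library API.
from typing import Callable

PROMPT = (
    "A user tapped the element '{element}'.\n"
    "UI text before the tap:\n{before}\n\n"
    "UI text after the tap:\n{after}\n\n"
    "In one sentence, describe what this element does."
)

def annotate_element(
    element_id: str,
    dump_ui_text: Callable[[], str],      # placeholder: reads the current screen's text content
    simulate_tap: Callable[[str], None],  # placeholder: performs the interaction on a device or emulator
    llm_describe: Callable[[str], str],   # placeholder: queries whatever LLM backend is available
) -> str:
    """Infer an element's functionality from the UI change its interaction causes."""
    before = dump_ui_text()
    simulate_tap(element_id)
    after = dump_ui_text()
    if before == after:
        return "No observable effect; the element may be decorative or disabled."
    return llm_describe(PROMPT.format(element=element_id, before=before, after=after))
```

Passing the backends in as callables keeps the annotation logic independent of any particular emulator or model, which matters when the loop is scaled across many apps and platforms.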