CogAgent: A Visual Language Model for GUI Agents
- URL: http://arxiv.org/abs/2312.08914v2
- Date: Thu, 21 Dec 2023 09:41:25 GMT
- Title: CogAgent: A Visual Language Model for GUI Agents
- Authors: Wenyi Hong, Weihan Wang, Qingsong Lv, Jiazheng Xu, Wenmeng Yu, Junhui
Ji, Yan Wang, Zihan Wang, Yuxuan Zhang, Juanzi Li, Bin Xu, Yuxiao Dong, Ming
Ding, Jie Tang
- Abstract summary: We introduce CogAgent, a visual language model (VLM) specializing in GUI understanding and navigation.
By utilizing both low-resolution and high-resolution image encoders, CogAgent supports input at a resolution of 1120×1120.
CogAgent achieves the state of the art on five text-rich and four general VQA benchmarks, including VQAv2, OK-VQA, Text-VQA, ST-VQA, ChartQA, infoVQA, DocVQA, MM-Vet, and POPE.
- Score: 61.26491779502794
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: People are spending an enormous amount of time on digital devices through
graphical user interfaces (GUIs), e.g., computer or smartphone screens. Large
language models (LLMs) such as ChatGPT can assist people in tasks like writing
emails, but struggle to understand and interact with GUIs, thus limiting their
potential to increase automation levels. In this paper, we introduce CogAgent,
an 18-billion-parameter visual language model (VLM) specializing in GUI
understanding and navigation. By utilizing both low-resolution and
high-resolution image encoders, CogAgent supports input at a resolution of
1120×1120, enabling it to recognize tiny page elements and text. As a
generalist visual language model, CogAgent achieves the state of the art on
five text-rich and four general VQA benchmarks, including VQAv2, OK-VQA,
Text-VQA, ST-VQA, ChartQA, infoVQA, DocVQA, MM-Vet, and POPE. CogAgent, using
only screenshots as input, outperforms LLM-based methods that consume extracted
HTML text on both PC and Android GUI navigation tasks -- Mind2Web and AITW,
advancing the state of the art. The model and codes are available at
https://github.com/THUDM/CogVLM .
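The dual-encoder design described in the abstract can be pictured with a minimal sketch, given below. It is an illustration only: the module names, patch sizes, and cross-attention wiring are assumptions made for exposition, not the released CogAgent implementation (see the linked repository for the actual code).

```python
# Minimal sketch of a dual-resolution VLM backbone (illustrative only; module
# names, sizes, and wiring are assumptions, not the released CogAgent code).
import torch.nn as nn

class DualResolutionVLM(nn.Module):
    def __init__(self, d_model=1024, n_heads=8):
        super().__init__()
        # Low-resolution branch: coarse global view of a 224x224 resize.
        self.low_res_encoder = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=14, stride=14),  # 224 -> 16x16 patches
            nn.Flatten(2),
        )
        # High-resolution branch: lighter encoder over the full 1120x1120 screenshot,
        # so tiny page elements and text survive tokenization.
        self.high_res_encoder = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=28, stride=28),  # 1120 -> 40x40 patches
            nn.Flatten(2),
        )
        # Cross-attention lets decoder states query the fine-grained high-res features.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, img_low, img_high, text_states):
        low_tokens = self.low_res_encoder(img_low).transpose(1, 2)     # (B, 256, D)
        high_tokens = self.high_res_encoder(img_high).transpose(1, 2)  # (B, 1600, D)
        # Low-res tokens would be prepended to the language sequence (omitted here);
        # high-res tokens are injected via cross-attention instead of concatenation.
        fused, _ = self.cross_attn(query=text_states, key=high_tokens, value=high_tokens)
        return text_states + fused, low_tokens
```

In this sketch the language decoder never concatenates the 1600 high-resolution tokens into its sequence; it only attends to them, which keeps the sequence length manageable while still exposing fine screen detail.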
Related papers
- ShowUI: One Vision-Language-Action Model for GUI Visual Agent [80.50062396585004]
Building Graphical User Interface (GUI) assistants holds significant promise for enhancing human workflow productivity.
We develop a vision-language-action model for the digital world, namely ShowUI, which features several innovations.
ShowUI, a lightweight 2B model using 256K data, achieves a strong 75.1% accuracy in zero-shot screenshot grounding.
arXiv Detail & Related papers (2024-11-26T14:29:47Z)
- ClickAgent: Enhancing UI Location Capabilities of Autonomous Agents [0.0]
ClickAgent is a novel framework for building autonomous agents.
In ClickAgent, the MLLM handles reasoning and action planning, while a separate UI location model identifies the relevant UI elements on the screen.
Our evaluation was conducted on both an Android smartphone emulator and an actual Android smartphone, using the task success rate as the key metric for measuring agent performance.
arXiv Detail & Related papers (2024-10-09T14:49:02Z)
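The division of labor described for ClickAgent above, with an MLLM doing the reasoning and planning and a separate model doing UI localization, can be sketched as two cooperating interfaces. The action schema and method names below are hypothetical, not the framework's actual API.

```python
# Illustrative sketch of the planner/locator split described for ClickAgent
# (interfaces and names are hypothetical, not the paper's API).
from dataclasses import dataclass
from typing import Protocol, Tuple

@dataclass
class Action:
    kind: str            # e.g. "click", "type", "stop"
    target: str = ""     # natural-language description of the UI element
    text: str = ""       # text to type, if any

class Planner(Protocol):
    def next_action(self, instruction: str, screenshot: bytes) -> Action: ...

class Locator(Protocol):
    def locate(self, screenshot: bytes, target: str) -> Tuple[int, int]: ...

def run_step(planner: Planner, locator: Locator,
             instruction: str, screenshot: bytes) -> None:
    """One agent step: the MLLM plans, the UI-location model grounds."""
    action = planner.next_action(instruction, screenshot)
    if action.kind == "click":
        x, y = locator.locate(screenshot, action.target)
        print(f"tap({x}, {y})")          # dispatch to the emulator / device
    elif action.kind == "type":
        print(f"type({action.text!r})")
```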
This paper introduces MobileFlow, a multimodal large language model meticulously crafted for mobile GUI agents.
MobileFlow contains approximately 21 billion parameters and is equipped with novel hybrid visual encoders.
It has the capacity to fully interpret image data and comprehend user instructions for GUI interaction tasks.
arXiv Detail & Related papers (2024-07-05T08:37:10Z)
- Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding [30.624179161014283]
We propose a Tree-of-Lens (ToL) agent, utilizing a novel ToL grounding mechanism, to address the Screen Point-and-Read (ScreenPR) task.
Based on the input point coordinate and the corresponding GUI screenshot, our ToL agent constructs a Hierarchical Layout Tree.
Based on the tree, our ToL agent not only comprehends the content of the indicated area but also articulates the layout and spatial relationships between elements.
arXiv Detail & Related papers (2024-06-27T15:34:16Z)
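A hierarchical layout tree of the kind the ToL agent builds can be illustrated with a small sketch: nested screen regions with bounding boxes, plus a lookup that returns the chain of regions covering the pointed-at coordinate. The field names and traversal below are assumptions, not the paper's implementation.

```python
# Sketch of a hierarchical layout tree with point lookup, in the spirit of the
# Tree-of-Lens idea (node fields and traversal are assumptions, not the paper's code).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LayoutNode:
    label: str                               # e.g. "page", "toolbar", "button: Save"
    bbox: Tuple[int, int, int, int]          # (x0, y0, x1, y1) in screen pixels
    children: List["LayoutNode"] = field(default_factory=list)

    def contains(self, x: int, y: int) -> bool:
        x0, y0, x1, y1 = self.bbox
        return x0 <= x <= x1 and y0 <= y <= y1

def path_to_point(root: LayoutNode, x: int, y: int) -> List[LayoutNode]:
    """Return the chain of nested regions from the whole screen down to the
    smallest region containing the pointed-at coordinate."""
    path = [root]
    node = root
    while True:
        child = next((c for c in node.children if c.contains(x, y)), None)
        if child is None:
            return path
        path.append(child)
        node = child
```

Reading out the labels along the returned path gives the kind of layout context (screen, section, element) that a screen reader can verbalize around the indicated point.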
- GUICourse: From General Vision Language Models to Versatile GUI Agents [75.5150601913659]
We contribute GUICourse, a suite of datasets to train visual-based GUI agents.
First, we introduce the GUIEnv dataset to strengthen the OCR and grounding capabilities of VLMs.
Then, we introduce the GUIAct and GUIChat datasets to enrich their knowledge of GUI components and interactions.
arXiv Detail & Related papers (2024-06-17T08:30:55Z)
- GUI-WORLD: A Dataset for GUI-oriented Multimodal LLM-based Agents [73.9254861755974]
This paper introduces a new dataset, called GUI-World, which features meticulously crafted Human-MLLM annotations.
We evaluate the capabilities of current state-of-the-art MLLMs, including ImageLLMs and VideoLLMs, in understanding various types of GUI content.
arXiv Detail & Related papers (2024-06-16T06:56:53Z)
- CoCo-Agent: A Comprehensive Cognitive MLLM Agent for Smartphone GUI Automation [61.68049335444254]
Multimodal large language models (MLLMs) have shown remarkable potential as human-like autonomous language agents to interact with real-world environments.
We propose a Comprehensive Cognitive LLM Agent, CoCo-Agent, with two novel approaches: comprehensive environment perception (CEP) and conditional action prediction (CAP).
With our technical design, our agent achieves new state-of-the-art performance on AITW and META-GUI benchmarks, showing promising abilities in realistic scenarios.
arXiv Detail & Related papers (2024-02-19T08:29:03Z)
- ScreenAgent: A Vision Language Model-driven Computer Control Agent [17.11085071288194]
We build an environment for a Vision Language Model (VLM) agent to interact with a real computer screen.
Within this environment, the agent can observe screenshots and manipulate the Graphical User Interface (GUI) by outputting mouse and keyboard actions.
We construct the ScreenAgent dataset, which collects screenshots and action sequences when completing a variety of daily computer tasks.
arXiv Detail & Related papers (2024-02-09T02:33:45Z)
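The observe-and-act loop described for ScreenAgent above can be sketched as follows; the function names and action dictionary format are placeholders, not the released environment's API.

```python
# Minimal observe-act loop of the kind described for ScreenAgent (function and
# action names are illustrative assumptions, not the released environment API).
import time

def screenshot() -> bytes:
    """Placeholder: grab the current screen from the controlled machine."""
    return b""

def execute(action: dict) -> None:
    """Placeholder: send a mouse/keyboard event to the controlled machine."""
    print(action)

def agent_policy(task: str, obs: bytes) -> dict:
    """Placeholder for the VLM: maps a task and screenshot to one action."""
    return {"type": "mouse", "event": "click", "x": 640, "y": 360}

def run_episode(task: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        obs = screenshot()                 # observe the GUI as raw pixels
        action = agent_policy(task, obs)   # VLM decides the next mouse/keyboard action
        if action.get("type") == "stop":
            break
        execute(action)
        time.sleep(0.5)                    # let the UI settle before the next observation
```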
- TAG: Boosting Text-VQA via Text-aware Visual Question-answer Generation [55.83319599681002]
Text-VQA aims at answering questions that require understanding the textual cues in an image.
We develop a new method to generate high-quality and diverse QA pairs by explicitly utilizing the existing rich text available in the scene context of each image.
arXiv Detail & Related papers (2022-08-03T02:18:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.