WebLINX: Real-World Website Navigation with Multi-Turn Dialogue
- URL: http://arxiv.org/abs/2402.05930v1
- Date: Thu, 8 Feb 2024 18:58:02 GMT
- Title: WebLINX: Real-World Website Navigation with Multi-Turn Dialogue
- Authors: Xing Han Lù, Zdeněk Kasner, Siva Reddy
- Abstract summary: We introduce WEBLINX - a benchmark of 100K interactions across 2300 expert demonstrations of conversational web navigation.
Our benchmark covers a broad range of patterns on over 150 real-world websites and can be used to train and evaluate agents in diverse scenarios.
We use the selected elements, along with screenshots and action history, to assess a variety of models for their ability to replicate human behavior when navigating the web.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose the problem of conversational web navigation, where a digital
agent controls a web browser and follows user instructions to solve real-world
tasks in a multi-turn dialogue fashion. To support this problem, we introduce
WEBLINX - a large-scale benchmark of 100K interactions across 2300 expert
demonstrations of conversational web navigation. Our benchmark covers a broad
range of patterns on over 150 real-world websites and can be used to train and
evaluate agents in diverse scenarios. Due to the magnitude of information
present, Large Language Models (LLMs) cannot process entire web pages in
real-time. To solve this bottleneck, we design a retrieval-inspired model that
efficiently prunes HTML pages by ranking relevant elements. We use the selected
elements, along with screenshots and action history, to assess a variety of
models for their ability to replicate human behavior when navigating the web.
Our experiments span from small text-only to proprietary multimodal LLMs. We
find that smaller finetuned decoders surpass not only the best zero-shot LLMs
(including GPT-4V) but also larger finetuned multimodal models that were
explicitly pretrained on screenshots. However, all finetuned models struggle to generalize
to unseen websites. Our findings highlight the need for large multimodal models
that can generalize to novel settings. Our code, data and models are available
for research: https://mcgill-nlp.github.io/weblinx
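The retrieval-inspired pruning described in the abstract amounts to ranking candidate DOM elements against the current instruction and keeping only the highest-scoring ones. The sketch below is a minimal illustration of that idea, not the WEBLINX implementation: it assumes BeautifulSoup and scikit-learn are available and substitutes TF-IDF cosine similarity for the paper's trained ranking model.

```python
# Illustrative sketch of retrieval-style pruning: rank candidate DOM elements
# by textual similarity to the user's instruction and keep only the top-k.
# The WEBLINX ranker is a trained model; TF-IDF cosine similarity is swapped
# in here only to keep the example self-contained and runnable.
from bs4 import BeautifulSoup
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def prune_html(html: str, instruction: str, top_k: int = 10) -> list[str]:
    """Return the top_k element snippets most relevant to the instruction."""
    soup = BeautifulSoup(html, "html.parser")

    # Drop nodes that carry no actionable content.
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()

    # Candidate elements: anything the user could plausibly interact with.
    candidates = soup.find_all(
        ["a", "button", "input", "select", "textarea", "label"]
    )
    if not candidates:
        return []

    # Describe each candidate by its tag, salient attributes, and visible text.
    descriptions = [
        " ".join(
            filter(None, [
                el.name,
                el.get("id", ""),
                el.get("aria-label", ""),
                el.get("placeholder", ""),
                el.get_text(" ", strip=True),
            ])
        )
        for el in candidates
    ]

    # Score every candidate against the instruction and keep the best ones.
    vectorizer = TfidfVectorizer().fit(descriptions + [instruction])
    scores = cosine_similarity(
        vectorizer.transform([instruction]), vectorizer.transform(descriptions)
    )[0]
    ranked = sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)
    return [str(el) for _, el in ranked[:top_k]]


if __name__ == "__main__":
    page = (
        '<html><body><a href="/login">Sign in</a>'
        "<button>Search flights</button></body></html>"
    )
    print(prune_html(page, "search for flights", top_k=1))
```

In the benchmark's setup, the retained element snippets would then be combined with screenshots and the action history to form the model input, as the abstract describes.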
Related papers
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs [112.89665642941814]
Multimodal large language models (MLLMs) have shown impressive success across modalities such as image, video, and audio.
Current MLLMs are surprisingly poor at understanding webpage screenshots and generating their corresponding HTML code.
We propose a benchmark consisting of a new large-scale webpage-to-code dataset for instruction tuning.
arXiv Detail & Related papers (2024-06-28T17:59:46Z)
- MMInA: Benchmarking Multihop Multimodal Internet Agents [36.173995299002]
We present MMInA, a multihop and multimodal benchmark to evaluate the embodied agents for compositional Internet tasks.
Our data includes 1,050 human-written tasks covering various domains such as shopping and travel.
Our method significantly improved both the single-hop and multihop web browsing abilities of agents.
arXiv Detail & Related papers (2024-04-15T17:59:50Z)
- VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding? [115.60866817774641]
Multimodal Large Language Models (MLLMs) have shown promise in web-related tasks, but evaluating their performance in the web domain remains a challenge due to the lack of comprehensive benchmarks.
VisualWebBench is a multimodal benchmark designed to assess the capabilities of MLLMs across a variety of web tasks.
arXiv Detail & Related papers (2024-04-09T02:29:39Z)
- AutoWebGLM: A Large Language Model-based Web Navigating Agent [33.55199326570078]
We develop AutoWebGLM, an open web navigation agent built on ChatGLM3-6B.
Inspired by human browsing patterns, we first design an HTML simplification algorithm to represent webpages (see the sketch after this list).
We then employ a hybrid human-AI method to build web browsing data for curriculum training.
arXiv Detail & Related papers (2024-04-04T17:58:40Z)
- Tur[k]ingBench: A Challenge Benchmark for Web Agents [41.852615330725655]
TurkingBench is a benchmark consisting of tasks presented as web pages with textual instructions and multi-modal contexts.
This benchmark includes 32.2K instances spread across 158 tasks.
We assess the performance of cutting-edge private and open-source models, including language-only and vision-language models.
arXiv Detail & Related papers (2024-03-18T16:06:30Z)
- Design2Code: Benchmarking Multimodal Code Generation for Automated Front-End Engineering [74.99736967448423]
We construct Design2Code - the first real-world benchmark for this task.
We manually curate 484 diverse real-world webpages as test cases and develop a set of automatic evaluation metrics.
Our fine-grained break-down metrics indicate that models mostly lag in recalling visual elements from the input webpages and generating correct layout designs.
arXiv Detail & Related papers (2024-03-05T17:56:27Z)
- VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks [93.85005277463802]
VisualWebArena is a benchmark designed to assess the performance of multimodal web agents on realistic tasks.
To perform on this benchmark, agents need to accurately process image-text inputs, interpret natural language instructions, and execute actions on websites to accomplish user-defined objectives.
arXiv Detail & Related papers (2024-01-24T18:35:21Z)
- Mind2Web: Towards a Generalist Agent for the Web [25.363429937913065]
Mind2Web is the first dataset for developing and evaluating generalist agents for the web.
With over 2,000 open-ended tasks collected from 137 websites spanning 31 domains, Mind2Web provides three necessary ingredients for building generalist web agents.
Based on Mind2Web, we conduct an initial exploration of using large language models (LLMs) for building generalist web agents.
arXiv Detail & Related papers (2023-06-09T17:44:31Z)
- Multimodal Web Navigation with Instruction-Finetuned Foundation Models [99.14209521903854]
We study data-driven offline training for web agents with vision-language foundation models.
We propose an instruction-following multimodal agent, WebGUM, that observes both webpage screenshots and HTML pages.
We empirically demonstrate this recipe improves the agent's ability of grounded multimodal perception, HTML comprehension, and multi-step reasoning.
arXiv Detail & Related papers (2023-05-19T17:44:34Z)
- PaLM-E: An Embodied Multimodal Language Model [101.29116156731762]
We propose embodied language models to incorporate real-world continuous sensor modalities into language models.
We train these encodings end-to-end, in conjunction with a pre-trained large language model, for multiple embodied tasks.
Our largest model, PaLM-E-562B with 562B parameters, is a visual-language generalist with state-of-the-art performance on OK-VQA.
arXiv Detail & Related papers (2023-03-06T18:58:06Z)
- Understanding HTML with Large Language Models [73.92747433749271]
Large language models (LLMs) have shown exceptional performance on a variety of natural language tasks.
We contribute HTML understanding models (fine-tuned LLMs) and an in-depth analysis of their capabilities under three tasks.
We show that LLMs pretrained on standard natural language corpora transfer remarkably well to HTML understanding tasks.
arXiv Detail & Related papers (2022-10-08T07:27:17Z)
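The AutoWebGLM entry above mentions an HTML simplification algorithm for representing webpages compactly. The sketch below illustrates what such a simplification step can look like in general; it is not the algorithm from that paper, and the choice of tags and attributes to keep is an assumption made for this example (BeautifulSoup is assumed to be installed).

```python
# Illustrative sketch of HTML simplification: strip non-content nodes and
# reduce the page to one compact, indexed line per interactive element so the
# observation fits in a language model's context window. The tag and attribute
# choices are assumptions for this example, not AutoWebGLM's actual algorithm.
from bs4 import BeautifulSoup, Comment

INTERACTIVE_TAGS = ["a", "button", "input", "select", "textarea"]


def simplify_html(html: str, max_text_len: int = 40) -> str:
    soup = BeautifulSoup(html, "html.parser")

    # Remove nodes that never matter for navigation decisions.
    for tag in soup(["script", "style", "noscript", "svg", "iframe"]):
        tag.decompose()
    for comment in soup.find_all(string=lambda s: isinstance(s, Comment)):
        comment.extract()

    # Emit one line per interactive element: index, tag, key attributes, text.
    lines = []
    for i, el in enumerate(soup.find_all(INTERACTIVE_TAGS)):
        text = el.get_text(" ", strip=True)[:max_text_len]
        attrs = {k: el.get(k) for k in ("id", "name", "type", "href") if el.get(k)}
        attr_str = " ".join(f'{k}="{v}"' for k, v in attrs.items())
        header = " ".join(filter(None, [el.name, attr_str]))
        lines.append(f"[{i}] <{header}> {text}".rstrip())
    return "\n".join(lines)


if __name__ == "__main__":
    page = """
    <html><body>
      <script>track();</script>
      <a href="/cart" id="cart-link">View cart</a>
      <input type="text" name="query">
      <button type="submit">Search</button>
    </body></html>
    """
    print(simplify_html(page))
```

Reducing a page to one indexed line per actionable element keeps the observation small while preserving the targets an agent can act on, which is the general motivation such simplification schemes share.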
This list is automatically generated from the titles and abstracts of the papers on this site.