Deep Reinforcement Learning for Automated Web GUI Testing
- URL: http://arxiv.org/abs/2504.19237v1
- Date: Sun, 27 Apr 2025 13:42:30 GMT
- Title: Deep Reinforcement Learning for Automated Web GUI Testing
- Authors: Zhiyu Gu, Chenxu Liu, Guoquan Wu, Yifei Zhang, ChenXi Yang, Zheheng Liang, Wei Chen, Jun Wei
- Abstract summary: WebRLED is an effective approach for automated GUI testing of complex web applications. WebRLED achieves higher code/state coverage and failure detection rates than existing state-of-the-art (SOTA) techniques.
- Score: 13.62121897768763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated GUI testing of web applications has long been considered challenging given their large state space and complex interaction logic. Deep Reinforcement Learning (DRL) is a recent extension of Reinforcement Learning (RL) that takes advantage of the powerful learning capabilities of neural networks, making it suitable for complex exploration spaces. In this paper, leveraging the capability of deep reinforcement learning, we propose WebRLED, an effective approach for automated GUI testing of complex web applications. WebRLED has the following characteristics: (1) a grid-based action-value learning technique, which improves the efficiency of state-space exploration; (2) a novel action discriminator, which can be trained during exploration to identify more actions; (3) an adaptive, curiosity-driven reward model, which considers the novelty of an explored state within both the current episode and the global history, and can guide exploration continuously. We conduct a comprehensive evaluation of WebRLED on 12 open-source web applications and a field study of the 50 most popular web applications in the world. The experimental results show that WebRLED achieves higher code/state coverage and failure detection rates than existing state-of-the-art (SOTA) techniques. Furthermore, WebRLED finds 695 unique failures in the 50 real-world applications.
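The adaptive, curiosity-driven reward described in the abstract can be illustrated with a minimal count-based sketch. This is not WebRLED's actual reward model (the paper's formulation is not reproduced here); the class name `CuriosityReward`, the inverse-square-root bonus, and the `episode_weight` mixing parameter are all illustrative assumptions showing how episodic and global novelty could be combined.

```python
from collections import defaultdict

class CuriosityReward:
    """Hypothetical sketch of a curiosity-driven reward: a state earns
    more reward the less often it has been visited, both within the
    current episode and across the whole exploration history."""

    def __init__(self, episode_weight=0.5):
        self.global_counts = defaultdict(int)   # visits over all episodes
        self.episode_counts = defaultdict(int)  # visits in current episode
        self.episode_weight = episode_weight

    def start_episode(self):
        # Episodic novelty resets each episode; global history persists.
        self.episode_counts.clear()

    def reward(self, state_key):
        self.global_counts[state_key] += 1
        self.episode_counts[state_key] += 1
        # Inverse-sqrt count bonus: novel states earn higher reward.
        episodic = 1.0 / self.episode_counts[state_key] ** 0.5
        global_ = 1.0 / self.global_counts[state_key] ** 0.5
        w = self.episode_weight
        return w * episodic + (1.0 - w) * global_
```

Under this scheme, revisiting the same GUI state within an episode yields a diminishing reward, which nudges the agent toward unexplored parts of the application.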
Related papers
- WebThinker: Empowering Large Reasoning Models with Deep Research Capability [60.81964498221952]
WebThinker is a deep research agent that empowers large reasoning models to autonomously search the web, navigate web pages, and draft research reports during the reasoning process.
It also employs an Autonomous Think-Search-and-Draft strategy, allowing the model to seamlessly interleave reasoning, information gathering, and report writing in real time.
Our approach enhances LRM reliability and applicability in complex scenarios, paving the way for more capable and versatile deep research systems.
arXiv Detail & Related papers (2025-04-30T16:25:25Z) - DeepResearcher: Scaling Deep Research via Reinforcement Learning in Real-world Environments [20.498100965239818]
We introduce DeepResearcher, the first comprehensive framework for end-to-end training of LLM-based deep research agents. Unlike RAG-based approaches that assume all necessary information exists within a fixed corpus, our method trains agents to navigate the noisy, unstructured, and dynamic nature of the open web. Extensive experiments on open-domain research tasks demonstrate that DeepResearcher achieves substantial improvements of up to 28.9 points over prompt engineering-based baselines.
arXiv Detail & Related papers (2025-04-04T04:41:28Z) - Comprehensive Overview of Reward Engineering and Shaping in Advancing Reinforcement Learning Applications [0.0]
This paper emphasizes the importance of reward engineering and reward shaping in enhancing the efficiency and effectiveness of reinforcement learning algorithms. Despite significant advancements in reinforcement learning, several limitations persist. One key challenge is the sparse and delayed nature of rewards in many real-world scenarios. The complexity of accurately modeling real-world environments and the computational demands of reinforcement learning algorithms remain substantial obstacles.
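One classical remedy for the sparse-reward problem discussed above is potential-based reward shaping, which adds a dense bonus derived from a heuristic potential function without changing the optimal policy. The sketch below is a generic illustration, not a method from this survey; the toy corridor, the potential `phi`, and the function name `shaped_reward` are assumptions for demonstration.

```python
def shaped_reward(reward, potential, state, next_state, gamma=0.99):
    """Potential-based reward shaping: adding
    F = gamma * phi(s') - phi(s) densifies a sparse reward signal
    while preserving which policy is optimal."""
    return reward + gamma * potential(next_state) - potential(state)

# Toy example: a 1-D corridor with the goal at position 10, where the
# environment reward is sparse (nonzero only at the goal). Using
# negative distance-to-goal as the potential gives the agent dense
# intermediate feedback on every step.
phi = lambda s: -abs(10 - s)

# Stepping from 3 to 4 (toward the goal) earns a positive shaped bonus
# even though the environment reward is still zero.
dense = shaped_reward(0.0, phi, state=3, next_state=4, gamma=1.0)
```

Moving away from the goal yields a symmetric negative bonus, so the shaping term steers exploration without altering the underlying task.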
arXiv Detail & Related papers (2024-07-22T09:28:12Z) - Affordance-Guided Reinforcement Learning via Visual Prompting [51.361977466993345]
Keypoint-based Affordance Guidance for Improvements (KAGI) is a method leveraging rewards shaped by vision-language models (VLMs) for autonomous RL. On real-world manipulation tasks specified by natural language descriptions, KAGI improves the sample efficiency of autonomous RL and enables successful task completion within 30K online fine-tuning steps.
arXiv Detail & Related papers (2024-07-14T21:41:29Z) - Computation-efficient Deep Learning for Computer Vision: A Survey [121.84121397440337]
Deep learning models have reached or even exceeded human-level performance in a range of visual perception tasks.
Deep learning models usually demand significant computational resources, leading to impractical power consumption, latency, or carbon emissions in real-world scenarios.
A new research focus is computationally efficient deep learning, which strives to achieve satisfactory performance while minimizing the computational cost during inference.
arXiv Detail & Related papers (2023-08-27T03:55:28Z) - Lifelong Adaptive Machine Learning for Sensor-based Human Activity Recognition Using Prototypical Networks [0.0]
Continual learning, also known as lifelong learning, is an emerging research topic that has been attracting increasing interest in the field of machine learning.
We build on recent advances in the area of continual machine learning and design a lifelong adaptive learning framework using Prototypical Networks, LAPNet-HAR.
LAPNet-HAR processes sensor-based data streams in a task-free data-incremental fashion and mitigates catastrophic forgetting using experience replay and continual prototype adaptation.
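The continual prototype adaptation mentioned above can be sketched in a minimal form: each class prototype is updated incrementally as new embeddings arrive, so the model adapts without retraining from scratch. This is a generic illustration, not LAPNet-HAR's actual update rule; the function name `update_prototype` and the exponential-moving-average formulation with a `momentum` parameter are assumptions.

```python
import numpy as np

def update_prototype(prototype, embedding, momentum=0.9):
    """Hypothetical continual prototype adaptation: maintain each class
    prototype as an exponential moving average of the embeddings seen
    so far, so incoming stream data refines it incrementally."""
    if prototype is None:                  # first sample of this class
        return embedding.copy()
    return momentum * prototype + (1.0 - momentum) * embedding
```

A higher momentum makes prototypes more stable against noisy samples (helping mitigate forgetting), while a lower momentum lets them track distribution drift more quickly.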
arXiv Detail & Related papers (2022-03-11T00:57:29Z) - Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives [92.0321404272942]
Reinforcement learning can be used to build general-purpose robotic systems.
However, training RL agents to solve robotics tasks still remains challenging.
In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy.
We find that our simple change to the action interface substantially improves both the learning efficiency and task performance.
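The action-interface idea described above can be sketched as a tiny primitive library: the policy outputs a primitive index plus a fixed-size argument vector, and a dispatcher maps that choice to a low-level call. This is a schematic illustration, not the RAPS implementation; the primitive names `reach` and `grasp`, their string outputs, and the `execute` dispatcher are placeholder assumptions.

```python
# Placeholder low-level controllers; a real system would command motors.
def reach(dx, dy, dz):
    return f"reach({dx:.2f}, {dy:.2f}, {dz:.2f})"

def grasp(width):
    return f"grasp({width:.2f})"

# Each entry pairs a primitive with the number of learned arguments it
# consumes from the policy's fixed-size continuous output vector.
PRIMITIVES = [
    (reach, 3),
    (grasp, 1),
]

def execute(primitive_idx, args):
    """Dispatch the policy's discrete choice plus continuous arguments
    to the corresponding primitive, using only the arguments it needs."""
    fn, arity = PRIMITIVES[primitive_idx]
    return fn(*args[:arity])
```

Because every primitive reads from the same fixed-size argument vector, a standard RL policy with one discrete head and one continuous head can drive the whole library.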
arXiv Detail & Related papers (2021-10-28T17:59:30Z) - How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned [111.06812202454364]
We present a number of case studies involving robotic deep RL.
We discuss commonly perceived challenges in deep RL and how they have been addressed in these works.
We also provide an overview of other outstanding challenges, many of which are unique to the real-world robotics setting.
arXiv Detail & Related papers (2021-02-04T22:09:28Z) - AutoOD: Automated Outlier Detection via Curiosity-guided Search and Self-imitation Learning [72.99415402575886]
Outlier detection is an important data mining task with numerous practical applications.
We propose AutoOD, an automated outlier detection framework, which aims to search for an optimal neural network model.
Experimental results on various real-world benchmark datasets demonstrate that the deep model identified by AutoOD achieves the best performance.
arXiv Detail & Related papers (2020-06-19T18:57:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.