Practical Non-Intrusive GUI Exploration Testing with Visual-based
Robotic Arms
- URL: http://arxiv.org/abs/2312.10655v1
- Date: Sun, 17 Dec 2023 09:05:39 GMT
- Title: Practical Non-Intrusive GUI Exploration Testing with Visual-based
Robotic Arms
- Authors: Shengcheng Yu, Chunrong Fang, Mingzhe Du, Yuchen Ling, Zhenyu Chen,
Zhendong Su
- Abstract summary: We propose a practical non-intrusive GUI testing framework with visual robotic arms.
RoboTest integrates novel GUI screen and widget detection algorithms that adapt to screens of different sizes.
We evaluate RoboTest with 20 mobile apps, with a case study on an embedded system.
- Score: 14.3266199543725
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: GUI testing plays a significant role in the software engineering
community. Most existing frameworks are intrusive and support only specific
platforms. As application scenarios diversify, many embedded systems and
customized operating systems on different devices cannot run existing intrusive GUI testing frameworks.
Some approaches adopt robotic arms to replace the interface invoking of mobile
apps under test and use computer vision technologies to identify GUI elements.
However, several challenges remain unsolved. First, existing approaches assume that
GUI screens are fixed so that they cannot be adapted to diverse systems with
different screen conditions. Second, existing approaches use XY-plane robotic
arms, which cannot flexibly simulate testing operations. Third, existing
approaches ignore compatibility bugs and only focus on crash bugs. A more
practical approach is required for the non-intrusive scenario. We propose a
practical non-intrusive GUI testing framework with visual robotic arms.
RoboTest integrates novel GUI screen and widget detection algorithms that adapt
to screens of different sizes and then extract GUI widgets from the detected
screens. A set of testing operations is then applied with a 4-DOF robotic arm,
which effectively and flexibly simulates human testing operations.
During app exploration, RoboTest applies the Principle of Proximity-guided
exploration strategy, choosing widgets close to the previous targets to reduce
robotic arm movement overhead and improve exploration efficiency. Beyond crash
bugs, RoboTest can also detect compatibility bugs by comparing the GUIs
produced by the same test operations on different devices. We evaluate
RoboTest with 20 mobile apps, with a case study on an embedded system. The
results show that RoboTest can effectively, efficiently, and generally explore
AUTs to find bugs and reduce exploration time overhead.
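The Principle of Proximity-guided strategy described above can be sketched as a nearest-widget selection: among the unexplored candidates, pick the one whose center lies closest to the previously tapped target, so the robotic arm travels the shortest distance. This is an illustrative sketch only; the widget fields and function names are hypothetical, not RoboTest's actual code.

```python
import math

def widget_center(widget):
    """Center point of a widget's bounding box (x, y, w, h)."""
    x, y, w, h = widget["bbox"]
    return (x + w / 2, y + h / 2)

def pick_next_widget(candidates, previous_target):
    """Choose the candidate widget nearest to the previous target,
    minimizing robotic arm travel between operations."""
    px, py = widget_center(previous_target)
    return min(
        candidates,
        key=lambda w: math.hypot(widget_center(w)[0] - px,
                                 widget_center(w)[1] - py),
    )

previous = {"bbox": (100, 200, 80, 40)}
candidates = [
    {"id": "btn_ok",   "bbox": (110, 260, 80, 40)},   # near the previous target
    {"id": "btn_help", "bbox": (600, 900, 80, 40)},   # far across the screen
]
print(pick_next_widget(candidates, previous)["id"])  # -> btn_ok
```

In practice the cost function could also weight the arm's joint movements rather than plain Euclidean screen distance, but the nearest-neighbor choice captures the overhead-reduction idea.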
Related papers
- Vision-driven Automated Mobile GUI Testing via Multimodal Large Language Model [27.97964877860671]
This paper proposes a vision-driven automated GUI testing approach to detect non-crash functional bugs with Multimodal Large Language Models.
It begins by extracting GUI text information and aligning it with screenshots to form a vision prompt, enabling the MLLM to understand the GUI context.
VisionDroid identifies 29 new bugs on Google Play, of which 19 have been confirmed and fixed.
arXiv Detail & Related papers (2024-07-03T11:58:09Z)
- ROS-LLM: A ROS framework for embodied AI with task feedback and structured reasoning [74.58666091522198]
We present a framework for intuitive robot programming by non-experts.
We leverage natural language prompts and contextual information from the Robot Operating System (ROS).
Our system integrates large language models (LLMs), enabling non-experts to articulate task requirements to the system through a chat interface.
arXiv Detail & Related papers (2024-06-28T08:28:38Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RoboScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- Vision-Based Mobile App GUI Testing: A Survey [29.042723121518765]
Vision-based mobile app GUI testing approaches emerged with the development of computer vision technologies.
We provide a comprehensive investigation of the state-of-the-art techniques on 271 papers, among which 92 are vision-based studies.
arXiv Detail & Related papers (2023-10-20T14:04:04Z)
- NiCro: Purely Vision-based, Non-intrusive Cross-Device and Cross-Platform GUI Testing [19.462053492572142]
We propose NiCro, a non-intrusive cross-device and cross-platform testing system.
NiCro uses the state-of-the-art GUI widget detector to detect widgets from GUI images and then analyses a set of comprehensive information to match the widgets across diverse devices.
At the system level, NiCro can interact with a virtual device farm and a robotic arm system to perform cross-device, cross-platform testing non-intrusively.
arXiv Detail & Related papers (2023-05-24T01:19:05Z)
- Taming Android Fragmentation through Lightweight Crowdsourced Testing [9.752084629147854]
We propose a novel, lightweight, crowdsourced testing approach, LAZYCOW, to tame Android fragmentation through crowdsourced efforts.
Experimental results on thousands of test cases on real-world Android devices show that LAZYCOW is effective in automatically identifying and verifying API-induced compatibility issues.
arXiv Detail & Related papers (2023-04-10T01:37:16Z)
- Effective, Platform-Independent GUI Testing via Image Embedding and Reinforcement Learning [15.458315113767686]
We propose PIRLTest, an effective platform-independent approach for app testing.
It utilizes computer vision and reinforcement learning techniques in a novel, synergistic manner for automated testing.
PIRLTest explores apps with the guidance of a curiosity-driven strategy, which uses a Q-network to estimate the values of specific state-action pairs.
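The Q-value-guided selection summarized above can be illustrated with a minimal epsilon-greedy sketch: the agent mostly picks the action its value estimates favor, with occasional random picks to keep exploring. The action names and the value table are illustrative stand-ins for PIRLTest's learned Q-network, not its actual implementation.

```python
import random

def choose_action(q_values, actions, epsilon=0.1):
    """Epsilon-greedy pick: usually the action with the highest
    estimated value, occasionally a random one for exploration."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_values.get(a, 0.0))

# Hypothetical value estimates for GUI actions in the current state.
q_values = {"tap_login": 0.8, "tap_settings": 0.3, "scroll_down": 0.5}
actions = list(q_values)
print(choose_action(q_values, actions, epsilon=0.0))  # -> tap_login
```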
arXiv Detail & Related papers (2022-08-19T01:51:16Z)
- ReverseORC: Reverse Engineering of Resizable User Interface Layouts with OR-Constraints [47.164878414034234]
ReverseORC is a novel reverse engineering (RE) approach to discover diverse layout types and their dynamic resizing behaviours.
It can create specifications that replicate even some non-standard layout managers with complex dynamic layout behaviours.
It can be used to detect and fix problems in legacy UIs, extend UIs with enhanced layout behaviours, and support the creation of flexible UI layouts.
arXiv Detail & Related papers (2022-02-23T13:57:25Z)
- Projection Mapping Implementation: Enabling Direct Externalization of Perception Results and Action Intent to Improve Robot Explainability [62.03014078810652]
Existing research on non-verbal cues, e.g., eye gaze or arm movement, may not accurately present a robot's internal states.
Projecting the states directly onto a robot's operating environment has the advantages of being direct, accurate, and more salient.
arXiv Detail & Related papers (2020-10-05T18:16:20Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
- ConvLab-2: An Open-Source Toolkit for Building, Evaluating, and Diagnosing Dialogue Systems [107.35174238206525]
ConvLab-2 is an open-source toolkit that enables researchers to build task-oriented dialogue systems with state-of-the-art models.
The analysis tool presents rich statistics and summarizes common mistakes from simulated dialogues.
The interactive tool allows developers to diagnose an assembled dialogue system by interacting with the system and modifying the output of each system component.
arXiv Detail & Related papers (2020-02-12T04:31:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.