Towards Efficient Record and Replay: A Case Study in WeChat
- URL: http://arxiv.org/abs/2308.06657v2
- Date: Fri, 25 Aug 2023 09:14:35 GMT
- Title: Towards Efficient Record and Replay: A Case Study in WeChat
- Authors: Sidong Feng, Haochuan Lu, Ting Xiong, Yuetang Deng, Chunyang Chen
- Abstract summary: We introduce WeReplay, a lightweight image-based approach that dynamically adjusts inter-event time based on the GUI rendering state.
Our evaluation shows that our model achieves 92.1% precision and 93.3% recall in discerning GUI rendering states in the WeChat app.
- Score: 24.659458527088773
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: WeChat, a widely used messenger app boasting over 1 billion monthly active
users, requires effective app quality assurance for its complex features.
Record-and-replay tools are crucial in achieving this goal. Despite the
extensive development of these tools, the impact of waiting time between replay
events has been largely overlooked. On one hand, a long waiting time for
executing replay events on fully-rendered GUIs slows down the process. On the
other hand, a short waiting time can lead to events executing on
partially-rendered GUIs, negatively affecting replay effectiveness. An optimal
waiting time should strike a balance between effectiveness and efficiency. We
introduce WeReplay, a lightweight image-based approach that dynamically adjusts
inter-event time based on the GUI rendering state. Given a real-time stream of
the GUI, WeReplay employs a deep learning model to infer the
rendering state and synchronize with the replaying tool, scheduling the next
event when the GUI is fully rendered. Our evaluation shows that our model
achieves 92.1% precision and 93.3% recall in discerning GUI rendering states in
the WeChat app. Through assessing the performance in replaying 23 common WeChat
usage scenarios, WeReplay successfully replays all scenarios on the same and
different devices more efficiently than the state-of-the-practice baselines.
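To make the scheduling idea concrete, here is a minimal sketch of the adaptive wait loop, assuming a hypothetical driver with screenshot/dispatch methods and a binary rendering-state classifier; it illustrates the behaviour described above, not the authors' implementation.

```python
import time

def replay(events, driver, render_model, poll_interval=0.05, timeout=10.0):
    """Replay recorded events, firing each one only once the GUI is judged
    fully rendered, with a timeout fallback so replay never stalls."""
    for event in events:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            frame = driver.screenshot()               # hypothetical: grab current GUI frame
            if render_model.is_fully_rendered(frame): # hypothetical classifier call
                break                                 # GUI settled; safe to act
            time.sleep(poll_interval)                 # still rendering; poll again
        driver.dispatch(event)                        # execute the next replay event
```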
Related papers
- TIM: A Time Interval Machine for Audio-Visual Action Recognition [64.24297230981168]
We address the interplay between the two modalities in long videos by explicitly modelling the temporal extents of audio and visual events.
We propose the Time Interval Machine (TIM), in which a modality-specific time interval serves as a query to a transformer encoder.
We test TIM on three long audio-visual video datasets: EPIC-KITCHENS, Perception Test, and AVE.
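As a rough illustration of the interval-as-query idea, the sketch below embeds a (start, end, modality) triple into a query that cross-attends over a feature sequence; the dimensions, embedding MLP, and use of cross-attention are assumptions rather than the paper's exact encoder.

```python
import torch
import torch.nn as nn

class IntervalQuery(nn.Module):
    """Toy interval-as-query module: one query vector per time interval."""
    def __init__(self, dim=256):
        super().__init__()
        # (start, end, modality-id) -> query embedding (sizes are assumptions)
        self.embed = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, interval, feats):
        # interval: (B, 3) = [start, end, modality]; feats: (B, T, dim)
        q = self.embed(interval).unsqueeze(1)   # (B, 1, dim) query
        out, _ = self.attn(q, feats, feats)     # attend over the sequence
        return out.squeeze(1)                   # per-interval representation
```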
arXiv Detail & Related papers (2024-04-08T14:30:42Z)
- Exploring Dynamic Transformer for Efficient Object Tracking [58.120191254379854]
We propose DyTrack, a dynamic transformer framework for efficient tracking.
DyTrack automatically learns to configure proper reasoning routes for various inputs, gaining better utilization of the available computational budget.
Experiments on multiple benchmarks demonstrate that DyTrack achieves promising speed-precision trade-offs with only a single model.
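As a toy analogy for input-dependent reasoning routes, the early-exit sketch below halts computation once a per-block head is confident; DyTrack's actual routing mechanism is not detailed in this summary, so everything here is an assumption.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy dynamic-depth network: easy inputs exit after fewer blocks."""
    def __init__(self, dim=64, depth=4, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Linear(dim, dim) for _ in range(depth))
        self.halts = nn.ModuleList(nn.Linear(dim, 1) for _ in range(depth))
        self.threshold = threshold

    def forward(self, x):
        for block, halt in zip(self.blocks, self.halts):
            x = torch.relu(block(x))
            # Stop spending compute once the halting head is confident.
            if torch.sigmoid(halt(x)).mean() > self.threshold:
                break
        return x
```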
arXiv Detail & Related papers (2024-03-26T12:31:58Z)
- Graph-based Asynchronous Event Processing for Rapid Object Recognition [59.112755601918074]
Event cameras capture an asynchronous event stream in which each event encodes pixel location, trigger time, and the polarity of the brightness change.
We introduce a novel graph-based framework for event cameras, namely SlideGCN.
Our approach can efficiently process data event by event, unlocking the low-latency nature of event data while still maintaining the graph's structure internally.
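A generic sketch of event-by-event graph maintenance follows; the spatio-temporal neighbourhood rule and window size are assumptions, and SlideGCN's incremental update is more refined than this.

```python
from collections import deque

def neighbours(event, window, r=5, dt=10_000):
    """event: (x, y, t); link it to window events within radius r and age dt."""
    x, y, t = event
    return [(ex, ey, et) for (ex, ey, et) in window
            if abs(ex - x) <= r and abs(ey - y) <= r and t - et <= dt]

def stream_graph(events, window_len=4096):
    """Yield each event with its incident edges as the graph slides forward."""
    window = deque(maxlen=window_len)   # old nodes slide out automatically
    for ev in events:
        edges = neighbours(ev, window)  # incremental edge insertion
        window.append(ev)
        yield ev, edges                 # ready for a per-event GCN update
```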
arXiv Detail & Related papers (2023-08-28T08:59:57Z)
- Curious Replay for Model-based Adaptation [3.9981390090442686]
We present Curious Replay, a form of prioritized experience replay tailored to model-based agents.
Agents using Curious Replay exhibit improved performance in an exploration paradigm inspired by animal behavior.
DreamerV3 with Curious Replay surpasses state-of-the-art performance on the Crafter benchmark.
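The sketch below is a generic prioritized replay buffer whose priorities are refreshed from a model-based error signal; the specific curiosity terms used by Curious Replay differ, so the weighting is a placeholder.

```python
import random

class PrioritizedBuffer:
    """Minimal proportional prioritized replay (illustrative, not the paper's)."""
    def __init__(self):
        self.items, self.priorities = [], []

    def add(self, transition, priority=1.0):
        self.items.append(transition)
        self.priorities.append(priority)

    def sample(self, k):
        # Draw transitions in proportion to their current priority.
        idxs = random.choices(range(len(self.items)), weights=self.priorities, k=k)
        return idxs, [self.items[i] for i in idxs]

    def update(self, idxs, model_errors):
        # Re-prioritize with the world model's prediction error so that
        # surprising, poorly modelled experience is replayed more often.
        for i, err in zip(idxs, model_errors):
            self.priorities[i] = float(err) + 1e-3  # keep every item sampleable
```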
arXiv Detail & Related papers (2023-06-28T05:34:53Z)
- Accelerated Coordinate Encoding: Learning to Relocalize in Minutes using RGB and Poses [19.362802419289526]
We show how a learning-based relocalization system can achieve the same accuracy in less than 5 minutes.
Our approach is up to 300x faster in mapping than state-of-the-art scene coordinate regression.
arXiv Detail & Related papers (2023-05-23T13:38:01Z)
- EV-Catcher: High-Speed Object Catching Using Low-latency Event-based Neural Networks [107.62975594230687]
We demonstrate an application where event cameras excel: accurately estimating the impact location of fast-moving objects.
We introduce a lightweight event representation called Binary Event History Image (BEHI) to encode event data at low latency.
We show that the system is capable of achieving a success rate of 81% in catching balls targeted at different locations, with a velocity of up to 13 m/s even on compute-constrained embedded platforms.
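One plausible reading of the BEHI encoding, sketched below, is a single binary frame marking every pixel that fired within a time window; polarity handling and window length here are assumptions.

```python
import numpy as np

def behi(events, height, width, t_start, t_end):
    """events: iterable of (x, y, t, polarity) -> binary (H, W) image."""
    img = np.zeros((height, width), dtype=np.uint8)
    for x, y, t, _ in events:
        if t_start <= t < t_end:
            img[int(y), int(x)] = 1   # this pixel saw at least one event
    return img
```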
arXiv Detail & Related papers (2023-04-14T15:23:28Z)
- Event Transformer+. A multi-purpose solution for efficient event data processing [13.648678472312374]
Event cameras record sparse illumination changes with high temporal resolution and high dynamic range.
Current methods often ignore specific event-data properties, leading to the development of generic but computationally expensive algorithms.
We propose Event Transformer+, which improves on our earlier work EvT with a refined patch-based event representation.
arXiv Detail & Related papers (2022-11-22T12:28:37Z)
- Towards cumulative race time regression in sports: I3D ConvNet transfer learning in ultra-distance running events [1.4859458229776121]
We propose regressing an ultra-distance runner's cumulative race time (CRT) using only a few seconds of footage as input.
We show that the resulting neural network can provide a remarkable performance for short input footage.
arXiv Detail & Related papers (2022-08-23T20:53:01Z)
- Real-time Object Detection for Streaming Perception [84.2559631820007]
Streaming perception jointly evaluates latency and accuracy as a single metric for online video perception.
We build a simple and effective framework for streaming perception.
Our method achieves competitive performance on Argoverse-HD dataset and improves the AP by 4.9% compared to the strong baseline.
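The core of streaming evaluation is that a detector is judged on the most recent prediction that has already finished computing, so slow models get matched against stale frames; below is a minimal helper for that matching rule (data layout assumed).

```python
import bisect

def latest_available(finish_times, preds, query_t):
    """finish_times: sorted completion times; preds: results in the same order.
    Return the newest prediction finished by query_t, or None if none has."""
    i = bisect.bisect_right(finish_times, query_t) - 1
    return preds[i] if i >= 0 else None
```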
arXiv Detail & Related papers (2022-03-23T11:33:27Z)
- Parallel Actors and Learners: A Framework for Generating Scalable RL Implementations [14.432131909590824]
Reinforcement Learning (RL) has achieved significant success in application domains such as robotics, games, health care and others.
Current implementations exhibit poor performance due to challenges such as irregular memory accesses and synchronization overheads.
We propose a framework for generating scalable reinforcement learning implementations on multicore systems.
arXiv Detail & Related papers (2021-10-03T21:00:53Z)
- AdaFuse: Adaptive Temporal Fusion Network for Efficient Action Recognition [68.70214388982545]
Temporal modelling is key to efficient video action recognition.
We introduce an adaptive temporal fusion network, called AdaFuse, that fuses channels from current and past feature maps.
Our approach can achieve about 40% computation savings with comparable accuracy to state-of-the-art methods.
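Below is a soft, simplified sketch of per-channel fusion between current and past feature maps; AdaFuse itself learns hard keep/reuse/skip decisions to save computation, so the sigmoid gate is a stand-in.

```python
import torch
import torch.nn as nn

class ChannelFuse(nn.Module):
    """Toy per-channel gate mixing frame-t and frame-(t-1) features."""
    def __init__(self, channels):
        super().__init__()
        self.policy = nn.Linear(2 * channels, channels)  # illustrative gating policy

    def forward(self, cur, past):
        # cur, past: (B, C, H, W) feature maps from consecutive frames
        stats = torch.cat([cur.mean(dim=(2, 3)), past.mean(dim=(2, 3))], dim=1)
        gate = torch.sigmoid(self.policy(stats))[:, :, None, None]  # (B, C, 1, 1)
        return gate * cur + (1 - gate) * past  # per-channel soft fusion
```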
arXiv Detail & Related papers (2021-02-10T23:31:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.