Timing Analysis of Embedded Software Updates
- URL: http://arxiv.org/abs/2304.14213v2
- Date: Fri, 7 Jul 2023 07:58:36 GMT
- Title: Timing Analysis of Embedded Software Updates
- Authors: Ahmed El Yaacoub, Luca Mottola, Thiemo Voigt, Philipp Rümmer
- Abstract summary: We present RETA, a differential timing analysis technique to verify the impact of an update on the execution time of embedded software.
We adapt RETA for integration into aiT, an industrial timing analysis tool, and also develop a complete implementation in a tool called DELTA.
We show that RETA decreases aiT's analysis time by 45% and its memory consumption by 8.9%, whereas removing RETA from DELTA, effectively rendering it a regular timing analysis tool, increases its analysis time by 27%.
- Score: 1.7027593388928293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present RETA (Relative Timing Analysis), a differential timing analysis
technique to verify the impact of an update on the execution time of embedded
software. Timing analysis is computationally expensive and labor intensive.
Software updates render repeating the analysis from scratch a waste of
resources and time, because their impact is inherently confined. To determine
this boundary, in RETA we apply a slicing procedure that identifies all
relevant code segments and a statement categorization that determines how to
analyze each such line of code. We adapt a subset of RETA for integration into
aiT, an industrial timing analysis tool, and also develop a complete
implementation in a tool called DELTA. Based on staple benchmarks and realistic
code updates from official repositories, we test the accuracy by analyzing the
worst-case execution time (WCET) before and after an update, comparing the
measures with the use of the unmodified aiT as well as real executions on
embedded hardware. DELTA returns WCET information that ranges from exactly the
WCET of real hardware to 148% of the new version's measured WCET. With the same
benchmarks, the unmodified aiT estimates are 112% and 149% of the actual
executions; therefore, even when DELTA is pessimistic, an industry-strength
tool such as aiT cannot do better. Crucially, we also show that RETA decreases
aiT's analysis time by 45% and its memory consumption by 8.9%, whereas removing
RETA from DELTA, effectively rendering it a regular timing analysis tool,
increases its analysis time by 27%.
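The core idea of the abstract, re-analyzing only the code affected by an update and reusing prior results elsewhere, can be illustrated with a minimal sketch. The block-level model and all names below are illustrative assumptions, not the paper's actual slicing or categorization algorithm:

```python
# Minimal sketch of a differential WCET analysis in the spirit of RETA.
# A program is modeled as named basic blocks of (instruction, cost)
# pairs; unchanged blocks reuse cached bounds, changed blocks are
# re-analyzed. This is a toy model, not the authors' implementation.

def analyze_block(block):
    """Stand-in for an expensive per-block WCET analysis (in cycles)."""
    return sum(cost for _, cost in block)

def differential_wcet(old_prog, new_prog, cache):
    """old_prog/new_prog: dict mapping block name -> [(instr, cost), ...]."""
    total = 0
    for name, block in new_prog.items():
        if name in old_prog and old_prog[name] == block and name in cache:
            total += cache[name]          # unchanged block: reuse cached bound
        else:
            bound = analyze_block(block)  # changed or new block: re-analyze
            cache[name] = bound
            total += bound
    return total

old = {"init": [("mov", 1), ("ldr", 3)], "loop": [("add", 1), ("bne", 2)]}
new = {"init": [("mov", 1), ("ldr", 3)], "loop": [("add", 1), ("mul", 4), ("bne", 2)]}
cache = {name: analyze_block(b) for name, b in old.items()}
print(differential_wcet(old, new, cache))  # only "loop" is re-analyzed
```

The savings reported in the abstract come from the same principle at industrial scale: the fewer blocks an update touches, the less work the analysis repeats.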
Related papers
- Easing Maintenance of Academic Static Analyzers [0.0]
Mopsa is a static analysis platform that aims at being sound.
This article documents the tools and techniques we have come up with to simplify the maintenance of Mopsa since 2017.
arXiv Detail & Related papers (2024-07-17T11:29:21Z)
- Interval Analysis in Industrial-Scale BMC Software Verifiers: A Case Study [4.024189528766689]
We evaluate whether the computational cost of interval analysis yields significant enough improvements in BMC's performance to justify its use.
Our results show that interval analysis is essential in solving 203 unique benchmarks.
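The interval analysis evaluated above can be sketched with a tiny interval domain. The operations and the pruning use case are illustrative assumptions, not the verifier's actual engine:

```python
# Tiny interval-domain sketch: propagate [lo, hi] bounds through a few
# arithmetic operations, as an interval analysis might do to discharge
# infeasible verification conditions before invoking a BMC solver.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The extremes of a product lie among the four endpoint products.
        products = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
        return Interval(min(products), max(products))

    def contains(self, value):
        return self.lo <= value <= self.hi

x = Interval(0, 10)
y = Interval(-3, 3)
z = x + x * y            # ranges over [0 + (-30), 10 + 30] = [-30, 40]
print(z.lo, z.hi)
# An assertion requiring z == 100 is refuted by intervals alone,
# so the corresponding BMC query never needs a SAT/SMT call:
print(z.contains(100))   # False
```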
arXiv Detail & Related papers (2024-06-21T16:18:57Z)
- TSI-Bench: Benchmarking Time Series Imputation [52.27004336123575]
TSI-Bench is a comprehensive benchmark suite for time series imputation utilizing deep learning techniques.
The TSI-Bench pipeline standardizes experimental settings to enable fair evaluation of imputation algorithms.
TSI-Bench innovatively provides a systematic paradigm to tailor time series forecasting algorithms for imputation purposes.
arXiv Detail & Related papers (2024-06-18T16:07:33Z)
- Active Test-Time Adaptation: Theoretical Analyses and An Algorithm [51.84691955495693]
Test-time adaptation (TTA) addresses distribution shifts for streaming test data in unsupervised settings.
We propose the novel problem setting of active test-time adaptation (ATTA) that integrates active learning within the fully TTA setting.
arXiv Detail & Related papers (2024-04-07T22:31:34Z)
- Automating Dataset Updates Towards Reliable and Timely Evaluation of Large Language Models [81.27391252152199]
Large language models (LLMs) have achieved impressive performance across various natural language benchmarks.
We propose to automate dataset updating and provide systematic analysis regarding its effectiveness.
There are two updating strategies: 1) a mimicking strategy that generates similar samples based on the original data, and 2) an extending strategy that further expands existing samples.
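The two strategies can be sketched with trivial string templates standing in for an LLM generator. The helper names and templates are hypothetical, not the paper's code:

```python
# Illustrative sketch of the two dataset-updating strategies:
# "mimic" produces a similar sample by swapping the target entity,
# "extend" expands an existing sample with a follow-up request.
import random

def mimic(sample, entities):
    """Mimicking strategy: generate a similar sample from the original."""
    new_entity = random.choice([e for e in entities if e != sample["entity"]])
    return {"entity": new_entity,
            "question": sample["question"].replace(sample["entity"], new_entity)}

def extend(sample):
    """Extending strategy: expand an existing sample in place."""
    return dict(sample, question=sample["question"] +
                " Explain your reasoning step by step.")

seed = {"entity": "Paris", "question": "What country is Paris in?"}
print(mimic(seed, ["Paris", "Rome", "Berlin"]))
print(extend(seed))
```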
arXiv Detail & Related papers (2024-02-19T07:15:59Z)
- E&V: Prompting Large Language Models to Perform Static Analysis by Pseudo-code Execution and Verification [7.745665775992235]
Large Language Models (LLMs) offer new capabilities for software engineering tasks.
LLMs simulate the execution of pseudo-code, effectively conducting static analysis encoded in the pseudo-code with minimal human effort.
E&V includes a verification process for pseudo-code execution without needing an external oracle.
arXiv Detail & Related papers (2023-12-13T19:31:00Z)
- PACE: A Program Analysis Framework for Continuous Performance Prediction [0.0]
PACE is a program analysis framework that provides continuous feedback on the performance impact of pending code updates.
We design performance microbenchmarks by mapping the execution time of functional test cases given a code update.
Our experiments show strong accuracy in predicting code performance, outperforming the current state-of-the-art by 75% on neural-represented code stylometry features.
arXiv Detail & Related papers (2023-12-01T20:43:34Z)
- QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights improves the relative performance of the Llama 2 model by up to 15% points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z)
- Improving Text Matching in E-Commerce Search with A Rationalizable, Intervenable and Fast Entity-Based Relevance Model [78.80174696043021]
We propose a novel model called the Entity-Based Relevance Model (EBRM).
The decomposition allows us to use a Cross-encoder QE relevance module for high accuracy.
We also show that pretraining the QE module with auto-generated QE data from user logs can further improve the overall performance.
arXiv Detail & Related papers (2023-07-01T15:44:53Z)
- Automated Machine Learning Techniques for Data Streams [91.3755431537592]
This paper surveys the state-of-the-art open-source AutoML tools, applies them to data collected from streams, and measures how their performance changes over time.
The results show that off-the-shelf AutoML tools can provide satisfactory results, but in the presence of concept drift, detection or adaptation techniques must be applied to maintain predictive accuracy over time.
arXiv Detail & Related papers (2021-06-14T11:42:46Z)
- Improving IoT Analytics through Selective Edge Execution [0.0]
We propose to improve the performance of analytics by leveraging edge infrastructure.
We devise an algorithm that enables IoT devices to execute their routines locally, outsourcing them to cloudlet servers only when they predict a significant performance improvement.
arXiv Detail & Related papers (2020-03-07T15:02:23Z)
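The selective-offloading decision described above can be sketched in a few lines. The cost model, parameter names, and threshold are invented for illustration and are not the paper's algorithm:

```python
# Hedged sketch of selective edge execution: a device runs its analytics
# routine locally by default and outsources it to a cloudlet only when
# the predicted end-to-end gain outweighs the transfer overhead.

def should_offload(local_ms, remote_ms, transfer_ms, min_gain_ms=50.0):
    """Offload only if the predicted time saved is significant."""
    predicted_gain = local_ms - (remote_ms + transfer_ms)
    return predicted_gain >= min_gain_ms

# Routine A: large gain -> offload; routine B: gain eaten by transfer.
print(should_offload(local_ms=400, remote_ms=80, transfer_ms=120))  # True
print(should_offload(local_ms=150, remote_ms=40, transfer_ms=90))   # False
```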
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.