Testing Resource Isolation for System-on-Chip Architectures
- URL: http://arxiv.org/abs/2403.18720v1
- Date: Wed, 27 Mar 2024 16:11:23 GMT
- Title: Testing Resource Isolation for System-on-Chip Architectures
- Authors: Philippe Ledent, Radu Mateescu, Wendelin Serwe
- Abstract summary: Ensuring resource isolation at the hardware level is a crucial step towards more security inside the Internet of Things.
We illustrate the modeling aspects in test generation for resource isolation, namely modeling the behavior and expressing the intended test scenario.
- Score: 0.9176056742068811
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensuring resource isolation at the hardware level is a crucial step towards more security inside the Internet of Things. Even though there is still no generally accepted technique to generate appropriate tests, it became clear that tests should be generated at the system level. In this paper, we illustrate the modeling aspects in test generation for resource isolation, namely modeling the behavior and expressing the intended test scenario. We present both aspects using the industrial standard PSS and an academic approach based on conformance testing.
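The abstract's two modeling aspects (modeling the behavior and expressing the intended test scenario) can be illustrated with a toy labeled transition system in the spirit of conformance testing. This is a hedged sketch only: the states, actions, and the `check_trace` helper below are hypothetical illustrations, not taken from the paper or from PSS.

```python
# Toy behavioral model as a labeled transition system (LTS):
# (state, action) -> next state. Only the owner of a resource may
# access it; a cross-domain access has no transition and is rejected.
# All names here are made up for illustration.
TRANSITIONS = {
    ("idle", "grant_A"): "A_owns",
    ("A_owns", "A_reads"): "A_owns",
    ("A_owns", "release_A"): "idle",
    ("idle", "grant_B"): "B_owns",
    ("B_owns", "B_reads"): "B_owns",
    ("B_owns", "release_B"): "idle",
}

def check_trace(trace, start="idle"):
    """Return True if every action in the trace is allowed by the model."""
    state = start
    for action in trace:
        key = (state, action)
        if key not in TRANSITIONS:
            return False  # isolation violation or undefined behavior
        state = TRANSITIONS[key]
    return True

# A legal scenario: A acquires, uses, and releases its resource.
assert check_trace(["grant_A", "A_reads", "release_A"])
# An isolation violation: B reads while A owns the resource.
assert not check_trace(["grant_A", "B_reads"])
```

In this framing, a test scenario is simply a trace; the model acts as the oracle that classifies the trace as conformant or not.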
Related papers
- Benchmarks as Microscopes: A Call for Model Metrology [76.64402390208576]
Modern language models (LMs) pose a new challenge in capability assessment.
To be confident in our metrics, we need a new discipline of model metrology.
arXiv Detail & Related papers (2024-07-22T17:52:12Z)
- Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph [85.51252685938564]
Uncertainty quantification (UQ) is becoming increasingly recognized as a critical component of applications that rely on machine learning (ML)
As with other ML models, large language models (LLMs) are prone to making incorrect predictions, "hallucinating" by fabricating claims, or simply generating low-quality output for a given input.
We introduce a novel benchmark that implements a collection of state-of-the-art UQ baselines, and provides an environment for controllable and consistent evaluation of novel techniques.
arXiv Detail & Related papers (2024-06-21T20:06:31Z)
- Test-Time Domain Generalization for Face Anti-Spoofing [60.94384914275116]
Face Anti-Spoofing (FAS) is pivotal in safeguarding facial recognition systems against presentation attacks.
We introduce a novel Test-Time Domain Generalization framework for FAS, which leverages the testing data to boost the model's generalizability.
Our method, consisting of Test-Time Style Projection (TTSP) and Diverse Style Shifts Simulation (DSSS), effectively projects the unseen data to the seen domain space.
arXiv Detail & Related papers (2024-03-28T11:50:23Z)
- Coupled Requirements-driven Testing of CPS: From Simulation To Reality [5.7736484832934325]
Failures in safety-critical Cyber-Physical Systems (CPS) can lead to severe incidents impacting physical infrastructure or even harming humans.
Current simulation and field testing practices, particularly in the domain of small Unmanned Aerial Systems (sUAS), are ad-hoc and lack a thorough, structured testing process.
We have developed an initial framework for validating CPS, specifically focusing on sUAS and robotic applications.
arXiv Detail & Related papers (2024-03-24T20:32:12Z)
- A Requirements-Driven Platform for Validating Field Operations of Small Uncrewed Aerial Vehicles [48.67061953896227]
DroneReqValidator (DRV) allows sUAS developers to define the operating context, configure multi-sUAS mission requirements, specify safety properties, and deploy their own custom sUAS applications in a high-fidelity 3D environment.
The DRV Monitoring system collects runtime data from sUAS and the environment, analyzes compliance with safety properties, and captures violations.
arXiv Detail & Related papers (2023-07-01T02:03:49Z)
- Zero-shot Model Diagnosis [80.36063332820568]
A common approach to evaluating deep learning models is to build a labeled test set with attributes of interest and assess how well the model performs on it.
This paper argues that Zero-shot Model Diagnosis (ZOOM) is possible without the need for a test set or labeling.
arXiv Detail & Related papers (2023-03-27T17:59:33Z)
- Overview of Test Coverage Criteria for Test Case Generation from Finite State Machines Modelled as Directed Graphs [0.12891210250935145]
Test Coverage criteria are an essential concept for test engineers when generating the test cases from a System Under Test model.
Test Coverage criteria define the number of actions or combinations by which a system is tested.
This study summarizes all commonly used test coverage criteria for Finite State Machines and discusses them with regard to their subsumption, equivalence, or non-comparability.
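One of the simplest criteria in this family, all-transitions coverage, can be sketched on an FSM given as a directed graph: each test case reaches a transition's source state and then fires it, so that together the test cases traverse every edge at least once. The FSM below is an assumed example, not one from the paper.

```python
# Minimal sketch: test generation for all-transitions coverage
# on an FSM modeled as a directed graph (adjacency list:
# state -> list of (input, next_state)). Example FSM is made up.
from collections import deque

FSM = {
    "s0": [("a", "s1"), ("b", "s2")],
    "s1": [("c", "s2")],
    "s2": [("d", "s0")],
}

def shortest_path(src, dst):
    """BFS over the FSM graph; returns the input sequence from src to dst."""
    queue = deque([(src, [])])
    seen = {src}
    while queue:
        state, path = queue.popleft()
        if state == dst:
            return path
        for inp, nxt in FSM.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [inp]))
    return None

def all_transitions_tests(initial="s0"):
    """One test case per transition: reach its source state, then fire it."""
    tests = []
    for state, edges in FSM.items():
        for inp, _nxt in edges:
            tests.append(shortest_path(initial, state) + [inp])
    return tests

tests = all_transitions_tests()
# Every transition label is exercised by some test case.
assert {inp for t in tests for inp in t} == {"a", "b", "c", "d"}
```

The one-test-per-transition strategy is deliberately naive; stronger criteria discussed in such surveys (e.g. transition-pair coverage) subsume it at the cost of longer test suites.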
arXiv Detail & Related papers (2022-03-17T20:30:14Z)
- Complete Agent-driven Model-based System Testing for Autonomous Systems [0.0]
A novel approach to testing complex autonomous transportation systems is described.
It is intended to mitigate some of the most critical problems regarding verification and validation.
arXiv Detail & Related papers (2021-10-25T01:55:24Z)
- Data Driven Testing of Cyber Physical Systems [12.93632948681342]
We propose an approach to automatically generate fault-revealing test cases for CPS.
Data collected from an application managing a smart building have been used to learn models of the environment.
arXiv Detail & Related papers (2021-02-23T04:55:10Z)
- Test and Evaluation Framework for Multi-Agent Systems of Autonomous Intelligent Agents [0.0]
We consider the challenges of developing a unifying test and evaluation framework for complex ensembles of cyber-physical systems with embedded artificial intelligence.
We propose a framework that incorporates test and evaluation throughout not only the development life cycle, but continues into operation as the system learns and adapts.
arXiv Detail & Related papers (2021-01-25T21:42:27Z)
- Dynamic Causal Effects Evaluation in A/B Testing with a Reinforcement Learning Framework [68.96770035057716]
A/B testing is a business strategy to compare a new product with an old one in pharmaceutical, technological, and traditional industries.
This paper introduces a reinforcement learning framework for carrying out A/B testing in online experiments.
arXiv Detail & Related papers (2020-02-05T10:25:02Z)
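This is not the paper's causal-effects framework, but a generic bandit-style sketch conveys the basic idea of adaptively allocating traffic between two variants while estimating their difference in outcome. The conversion rates and all names below are made-up illustration values.

```python
# Hedged sketch: epsilon-greedy allocation between variants A and B,
# estimating the uplift of B over A as a difference of mean rewards.
# TRUE_RATE values are fictional; this is not the paper's RL framework.
import random

random.seed(0)
TRUE_RATE = {"A": 0.10, "B": 0.12}  # assumed conversion rates

counts = {"A": 0, "B": 0}
rewards = {"A": 0.0, "B": 0.0}

def mean(v):
    """Observed mean reward of a variant (0.0 before any observations)."""
    return rewards[v] / counts[v] if counts[v] else 0.0

for _ in range(10_000):
    if random.random() < 0.1:                # explore: random variant
        variant = random.choice(["A", "B"])
    else:                                    # exploit: current best variant
        variant = max(("A", "B"), key=mean)
    counts[variant] += 1
    rewards[variant] += random.random() < TRUE_RATE[variant]

effect = mean("B") - mean("A")  # estimated uplift of B over A
```

Unlike this static estimate, the RL framing in the paper targets effects of sequentially dependent decisions, where each allocation can influence later states and rewards.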
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.