Scenario-Based Field Testing of Drone Missions
- URL: http://arxiv.org/abs/2407.08359v1
- Date: Thu, 11 Jul 2024 10:12:13 GMT
- Title: Scenario-Based Field Testing of Drone Missions
- Authors: Michael Vierhauser, Kristof Meixner, Stefan Biffl,
- Abstract summary: This paper identifies requirements for field testing of drone missions.
It introduces the Field Testing Scenario Management (FiTS) approach for adaptive field testing guidance.
FiTS shall leverage concepts from scenario-based requirements engineering and Behavior-Driven Development to define structured and reusable test scenarios.
- Score: 6.219782508946943
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Testing and validating Cyber-Physical Systems (CPSs) in the aerospace domain, such as field testing of drone rescue missions, poses challenges due to volatile mission environments, such as changing weather conditions. While testing processes and methodologies are well established, structured guidance and execution support for field tests remain weak. This paper identifies requirements for field testing of drone missions and introduces the Field Testing Scenario Management (FiTS) approach for adaptive field testing guidance. FiTS aims to provide sufficient guidance for field testers as a foundation for efficient data collection, facilitating quality assurance and iterative improvement of field tests and CPSs. FiTS shall leverage concepts from scenario-based requirements engineering and Behavior-Driven Development to define structured and reusable test scenarios, with dedicated tasks and responsibilities for role-specific guidance. We evaluate FiTS by (i) applying it to three use cases of a search-and-rescue drone application to demonstrate feasibility and (ii) conducting interviews with experienced drone developers to assess its usefulness and collect further requirements. The study results indicate that FiTS is feasible and useful for facilitating drone field testing and data analysis.
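The abstract describes structured, reusable, BDD-style test scenarios with role-specific tasks and responsibilities, but does not show a concrete scenario format. The sketch below is a hypothetical illustration of how such a scenario could be modeled in Python; the role names, steps, and fields are assumptions for illustration, not the actual FiTS data model.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a structured, reusable field-test scenario in the
# Given/When/Then style of Behavior-Driven Development. Role names, steps,
# and fields are illustrative assumptions, not the FiTS implementation.

@dataclass
class Step:
    keyword: str           # "Given", "When", or "Then"
    description: str       # human-readable test step
    responsible_role: str  # e.g. "Safety Pilot", "Mission Operator", "Data Analyst"

@dataclass
class FieldTestScenario:
    name: str
    preconditions: List[str]            # e.g. weather limits, airspace clearance
    steps: List[Step] = field(default_factory=list)

search_and_rescue = FieldTestScenario(
    name="Locate missing person in open terrain",
    preconditions=["wind below 8 m/s", "visual line of sight maintained"],
    steps=[
        Step("Given", "the drone is armed and hovering at 30 m", "Safety Pilot"),
        Step("When", "the search pattern over the target area is started", "Mission Operator"),
        Step("Then", "a person detection is reported with GPS coordinates", "Data Analyst"),
    ],
)

# Each step carries an explicit responsibility, so role-specific guidance and
# the data collected in the field can be traced back to the executing role.
for step in search_and_rescue.steps:
    print(f"[{step.responsible_role}] {step.keyword} {step.description}")
```

Attaching a responsible role to every step mirrors the paper's emphasis on role-specific guidance and on collecting field data that can later be analyzed per task.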
Related papers
- BoostAdapter: Improving Vision-Language Test-Time Adaptation via Regional Bootstrapping [64.8477128397529]
We propose a test-time adaptation framework that bridges training-required and training-free approaches.
We maintain a light-weight key-value memory for feature retrieval from instance-agnostic historical samples and instance-aware boosting samples.
We theoretically justify the rationality behind our method and empirically verify its effectiveness on both the out-of-distribution and the cross-domain datasets.
arXiv Detail & Related papers (2024-10-20T15:58:43Z)
- Generating Test Scenarios from NL Requirements using Retrieval-Augmented LLMs: An Industrial Study [5.179738379203527]
This paper presents an automated approach (RAGTAG) for test scenario generation using Retrieval-Augmented Generation (RAG) with Large Language Models (LLMs).
We evaluate RAGTAG on two industrial projects from Austrian Post with bilingual requirements in German and English.
arXiv Detail & Related papers (2024-04-19T10:27:40Z)
- Runtime Verification and Field-based Testing for ROS-based Robotic Systems [8.675312581079039]
No clear guidance exists for architecting ROS-based systems to enable runtime verification and field-based testing.
This paper aims to fill this gap by providing guidelines to help developers and quality assurance (QA) teams develop, verify, or test their robots in the field.
arXiv Detail & Related papers (2024-04-17T15:52:29Z)
- Automated System-level Testing of Unmanned Aerial Systems [2.2249176072603634]
A major requirement of international safety standards is to perform rigorous system-level testing of avionics software systems.
The proposed approach (AITester) utilizes model-based testing and artificial intelligence (AI) techniques to automatically generate, execute, and evaluate various test scenarios.
arXiv Detail & Related papers (2024-03-23T14:47:26Z)
- Better Practices for Domain Adaptation [62.70267990659201]
Domain adaptation (DA) aims to provide frameworks for adapting models to deployment data without using labels.
The lack of a clear validation protocol for DA has led to bad practices in the literature.
We show challenges across all three branches of domain adaptation methodology.
arXiv Detail & Related papers (2023-09-07T17:44:18Z)
- DroneReqValidator: Facilitating High Fidelity Simulation Testing for Uncrewed Aerial Systems Developers [8.290044674335473]
sUAS developers aim to validate the reliability and safety of their applications through simulation testing.
The dynamic nature of the real-world environment causes unique software faults that may only be revealed through field testing.
DroneReqValidator (DRV) offers a comprehensive small Unmanned Aerial Vehicle (sUAV) simulation ecosystem.
arXiv Detail & Related papers (2023-07-31T22:13:57Z)
- A Requirements-Driven Platform for Validating Field Operations of Small Uncrewed Aerial Vehicles [48.67061953896227]
DroneReqValidator (DRV) allows sUAS developers to define the operating context, configure multi-sUAS mission requirements, specify safety properties, and deploy their own custom sUAS applications in a high-fidelity 3D environment.
The DRV Monitoring system collects runtime data from the sUAS and its environment, analyzes compliance with the specified safety properties, and captures violations (an illustrative monitoring sketch follows this list).
arXiv Detail & Related papers (2023-07-01T02:03:49Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- Socratic Pretraining: Question-Driven Pretraining for Controllable Summarization [89.04537372465612]
Socratic pretraining is a question-driven, unsupervised pretraining objective designed to improve controllability in summarization tasks.
Our results show that Socratic pretraining cuts task-specific labeled data requirements in half.
arXiv Detail & Related papers (2022-12-20T17:27:10Z)
- Rearrangement: A Challenge for Embodied AI [229.8891614821016]
We describe a framework for research and evaluation in Embodied AI.
Our proposal is based on a canonical task: Rearrangement.
We present experimental testbeds of rearrangement scenarios in four different simulation environments.
arXiv Detail & Related papers (2020-11-03T19:42:32Z)
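As a rough illustration of the runtime monitoring idea described for DroneReqValidator above (collecting telemetry, checking safety properties, and capturing violations), here is a minimal Python sketch. The property names, thresholds, and telemetry fields are hypothetical assumptions for illustration, not DRV's actual API.

```python
from dataclasses import dataclass
from typing import List

# Minimal sketch of runtime safety-property monitoring: evaluate each
# telemetry sample against configured properties and record violations.
# Thresholds and field names are assumptions, not taken from DRV.

@dataclass
class Telemetry:
    timestamp: float
    altitude_m: float
    distance_from_home_m: float

@dataclass
class Violation:
    timestamp: float
    rule: str
    observed: float

MAX_ALTITUDE_M = 120.0     # hypothetical regulatory altitude ceiling
GEOFENCE_RADIUS_M = 500.0  # hypothetical mission geofence radius

def check(sample: Telemetry) -> List[Violation]:
    """Evaluate one telemetry sample against the configured safety properties."""
    violations = []
    if sample.altitude_m > MAX_ALTITUDE_M:
        violations.append(Violation(sample.timestamp, "max_altitude", sample.altitude_m))
    if sample.distance_from_home_m > GEOFENCE_RADIUS_M:
        violations.append(Violation(sample.timestamp, "geofence", sample.distance_from_home_m))
    return violations

# Example run over a short synthetic telemetry trace.
trace = [
    Telemetry(0.0, 95.0, 120.0),
    Telemetry(1.0, 130.0, 140.0),  # altitude violation
    Telemetry(2.0, 110.0, 620.0),  # geofence violation
]
captured = [v for sample in trace for v in check(sample)]
for v in captured:
    print(f"t={v.timestamp:.1f}s rule={v.rule} observed={v.observed:.1f}")
```

In a field-testing setting, violations captured this way would feed the kind of post-mission data analysis and iterative improvement that both DRV and FiTS aim to support.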
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.