Taxonomy of A Decision Support System for Adaptive Experimental Design
in Field Robotics
- URL: http://arxiv.org/abs/2210.08397v1
- Date: Sat, 15 Oct 2022 23:28:30 GMT
- Title: Taxonomy of A Decision Support System for Adaptive Experimental Design
in Field Robotics
- Authors: Jason M. Gregory, Sarah Al-Hussaini, Ali-akbar Agha-mohammadi,
Satyandra K. Gupta
- Abstract summary: We propose a Decision Support System (DSS) to amplify the human's decision-making abilities and enable principled decision-making in field experiments.
We construct and present our taxonomy using examples and trends from DSS literature, including works involving artificial intelligence and Intelligent DSSs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Experimental design in field robotics is an adaptive human-in-the-loop
decision-making process in which an experimenter learns about system
performance and limitations through interactions with a robot in the form of
constructed experiments. This can be challenging because of system complexity,
the need to operate in unstructured environments, and the competing objectives
of maximizing information gain while simultaneously minimizing experimental
costs. Based on the successes in other domains, we propose the use of a
Decision Support System (DSS) to amplify the human's decision-making abilities,
overcome their inherent shortcomings, and enable principled decision-making in
field experiments. In this work, we propose common terminology and a six-stage
taxonomy of DSSs specifically for adaptive experimental design of more
informative tests and reduced experimental costs. We construct and present our
taxonomy using examples and trends from DSS literature, including works
involving artificial intelligence and Intelligent DSSs. Finally, we identify
critical technical gaps and opportunities for future research to direct the
scientific community in the pursuit of next-generation DSSs for experimental
design.
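The competing objectives described above (maximizing information gain while minimizing experimental cost) are commonly formalized as a cost-penalized expected information gain (EIG) over candidate experiments. The following sketch is illustrative and not from the paper: the hypotheses, outcome likelihoods, costs, and the weight LAMBDA are all assumed toy values.

```python
import math

# Toy sketch (not from the paper): choosing the next field experiment by
# trading off expected information gain (EIG) against experiment cost.
# Hypotheses, designs, likelihoods, and costs below are all illustrative.

prior = {"robust": 0.5, "fragile": 0.5}  # belief over system behavior

# P(success | hypothesis, design) for two candidate experiments
likelihood = {
    "short_run": {"robust": 0.9, "fragile": 0.6},
    "long_run":  {"robust": 0.8, "fragile": 0.2},
}
cost = {"short_run": 1.0, "long_run": 3.0}  # e.g., hours of field time
LAMBDA = 0.05  # cost weight; tunes the information/cost trade-off

def entropy(p):
    return -sum(x * math.log2(x) for x in p.values() if x > 0)

def expected_info_gain(design):
    """Mutual information between outcome {success, failure} and hypothesis."""
    eig = 0.0
    for outcome in (True, False):
        # joint P(outcome, h) and posterior P(h | outcome) under this design
        joint = {h: prior[h] * (likelihood[design][h] if outcome
                                else 1 - likelihood[design][h]) for h in prior}
        p_outcome = sum(joint.values())
        posterior = {h: joint[h] / p_outcome for h in joint}
        eig += p_outcome * (entropy(prior) - entropy(posterior))
    return eig

# Greedy choice of the next experiment under the penalized criterion
best = max(cost, key=lambda d: expected_info_gain(d) - LAMBDA * cost[d])
print(best)  # the costlier "long_run" wins here because it discriminates better
```

Here the longer, costlier experiment is still selected because its outcome distribution separates the two hypotheses far more sharply; lowering LAMBDA or the cost gap would flip the choice.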
Related papers
- Amortized Bayesian Experimental Design for Decision-Making
We present an amortized decision-aware BED framework that prioritizes maximizing downstream decision utility.
We introduce a novel architecture, the Transformer Neural Decision Process (TNDP), capable of instantly proposing the next experimental design.
We demonstrate the performance of our method across several tasks, showing that it can deliver informative designs and facilitate accurate decision-making.
arXiv Detail & Related papers (2024-11-04T13:06:46Z)
- DISCOVERYWORLD: A Virtual Environment for Developing and Evaluating Automated Scientific Discovery Agents
We introduce DISCOVERYWORLD, the first virtual environment for developing and benchmarking an agent's ability to perform complete cycles of novel scientific discovery.
It includes 120 different challenge tasks spanning eight topics, each with three levels of difficulty and several parametric variations.
We find that strong baseline agents that perform well in prior published environments struggle on most DISCOVERYWORLD tasks.
arXiv Detail & Related papers (2024-06-10T20:08:44Z)
- MLXP: A Framework for Conducting Replicable Experiments in Python
We propose MLXP, an open-source, simple, and lightweight experiment management tool based on Python.
It streamlines the experimental process with minimal practitioner overhead while ensuring a high level of reproducibility.
arXiv Detail & Related papers (2024-02-21T14:22:20Z)
- Adaptive Instrument Design for Indirect Experiments
Unlike RCTs, indirect experiments estimate treatment effects by leveraging conditional instrumental variables.
In this paper we take the initial steps towards enhancing sample efficiency for indirect experiments by adaptively designing a data collection policy.
Our main contribution is a practical computational procedure that utilizes influence functions to search for an optimal data collection policy.
arXiv Detail & Related papers (2023-12-05T02:38:04Z)
- Machine learning enabled experimental design and parameter estimation for ultrafast spin dynamics
We introduce a methodology that combines machine learning with Bayesian optimal experimental design (BOED).
Our method employs a neural network model for large-scale spin dynamics simulations for precise distribution and utility calculations in BOED.
Our numerical benchmarks demonstrate the superior performance of our method in guiding XPFS experiments, predicting model parameters, and yielding more informative measurements within limited experimental time.
arXiv Detail & Related papers (2023-06-03T06:19:20Z)
- Online simulator-based experimental design for cognitive model selection
We propose BOSMOS: an approach to experimental design that can select between computational models without tractable likelihoods.
In simulated experiments, we demonstrate that the proposed BOSMOS technique can accurately select models in up to 2 orders of magnitude less time than existing LFI alternatives.
arXiv Detail & Related papers (2023-03-03T21:41:01Z)
- Adaptive Experimental Design and Counterfactual Inference
This paper shares lessons learned regarding the challenges and pitfalls of naively using adaptive experimentation systems in industrial settings.
We developed an adaptive experimental design framework for counterfactual inference based on these experiences.
arXiv Detail & Related papers (2022-10-25T22:29:16Z)
- Computational Experiments: Past, Present and Future
Computational experiments have emerged as a new method for quantitative analysis of cyber-physical-social systems (CPSS).
This paper outlines computational experiments from several key aspects, including origin, characteristics, methodological framework, key technologies, and some typical applications.
arXiv Detail & Related papers (2022-02-28T11:18:17Z)
- Implicit Deep Adaptive Design: Policy-Based Experimental Design without Likelihoods
implicit Deep Adaptive Design (iDAD) is a new method for performing adaptive experiments in real-time with implicit models.
iDAD amortizes the cost of Bayesian optimal experimental design (BOED) by learning a design policy network upfront.
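The amortization idea behind policy-based adaptive design (training a design policy upfront so that each adaptive step at deployment is a cheap forward pass rather than an inner BOED optimization) can be sketched minimally. This is a conceptual illustration, not the paper's method: the hand-coded linear "policy", its weights, and the stand-in outcome model are all assumed values standing in for a learned network and a real experiment.

```python
# Illustrative sketch of amortized adaptive design: a policy maps the
# experiment history directly to the next design, so deployment involves
# no per-step optimization. The linear policy below is a stand-in for a
# learned network; its weights are arbitrary illustrative values.

def policy(history, w_design=0.6, w_outcome=-0.4, bias=1.0):
    """Map a history of (design, outcome) pairs to the next design value."""
    x = bias
    for design, outcome in history:
        x += w_design * design + w_outcome * outcome
    return x

# Deployment loop: each call to policy() is a cheap forward pass.
history = []
for step in range(3):
    d = policy(history)      # next design proposed from the history so far
    y = 0.5 * d              # stand-in for the real experiment's outcome
    history.append((d, y))
print(history)
```

In the real setting the policy is trained offline against a (possibly implicit) simulator, which is what shifts the expensive computation from experiment time to training time.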
arXiv Detail & Related papers (2021-11-03T16:24:05Z)
- Integrated Benchmarking and Design for Reproducible and Accessible Evaluation of Robotic Agents
We describe a new concept for reproducible robotics research that integrates development and benchmarking.
One of the central components of this setup is the Duckietown Autolab, a standardized setup that is itself relatively low-cost and reproducible.
We validate the system by analyzing the repeatability of experiments conducted using the infrastructure and show that there is low variance across different robot hardware and across different remote labs.
arXiv Detail & Related papers (2020-09-09T15:31:29Z)
- A user-centered approach to designing an experimental laboratory data platform
We take a user-centered approach to understand what essential elements of design and functionality researchers want in an experimental data platform.
We find that having the capability to contextualize rich, complex experimental datasets is the primary user requirement.
arXiv Detail & Related papers (2020-07-28T19:26:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.