A hierarchical control framework for autonomous decision-making systems:
Integrating HMDP and MPC
- URL: http://arxiv.org/abs/2401.06833v1
- Date: Fri, 12 Jan 2024 15:25:51 GMT
- Title: A hierarchical control framework for autonomous decision-making systems:
Integrating HMDP and MPC
- Authors: Xue-Fang Wang, Jingjing Jiang, Wen-Hua Chen
- Abstract summary: This paper proposes a comprehensive hierarchical control framework for autonomous decision-making arising in robotics and autonomous systems.
It addresses the intricate interplay between traditional continuous systems dynamics utilized at the low levels for control design and discrete Markov decision processes (MDP) for facilitating high-level decision making.
The proposed framework is applied to develop an autonomous lane changing system for intelligent vehicles.
- Score: 9.74561942059487
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a comprehensive hierarchical control framework for
autonomous decision-making arising in robotics and autonomous systems. In a
typical hierarchical control architecture, high-level decision making is often
characterised by discrete state and decision/control sets. However, a rational
decision is usually affected by not only the discrete states of the autonomous
system, but also the underlying continuous dynamics and even the evolution of its
operational environment. This paper proposes a holistic and comprehensive
design process and framework for this type of challenging problem, from new
modelling and design problem formulation to control design and stability
analysis. It addresses the intricate interplay between traditional continuous
systems dynamics utilized at the low levels for control design and discrete
Markov decision processes (MDP) for facilitating high-level decision making. We
model the decision making system in complex environments as a hybrid system
consisting of a controlled MDP and autonomous (i.e. uncontrolled) continuous
dynamics. Consequently, the new formulation is called as hybrid Markov decision
process (HMDP). The design problem is formulated with a focus on ensuring both
safety and optimality while taking into account the influence of both the
discrete and continuous state variables of different levels. With the help of
the model predictive control (MPC) concept, a decision maker design scheme is
proposed for the hybrid decision making model. By carefully designing
key ingredients involved in this scheme, it is shown that the recursive
feasibility and stability of the proposed autonomous decision making scheme are
guaranteed. The proposed framework is applied to develop an autonomous lane
changing system for intelligent vehicles.
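The hybrid decision-making idea can be illustrated with a toy sketch: a receding-horizon (MPC-style) decision maker that scores discrete action sequences (stay in lane / change lane) against a rollout of the underlying continuous dynamics. All dynamics, costs, and parameters below are invented for illustration and are not the paper's actual model.

```python
import itertools

DT = 0.5          # rollout time step [s]
HORIZON = 3       # number of discrete decisions looked ahead

def continuous_step(x, v):
    """Autonomous (uncontrolled) longitudinal dynamics: constant speed."""
    return x + v * DT

def stage_cost(lane, ego_x, obstacle_x, obstacle_lane):
    """Penalise sharing a lane while close to the obstacle; prefer lane 0."""
    cost = 0.1 * lane                      # mild preference for lane 0
    if lane == obstacle_lane and abs(ego_x - obstacle_x) < 10.0:
        cost += 100.0                      # safety penalty
    return cost

def decide(lane, ego_x, ego_v, obs_x, obs_v, obs_lane):
    """MPC-style decision: evaluate every action sequence over the
    horizon and return the first action of the cheapest one."""
    best_cost, best_first = float("inf"), 0
    for seq in itertools.product([0, 1], repeat=HORIZON):  # 0=stay, 1=switch
        l, ex, ox, total = lane, ego_x, obs_x, 0.0
        for a in seq:
            l = 1 - l if a else l          # discrete (MDP-like) transition
            ex = continuous_step(ex, ego_v)
            ox = continuous_step(ox, obs_v)
            total += stage_cost(l, ex, ox, obs_lane)
        if total < best_cost:
            best_cost, best_first = total, seq[0]
    return best_first

# Ego in lane 0 approaching a slow vehicle in lane 0: a lane change should win.
action = decide(lane=0, ego_x=0.0, ego_v=20.0, obs_x=15.0, obs_v=5.0, obs_lane=0)
print(action)
```

As in the paper's receding-horizon scheme, only the first action of the best sequence is executed before the problem is re-solved at the next step; the continuous states enter only through the rollout, not the discrete decision set.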
Related papers
- Decision Making in Changing Environments: Robustness, Query-Based Learning, and Differential Privacy [59.64384863882473]
We study the problem of interactive decision making in which the underlying environment changes over time subject to given constraints.
We propose a framework which provides an interpolation between the stochastic and adversarial settings of decision making.
arXiv Detail & Related papers (2025-01-24T21:31:50Z) - Platform-Aware Mission Planning [50.56223680851687]
We introduce the problem of Platform-Aware Mission Planning (PAMP), addressing it in the setting of temporal durative actions.
The first baseline approach amalgamates the mission and platform levels, while the second is based on an abstraction-refinement loop.
We prove the soundness and completeness of the proposed approaches and validate them experimentally.
arXiv Detail & Related papers (2025-01-16T16:20:37Z) - Dynamic Decision Making in Engineering System Design: A Deep Q-Learning
Approach [1.3812010983144802]
We present a framework proposing the use of the Deep Q-learning algorithm to optimize the design of engineering systems.
The goal is to find policies that maximize the output of a simulation model given multiple sources of uncertainties.
We demonstrate the effectiveness of our proposed framework by solving two engineering system design problems in the presence of multiple uncertainties.
arXiv Detail & Related papers (2023-12-28T06:11:34Z) - Correct-by-Construction Control for Stochastic and Uncertain Dynamical
Models via Formal Abstractions [44.99833362998488]
We develop an abstraction framework that can be used to solve this problem under various modeling assumptions.
We use state-of-the-art verification techniques to compute an optimal policy on the iMDP with guarantees for satisfying the given specification.
We then show that, by construction, we can refine this policy into a feedback controller for which these guarantees carry over to the dynamical model.
arXiv Detail & Related papers (2023-11-16T11:03:54Z) - Formal Controller Synthesis for Markov Jump Linear Systems with
Uncertain Dynamics [64.72260320446158]
We propose a method for synthesising controllers for Markov jump linear systems.
Our method is based on a finite-state abstraction that captures both the discrete (mode-jumping) and continuous (stochastic linear) behaviour of the MJLS.
We apply our method to multiple realistic benchmark problems, in particular, a temperature control and an aerial vehicle delivery problem.
arXiv Detail & Related papers (2022-12-01T17:36:30Z) - Stein Variational Model Predictive Control [130.60527864489168]
Decision making under uncertainty is critical to real-world, autonomous systems.
Model Predictive Control (MPC) methods have demonstrated favorable performance in practice, but remain limited when dealing with complex distributions.
We show that this framework leads to successful planning in challenging, nonconvex optimal control problems.
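A crude sampling-based MPC sketch in the spirit of distribution-based planners like this one: sample control sequences, weight them by exponentiated negative cost, and act on the weighted average. The actual method maintains a particle set updated by Stein variational gradient descent; this simplification and the 1-D toy system are ours.

```python
import math
import random

random.seed(1)
H, N, TEMP = 5, 200, 1.0     # horizon, sample count, temperature

def rollout_cost(u_seq, x0=2.0):
    """1-D toy system: drive the state x toward 0 with bounded inputs."""
    x, cost = x0, 0.0
    for u in u_seq:
        x = x + 0.5 * u                  # simple discrete-time dynamics
        cost += x * x + 0.01 * u * u     # state deviation + control effort
    return cost

samples = [[random.uniform(-1, 1) for _ in range(H)] for _ in range(N)]
weights = [math.exp(-rollout_cost(u) / TEMP) for u in samples]
total = sum(weights)
u0 = sum(w * u[0] for w, u in zip(weights, samples)) / total   # first action
print(u0 < 0)    # moving x0 = 2 toward 0 needs a negative input
```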
arXiv Detail & Related papers (2020-11-15T22:36:59Z) - Optimal Inspection and Maintenance Planning for Deteriorating Structural
Components through Dynamic Bayesian Networks and Markov Decision Processes [0.0]
Partially Observable Markov Decision Processes (POMDPs) provide a mathematical methodology for optimal control under uncertain action outcomes and observations.
We provide the formulation for developing both infinite and finite horizon POMDPs in a structural reliability context.
Results show that POMDPs achieve substantially lower costs as compared to their counterparts, even for traditional problem settings.
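The core of any such POMDP formulation is the belief update after an imperfect inspection; a minimal sketch for a two-state component (intact / damaged) is below. The probabilities are invented for illustration, not taken from the paper.

```python
P_DETERIORATE = 0.1        # chance an intact component degrades per year
P_DETECT = 0.9             # inspection flags a damaged component
P_FALSE_ALARM = 0.05       # inspection flags an intact component

def predict(b_damaged):
    """Transition step: damage is absorbing, intact may deteriorate."""
    return b_damaged + (1.0 - b_damaged) * P_DETERIORATE

def update(b_damaged, flagged):
    """Bayes update of the damage belief given an inspection outcome."""
    if flagged:
        num = P_DETECT * b_damaged
        den = num + P_FALSE_ALARM * (1.0 - b_damaged)
    else:
        num = (1.0 - P_DETECT) * b_damaged
        den = num + (1.0 - P_FALSE_ALARM) * (1.0 - b_damaged)
    return num / den

b = 0.0                      # new component: certainly intact
b = predict(b)               # one year of possible deterioration
b = update(b, flagged=True)  # inspection raises an alarm
print(round(b, 3))
```

An optimal inspection/maintenance policy then maps this belief (rather than the unobservable true state) to actions such as repair, inspect again, or do nothing.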
arXiv Detail & Related papers (2020-09-09T20:03:42Z) - Learning High-Level Policies for Model Predictive Control [54.00297896763184]
Model Predictive Control (MPC) provides robust solutions to robot control tasks.
We propose a self-supervised learning algorithm for learning a neural network high-level policy.
We show that our approach can handle situations that are difficult for standard MPC.
arXiv Detail & Related papers (2020-07-20T17:12:34Z) - Optimal by Design: Model-Driven Synthesis of Adaptation Strategies for
Autonomous Systems [9.099295007630484]
We present Optimal by Design (ObD), a framework for model-based requirements-driven synthesis of optimal adaptation strategies for autonomous systems.
ObD proposes a model for the high-level description of the basic elements of self-adaptive systems, namely the system, capabilities, requirements and environment.
Based on those elements, a Markov Decision Process (MDP) is constructed to compute the optimal strategy or the most rewarding system behaviour.
arXiv Detail & Related papers (2020-01-16T12:49:55Z)
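The final step ObD relies on, computing the most rewarding behaviour from a constructed MDP, can be sketched with value iteration on a small invented model: a system that is "nominal" or "degraded" and can "continue" or "adapt" (all states, rewards, and probabilities below are illustrative, not from the paper).

```python
STATES = ["nominal", "degraded"]
ACTIONS = ["continue", "adapt"]
GAMMA = 0.9

# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward
P = {
    "nominal":  {"continue": [("nominal", 0.9), ("degraded", 0.1)],
                 "adapt":    [("nominal", 1.0)]},
    "degraded": {"continue": [("degraded", 1.0)],
                 "adapt":    [("nominal", 0.8), ("degraded", 0.2)]},
}
R = {
    "nominal":  {"continue": 1.0, "adapt": 0.5},   # adapting has overhead
    "degraded": {"continue": 0.0, "adapt": 0.3},
}

V = {s: 0.0 for s in STATES}
for _ in range(200):                         # value iteration to convergence
    V = {s: max(R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])
                for a in ACTIONS)
         for s in STATES}

policy = {s: max(ACTIONS,
                 key=lambda a: R[s][a] + GAMMA * sum(p * V[s2]
                                                     for s2, p in P[s][a]))
          for s in STATES}
print(policy)   # adapt only when degraded
```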
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.