Stability-constrained Markov Decision Processes using MPC

Zanon M.
2022-01-01

Abstract

In this paper, we consider solving discounted Markov Decision Processes (MDPs) under the constraint that the resulting policy is stabilizing. In practice, MDPs are solved using some form of policy approximation. We will leverage recent results proposing to use Model Predictive Control (MPC) as a structured approximator in the context of Reinforcement Learning, which makes it possible to introduce stability requirements directly inside the MPC-based policy. This will restrict the solution of the MDP to stabilizing policies by construction. Because the stability theory for MPC is most mature in the undiscounted case, we will first show in this paper that stable discounted MDPs can be reformulated as undiscounted ones. This observation will entail that the undiscounted MPC-based policy with stability guarantees will produce the optimal policy for the discounted MDP if that optimal policy is stable, and the best stabilizing policy otherwise.
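As a brief, hedged illustration of the problem described in the abstract (a minimal sketch; the symbols used here, such as the stage cost \ell, discount factor \gamma, policy \pi, stabilizing policy class \Pi_s and stopping time T, are notational assumptions and are not taken from this record), the discounted MDP, its stability-constrained version, and one classical way of reading the discounted cost as an undiscounted one can be written as follows:

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Discounted MDP: minimize the expected discounted sum of stage costs
% \ell under the stochastic dynamics p, with discount factor \gamma in (0,1).
\begin{equation}
  J_\gamma(\pi) = \mathbb{E}\!\left[\sum_{k=0}^{\infty} \gamma^{k}\,
    \ell\bigl(s_k, \pi(s_k)\bigr)\right],
  \qquad s_{k+1} \sim p\bigl(\cdot \mid s_k, \pi(s_k)\bigr).
\end{equation}

% Stability-constrained version: the minimization is restricted to a
% class \Pi_s of stabilizing policies, e.g. policies delivered by an
% MPC scheme whose ingredients enforce stability by construction.
\begin{equation}
  \pi^\star \in \arg\min_{\pi \in \Pi_s}\; J_\gamma(\pi).
\end{equation}

% One classical way to read the discounted cost as an undiscounted one
% (shown only for illustration; not necessarily the reformulation
% developed in the paper): let T be a stopping time, independent of the
% trajectory, with P(T > k) = \gamma^k.  Then
\begin{equation}
  \mathbb{E}\!\left[\sum_{k=0}^{\infty} \gamma^{k}\, \ell(s_k, a_k)\right]
  =
  \mathbb{E}\!\left[\sum_{k=0}^{T-1} \ell(s_k, a_k)\right],
\end{equation}
% i.e. discounting acts like an undiscounted cost accumulated until a
% geometrically distributed termination time.

\end{document}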
Markov Decision Processes; Model Predictive Control; Stability; Safe reinforcement learning
Files in this record:

1-s2.0-S0005109822002527-main.pdf
  Type: Editorial Version (PDF)
  License: no license
  Size: 919.98 kB, Adobe PDF
  Access: not available (copy on request)

undiscounted_final.pdf
  Type: Post-print
  License: Creative Commons
  Size: 946.49 kB, Adobe PDF
  Access: open access

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11771/21319
Citations
  • Scopus: 5