
Safe reinforcement learning via projection on a safe set: How to achieve optimality?

Zanon M.; Bemporad A.
2020-01-01

Abstract

For all its successes, Reinforcement Learning (RL) still struggles to deliver formal guarantees on the closed-loop behavior of the learned policy. In particular, guaranteeing the safety of RL in safety-critical systems is a very active research topic. Some recent contributions propose to project the inputs delivered by the learned policy onto a safe set, ensuring that system safety is never jeopardized. Unfortunately, it is unclear whether this operation can be performed without disrupting the learning process. This paper addresses this issue. The problem is analysed in the context of Q-learning and policy gradient techniques. We show that the projection approach is generally disruptive for Q-learning, although a simple alternative solves the issue, while simple corrections can be applied to policy gradient methods to ensure that the policy gradients remain unbiased. The proposed results extend to safe projections based on robust MPC techniques.
Keywords: Robust MPC; Safe projection; Safe reinforcement learning
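The projection step described in the abstract can be viewed as a small quadratic program: the action proposed by the learned policy is replaced by the closest action inside the safe set. The sketch below is a minimal illustration only, under assumptions not taken from the paper (a fixed polytopic safe set and cvxpy as the QP solver); in the paper the safe set is instead obtained from robust MPC, and the contribution concerns how such a projection interacts with Q-learning and policy gradient updates.

# Minimal sketch of a safe-projection layer for a learned policy (illustrative only).
# Assumptions not taken from the paper: the safe set is a fixed polytope {u : A u <= b}
# and cvxpy is used as the QP solver; the paper derives the safe set from robust MPC.
import numpy as np
import cvxpy as cp

def project_to_safe_set(u_rl: np.ndarray, A: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Solve  min_u ||u - u_rl||^2  subject to  A u <= b.
    u = cp.Variable(u_rl.shape[0])
    problem = cp.Problem(cp.Minimize(cp.sum_squares(u - u_rl)), [A @ u <= b])
    problem.solve()
    return u.value

# Example: a 2-D box |u_i| <= 1 written as a polytope.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
u_unsafe = np.array([1.8, -0.3])              # raw action from the learned policy
u_safe = project_to_safe_set(u_unsafe, A, b)  # approximately [1.0, -0.3]

Applying the projected action keeps the system inside the safe set at every step; the paper's results then address how the learning updates must be corrected (or, for Q-learning, replaced by a simple alternative) so that this extra layer does not bias the learned policy.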
Files in this record:
  • 1-s2.0-S2405896320329360-main.pdf — Publisher's version (PDF), no licence, 623.84 kB, Adobe PDF, not available (copy on request)
  • SafeRLProjectionFinal.pdf — Post-print, Creative Commons licence, 463.34 kB, Adobe PDF, open access

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11771/18945
Citations
  • Scopus: 18