Temporal logic control of general Markov decision processes by approximate policy refinement

Sofie Haesaert, Sadegh Soudjani, Alessandro Abate

Research output: Contribution to journal › Article › Academic › peer-review



Formal verification and controller synthesis for Markov decision processes that evolve over uncountable state spaces are computationally hard and thus generally rely on approximations. In this work, we consider the correct-by-design control of general Markov decision processes (gMDPs) with respect to temporal logic properties by leveraging approximate probabilistic relations between the original model and its abstraction. We introduce a notion of robust satisfaction for the construction and verification of control strategies, which allows for deviations both in the outputs of the gMDPs and in their probabilistic transitions. The computations are performed on the reduced or abstract model, so that when a property is robustly satisfied on the abstract model, it is also satisfied on the original model under a refined control strategy.
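The robust-satisfaction idea sketched in the abstract can be illustrated on a finite abstraction: when the abstract model is related to the original gMDP by an approximate probabilistic relation with per-step transition deviation δ, a lower bound on the satisfaction probability of a bounded-horizon reachability property can be computed by value iteration that discounts δ at each step. The following is a minimal, hypothetical Python sketch of this computation; the function name, data layout, and the simplification of handling only δ (ignoring the output deviation ε, which would shrink the target set) are illustrative assumptions, not the paper's implementation.

```python
def robust_reach(P, target, horizon, delta):
    """Robust bounded-horizon reachability on a finite abstract MDP.

    P[s][a] is a dict {next_state: probability}; `target` is a set of
    states; `delta` models the per-step deviation in probabilistic
    transitions allowed by the approximate relation. Returns a dict of
    robust satisfaction lower bounds V[s].
    """
    states = list(P)
    # Initialize: target states are already satisfied.
    V = {s: (1.0 if s in target else 0.0) for s in states}
    for _ in range(horizon):
        new_V = {}
        for s in states:
            if s in target:
                new_V[s] = 1.0
                continue
            best = 0.0
            for a in P[s]:
                # Expected continuation value, penalized by the per-step
                # transition deviation delta and clipped at zero.
                val = sum(p * V[t] for t, p in P[s][a].items()) - delta
                best = max(best, max(0.0, val))
            new_V[s] = best
        V = new_V
    return V

# Toy two-state example: from s0, action 'a' reaches the target s1
# with probability 0.9; the robust bound subtracts delta = 0.1.
P = {
    's0': {'a': {'s1': 0.9, 's0': 0.1}},
    's1': {'a': {'s1': 1.0}},
}
print(robust_reach(P, {'s1'}, horizon=1, delta=0.1))
```

With horizon 1 the robust bound from `s0` is 0.9 - 0.1 = 0.8, strictly below the nominal reachability probability, reflecting the conservatism needed for the guarantee to transfer to the original model.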
Original language: English
Article number: 1712.07622v1
Number of pages: 23
Journal: arXiv.org, e-Print Archive, Physics
Publication status: Published - 20 Dec 2017

Bibliographical note

22 pages, 3 figures, submitted to the 24th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), 2018


Keywords

  • cs.SY
  • cs.LO
  • math.OC
  • 93E03, 93E20, 68W25


