Temporal logic control of general Markov decision processes by approximate policy refinement

Sofie Haesaert, Sadegh Soudjani, Alessandro Abate

Research output: Contribution to journal › Conference article › peer-review

13 Citations (Scopus)
1 Download (Pure)

Abstract

Formal verification and controller synthesis for general Markov decision processes (gMDPs) that evolve over uncountable state spaces are computationally hard and thus generally rely on approximate abstractions. In this paper, we advance the state of the art of control synthesis for temporal logic properties by computing and quantifying a less conservative gridding of the continuous state space of linear stochastic dynamic systems, and by giving a new approach to control synthesis and verification that is robust to the incurred approximation errors. These approximation errors are expressed both as deviations in the outputs of the gMDPs and as deviations in their probabilistic transitions.
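
To illustrate the kind of construction the abstract refers to, the sketch below grids the state space of a simple 1-D linear stochastic system into a finite-state MDP and reports crude indicators of the two error sources mentioned (output deviation and probabilistic-transition deficit). The system parameters, grid, input set, and error bookkeeping are assumptions made here for illustration only; they do not reproduce the paper's less conservative construction or its formal error bounds.

```python
# Illustrative sketch only: uniform gridding of a 1-D linear stochastic system
#   x_{k+1} = a*x_k + b*u_k + w_k,  w_k ~ N(0, sigma^2)
# into a finite-state MDP. All parameters below are assumptions, not taken from the paper.
import numpy as np
from scipy.stats import norm

a, b, sigma = 0.9, 0.5, 0.1          # assumed system parameters
x_lo, x_hi, n_cells = -1.0, 1.0, 20  # assumed truncated domain and grid size
inputs = np.linspace(-1.0, 1.0, 5)   # assumed finite input set

edges = np.linspace(x_lo, x_hi, n_cells + 1)
centers = 0.5 * (edges[:-1] + edges[1:])

# T[u, i, j] = P(next state lands in cell j | current cell centre i, input u)
T = np.zeros((len(inputs), n_cells, n_cells))
for ui, u in enumerate(inputs):
    mean_next = a * centers + b * u                 # predicted mean from each cell centre
    for i, m in enumerate(mean_next):
        cdf = norm.cdf(edges, loc=m, scale=sigma)   # Gaussian mass up to each grid edge
        T[ui, i, :] = np.diff(cdf)                  # mass falling in each destination cell

# Crude indicators of the two error sources (assumptions, not the paper's quantified bounds):
eps = 0.5 * (edges[1] - edges[0])                   # output deviation: half a cell width
delta = 1.0 - T.sum(axis=2).min()                   # worst-case probability lost to domain truncation
print(f"output deviation eps ~ {eps:.3f}, transition deficit delta ~ {delta:.3f}")
```

In such abstractions, a finer grid shrinks the output deviation while a larger truncated domain shrinks the lost transition probability; a synthesis procedure robust to both errors can then refine a policy computed on the finite MDP back to the original continuous-state model.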

Original language: English
Pages (from-to): 73-78
Number of pages: 6
Journal: IFAC-PapersOnLine
Volume: 51
Issue number: 16
DOIs
Publication status: Published - 1 Jan 2018
Event: 6th IFAC Conference on Analysis and Design of Hybrid Systems ADHS 2018 - Oxford, United Kingdom
Duration: 11 Jul 2018 - 13 Jul 2018
