Abstract
The synthesis of controllers guaranteeing linear temporal logic specifications on partially observable Markov decision processes (POMDPs) via their belief models is computationally challenging due to the continuous belief space. In this work, we construct a finite-state abstraction on which a control policy is synthesized and then refined back to the original belief model. We introduce a new notion of label-based approximate stochastic simulation to quantify the deviation between belief models. We develop a robust synthesis methodology that yields a lower bound on the satisfaction probability by compensating for deviations a priori, and that utilizes a less conservative control refinement.
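The robust step described in the abstract, compensating a priori for the deviation between the belief model and its finite abstraction, can be sketched as robust value iteration for a simple reachability specification. This is a minimal illustration only: the toy MDP, the single per-step deviation bound `DELTA`, and all names below are assumptions for exposition, not the paper's actual simulation relation or refinement procedure.

```python
# Minimal sketch: robust value iteration on a finite abstraction,
# subtracting an assumed per-step deviation bound DELTA so the result
# lower-bounds the satisfaction probability on the original model.
# The MDP and DELTA are illustrative, not taken from the paper.

DELTA = 0.01  # assumed per-step deviation between belief model and abstraction

# Toy abstract MDP: states 0..2, state 2 is the goal.
# P[state][action] = list of (next_state, probability) pairs.
P = {
    0: {"a": [(0, 0.2), (1, 0.8)], "b": [(0, 0.9), (2, 0.1)]},
    1: {"a": [(1, 0.3), (2, 0.7)], "b": [(0, 1.0)]},
    2: {"a": [(2, 1.0)], "b": [(2, 1.0)]},
}
GOAL = {2}

def robust_reach_lower_bound(horizon):
    """Lower-bound the probability of reaching GOAL within `horizon` steps,
    compensating for the abstraction deviation DELTA at every step."""
    V = {s: (1.0 if s in GOAL else 0.0) for s in P}
    for _ in range(horizon):
        V_new = {}
        for s in P:
            if s in GOAL:
                V_new[s] = 1.0
                continue
            best = max(sum(p * V[t] for t, p in P[s][a]) for a in P[s])
            # Subtract the deviation and clamp at 0 to keep a valid lower bound.
            V_new[s] = max(0.0, best - DELTA)
        V = V_new
    return V

print(robust_reach_lower_bound(10))
```

Because `DELTA` is subtracted at every step, the computed values are pessimistic: they remain valid lower bounds for any belief model that the abstraction simulates up to that deviation, which is the sense in which the synthesized policy's guarantee carries over after refinement.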
Original language | English |
---|---|
Pages (from-to) | 271-276 |
Number of pages | 6 |
Journal | IFAC-PapersOnLine |
Volume | 51 |
Issue number | 16 |
DOIs | |
Status | Published - 1 Jan. 2018 |
Event | 6th IFAC Conference on Analysis and Design of Hybrid Systems ADHS 2018 - Oxford, United Kingdom Duration: 11 Jul. 2018 → 13 Jul. 2018 |