On uniformly nearly-optimal Markov strategies

J. van der Wal

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


    In this paper the following result is proved. In any total-reward countable-state Markov decision process there exists a Markov strategy π which is uniformly nearly-optimal in the following sense: v(i,π) ≥ v*(i) − ε − ε·u*(i) for every initial state i. Here v* denotes the value function of the process and u* denotes the value of the process when all negative rewards are neglected.
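    The inequality above can be illustrated numerically. The sketch below (all state/action/reward choices are hypothetical, not from the paper, and a finite toy MDP stands in for the countable-state setting) computes v* by value iteration, computes u* by value iteration on the rewards clipped at zero, and checks the bound v(i,π) ≥ v*(i) − ε − ε·u*(i) for a greedy stationary (hence Markov) strategy:

    ```python
    import numpy as np

    # Hypothetical toy total-reward MDP: states 0, 1, and an absorbing state 2
    # with reward 0. P[a][s] gives the successor state, R[a][s] the reward.
    P = {0: [1, 2, 2], 1: [2, 2, 2]}
    R = {0: [-1.0, 2.0, 0.0], 1: [1.5, -0.5, 0.0]}
    actions = [0, 1]

    def value_iteration(rewards, iters=100):
        """Total-reward value iteration (converges here: all paths absorb)."""
        v = np.zeros(3)
        for _ in range(iters):
            v = np.array([max(rewards[a][s] + v[P[a][s]] for a in actions)
                          for s in range(3)])
        return v

    # v*: value function of the process.
    v_star = value_iteration(R)

    # u*: value when all negative rewards are neglected (clipped at 0).
    R_pos = {a: [max(r, 0.0) for r in R[a]] for a in actions}
    u_star = value_iteration(R_pos)

    # A greedy stationary strategy with respect to v* (a Markov strategy).
    pi = [max(actions, key=lambda a: R[a][s] + v_star[P[a][s]])
          for s in range(3)]

    def v_pi(policy, iters=100):
        """Total reward of a fixed stationary strategy."""
        v = np.zeros(3)
        for _ in range(3 if iters is None else iters):
            v = np.array([R[policy[s]][s] + v[P[policy[s]][s]]
                          for s in range(3)])
        return v

    # Check the uniform near-optimality bound for every initial state i.
    eps = 1e-6
    assert all(v_pi(pi) >= v_star - eps - eps * u_star)
    ```

    In this finite toy example the greedy strategy is in fact optimal, so the bound holds trivially; the paper's contribution is that a single Markov strategy achieves the ε-bound uniformly over all initial states in the countable-state case.
    
    
    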
    Original language: English
    Title of host publication: Operations Research Proceedings 1982 (Papers of the 11th Annual Meeting of DGOR, Frankfurt am Main, Germany, September 22-24, 1982)
    Editors: W. Bühler, B. Fleischmann, K.P. Schuster, L. Streitferdt, H. Zander
    Place of publication: Berlin
    ISBN (Electronic): 978-3-642-68997-0
    ISBN (Print): 3-540-12239-7, 978-3-540-12239-5
    Publication status: Published - 1983

    Publication series

    Name: Operations Research Proceedings (ORP)


