Discounted Markov games: generalized policy iteration method

J. van der Wal

    Research output: Contribution to journal › Journal article › Academic › peer review

    19 Citations (Scopus)
    1 Downloads (Pure)


    In this paper, we consider two-person zero-sum discounted Markov games with finite state and action spaces. We show that the Newton-Raphson or policy iteration method as presented by Pollatschek and Avi-Itzhak does not necessarily converge, contradicting a proof of Rao, Chandrasekaran, and Nair. Moreover, a set of successive approximation algorithms is presented of which Shapley's method and a total-expected-rewards version of Hoffman and Karp's method are the extreme elements.
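    As illustration of the setting the abstract describes, the sketch below implements Shapley's successive-approximation scheme for a discounted zero-sum Markov game: at each state the stage game "immediate reward plus discounted expected continuation value" is solved, and its value becomes the next iterate. This is not the paper's generalized algorithm; it is a minimal sketch assuming 2×2 action spaces (so stage games have a closed-form value), and all function and variable names are my own.

```python
def matrix_game_value_2x2(A):
    """Value of a 2x2 zero-sum matrix game (row player maximizes).

    Checks for a pure-strategy saddle point first; otherwise both
    players mix fully and the classical closed-form value applies.
    """
    (a, b), (c, d) = A
    maximin = max(min(a, b), min(c, d))  # row player's guaranteed payoff
    minimax = min(max(a, c), max(b, d))  # column player's guaranteed loss
    if maximin == minimax:               # pure saddle point exists
        return maximin
    return (a * d - b * c) / (a + d - b - c)  # fully mixed value


def shapley_iteration(rewards, transitions, beta, tol=1e-8):
    """Successive approximation (Shapley, 1953) for a discounted
    zero-sum Markov game with 2x2 action spaces.

    rewards[s][i][j]        -- stage payoff in state s for actions (i, j)
    transitions[s][i][j][t] -- probability of moving from s to t
    beta                    -- discount factor in [0, 1)
    """
    S = len(rewards)
    v = [0.0] * S
    while True:
        v_new = []
        for s in range(S):
            # Stage game: immediate reward + discounted continuation value.
            stage = [[rewards[s][i][j]
                      + beta * sum(transitions[s][i][j][t] * v[t]
                                   for t in range(S))
                      for j in range(2)] for i in range(2)]
            v_new.append(matrix_game_value_2x2(stage))
        # The map is a beta-contraction in the sup norm, so this converges.
        if max(abs(v_new[s] - v[s]) for s in range(S)) < tol:
            return v_new
        v = v_new
```

    For a single absorbing state with stage payoffs [[3, 1], [0, 2]] (matrix-game value 1.5) and beta = 0.9, the fixed point is 1.5 / (1 - 0.9) = 15, which the iteration approaches geometrically.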
    Original language: English
    Pages (from-to): 125-138
    Number of pages: 14
    Journal: Journal of Optimization Theory and Applications
    Issue number: 1
    Status: Published - 1978


