The method of value oriented successive approximations for the average reward Markov decision process

J. van der Wal

Research output: Book/Report › Report › Academic


Abstract

In this paper we consider the Markov decision process with finite state and action spaces under the criterion of average reward per unit time. We study the method of value oriented successive approximations, which has been extensively studied by Van Nunen for the total reward case. Under various conditions that guarantee that the gain of the process is independent of the starting state, together with a strong aperiodicity assumption, we show that the method converges and produces ε-optimal policies.
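The report itself is not reproduced here, so the following is only a minimal illustrative sketch in Python of what a value-oriented successive approximation scheme for a finite average-reward MDP can look like, assuming a unichain model with a strongly aperiodic transition law (e.g. after the usual aperiodicity transformation). The array layout (r[s, a], P[s, a, t]), the function name value_oriented_sa, the parameter lam and the span-based stopping rule are assumptions made for illustration, not details taken from the report.

import numpy as np

def value_oriented_sa(r, P, lam=5, eps=1e-6, max_iter=100_000):
    """Value-oriented successive approximations for an average-reward MDP (sketch).

    r[s, a]   : one-step reward for action a in state s
    P[s, a, t]: transition probability from state s to state t under action a
    lam       : number of applications of the greedy policy's operator per
                iteration (lam = 1 reduces to ordinary successive approximations)
    Returns an estimate of the gain, a greedy policy and the final value vector.
    """
    n_states, n_actions = r.shape
    v = np.zeros(n_states)

    for _ in range(max_iter):
        # Optimality operator: q[s, a] = r[s, a] + sum_t P[s, a, t] * v[t]
        q = r + P @ v                    # shape (n_states, n_actions)
        policy = q.argmax(axis=1)        # greedy ("value oriented") policy
        w = q.max(axis=1)

        # In the unichain case, min(w - v) and max(w - v) bound the gain of the
        # greedy policy and the optimal gain; their span serves as the
        # stopping criterion for epsilon-optimality.
        diff = w - v
        if diff.max() - diff.min() < eps:
            gain = 0.5 * (diff.max() + diff.min())
            return gain, policy, w

        # Value-oriented step: apply the greedy policy's operator lam - 1
        # additional times before the next improvement step.
        r_pi = r[np.arange(n_states), policy]
        P_pi = P[np.arange(n_states), policy, :]
        v = w
        for _ in range(lam - 1):
            v = r_pi + P_pi @ v
        # Subtracting a constant changes neither the increments nor the greedy
        # policy; it merely keeps the iterates numerically bounded.
        v = v - v.min()

    raise RuntimeError("no convergence within max_iter iterations")

Taking lam = 1 recovers ordinary successive approximations (value iteration), while letting lam grow large makes each iteration resemble a policy iteration step; the intermediate choice is what makes the method "value oriented".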
Original language: English
Place of Publication: Eindhoven
Publisher: Technische Hogeschool Eindhoven
Number of pages: 28
Publication status: Published - 1979

Publication series

Name: Memorandum COSOR
Volume: 7907
ISSN (Print): 0926-4493
