In this paper we consider the Markov decision process with finite state and action spaces under the criterion of average reward per unit time. We study the method of value-oriented successive approximations, which has been extensively analysed by Van Nunen for the total-reward case. Under various conditions which guarantee that the gain of the process is independent of the starting state, together with a strong aperiodicity assumption, we show that the method converges and produces ε-optimal policies.
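Concretely, a scheme of this kind alternates one maximization step of ordinary successive approximation with a number of extra applications of the one-step operator of the greedy policy just found. The NumPy sketch below shows one way this can look for a finite unichain MDP; the function name `value_oriented_vi`, the parameter `lam` (the number of extra policy applications) and the damping factor `alpha` of the aperiodicity transformation are illustrative assumptions, not notation from the memorandum.

```python
import numpy as np

def value_oriented_vi(P, r, lam=5, eps=1e-6, alpha=0.5, max_iter=100_000):
    """Hypothetical sketch of value-oriented successive approximations
    for an average-reward MDP.

    P: (A, S, S) transition matrix per action, r: (A, S) one-step rewards.
    Returns a greedy policy and a two-sided estimate of the gain g.
    """
    A, S, _ = P.shape
    # Aperiodicity (data) transformation P_a -> alpha*I + (1-alpha)*P_a:
    # it leaves the gain unchanged and enforces strong aperiodicity.
    Pt = alpha * np.eye(S) + (1.0 - alpha) * P
    v = np.zeros(S)
    for _ in range(max_iter):
        Q = r + Pt @ v                    # Q[a, s] = r[a, s] + (P_a v)(s)
        w = Q.max(axis=0)                 # one full maximization step
        diff = w - v
        if diff.max() - diff.min() < eps: # span test: greedy policy is eps-optimal
            g = 0.5 * (diff.max() + diff.min())
            return Q.argmax(axis=0), g
        f = Q.argmax(axis=0)              # greedy policy at v
        r_f = r[f, np.arange(S)]          # rewards under f
        P_f = Pt[f, np.arange(S), :]      # transitions under f
        v = w
        for _ in range(lam):              # value-oriented part: reuse f lam times
            v = r_f + P_f @ v
        v = v - v[0]                      # relative normalization keeps v bounded
    raise RuntimeError("no convergence within max_iter")

# Tiny two-state, two-action example (made-up data, for illustration only):
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
r = np.array([[1.0, 0.0],
              [2.0, 0.5]])
policy, gain = value_oriented_vi(P, r)
```

With `lam=0` the sketch reduces to ordinary successive approximation; larger values of `lam` move it towards policy iteration by exploiting the current greedy policy longer between maximization steps.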
Name | Memorandum COSOR
---|---
Volume | 7907
ISSN (Print) | 0926-4493