(Extended version of Memorandum COSOR 81-11)
This paper deals with total reward Markov decision processes with countable state space and extends various results on (nearly-)optimal stationary strategies. Strauch proved that if the rewards are nonpositive and the action space is finite then an optimal stationary strategy exists. For the case of nonnegative rewards Ornstein proved the existence of a stationary strategy f which is uniformly nearly-optimal in the multiplicative sense:
v(f) ≥ (1 - ε)v*.
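For readers less familiar with the notation, the display below sketches the standard total-reward objects the abstract refers to, together with Ornstein's multiplicative bound. The symbols E_{i,π}, X_n, A_n and r are assumed standard MDP notation and need not match the paper's exact conventions or measurability assumptions.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Total expected reward of a strategy \pi started in state i, and the
% value of the problem (assumed standard total-reward notation):
\begin{align*}
  v(\pi)(i) &= \mathbb{E}_{i,\pi}\Bigl[\sum_{n=0}^{\infty} r(X_n, A_n)\Bigr],
  & v^*(i) &= \sup_{\pi} v(\pi)(i).
\end{align*}
% Ornstein's result (nonnegative rewards): for every \varepsilon > 0 there is
% a single stationary strategy f with
\begin{equation*}
  v(f) \;\ge\; (1 - \varepsilon)\, v^* .
\end{equation*}
\end{document}
```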
Van der Wal showed that if the action space is finite then for each initial state a nearly-optimal stationary strategy exists. These partial results are connected and extended in the following theorem. If in each state where the value is nonpositive a conserving action exists, then there exists a stationary strategy f which is uniformly nearly optimal in the following sense:
v(f) ≥ v* - εu*, where u* is the value of the problem if only the positive rewards are counted.
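The following sketch spells out, in the same assumed notation as above, the quantity u*, the notion of a conserving action, and the additive bound of the theorem; the precise definitions and conditions are those of the paper.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% u^*: value of the problem in which only the positive parts of the rewards
% are counted, with r^+ = max(r, 0)  (assumed notation):
\begin{equation*}
  u^*(i) = \sup_{\pi} \mathbb{E}_{i,\pi}\Bigl[\sum_{n=0}^{\infty} r^{+}(X_n, A_n)\Bigr].
\end{equation*}
% An action a is conserving in state i if it attains the optimality equation:
\begin{equation*}
  r(i,a) + \sum_{j} p(j \mid i, a)\, v^*(j) = v^*(i).
\end{equation*}
% The theorem: if a conserving action exists in every state with v^*(i) \le 0,
% then for every \varepsilon > 0 some stationary strategy f satisfies
\begin{equation*}
  v(f) \;\ge\; v^* - \varepsilon\, u^* .
\end{equation*}
\end{document}
```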
Further, the following result is established: if an optimal strategy exists, then an optimal stationary strategy also exists. This generalizes results of Strauch and Ornstein for the negative and positive dynamic programming cases, respectively.
| Name | Memorandum COSOR |
| --- | --- |
| Volume | 8114 |
| ISSN (Print) | 0926-4493 |