Replaces Memorandum COSO 74-12.
In this paper we study the problem of the optimal stopping of a Markov chain with a countable state space. In each state i the controller either stops the process and receives a reward r(i), or continues and pays a cost c(i).
We show that, under the condition that an optimal stopping rule exists, the policy iteration method introduced by Howard produces a sequence of stopping rules whose expected returns converge to the value function.
For random walks on the integers with a special reward and cost structure, we show that the policy iteration method yields the solution of a discrete two-point boundary value problem with a free boundary. We give a simple algorithm for the computation of the optimal stopping rule.
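The policy iteration scheme described above can be sketched numerically. The instance below is purely illustrative, not the paper's: a symmetric random walk on the hypothetical state space {0, ..., N} with an assumed piecewise-linear reward r and constant cost c, and with stopping forced at the two endpoints so that every rule has a finite expected return. Each iteration evaluates the current stopping set by solving a linear system (the discrete two-point boundary value problem), then stops exactly where the immediate reward beats the one-step continuation value; the stopping sets stabilize at a free boundary around the kink of r.

```python
import numpy as np

# Hypothetical instance: symmetric random walk on {0, ..., N}; the reward
# and cost functions below are illustrative, not the paper's.
N = 20
states = np.arange(N + 1)
r = np.maximum(states - 10.0, 0.0)  # stopping reward, kinked at i = 10
c = np.full(N + 1, 0.1)             # continuation cost per step

def evaluate(stop):
    """Expected return of the rule 'stop on the set stop': solve
    v(i) = r(i) on the stopping set and
    v(i) = -c(i) + 0.5*v(i-1) + 0.5*v(i+1) on the continuation set."""
    A = np.eye(N + 1)
    b = np.empty(N + 1)
    for i in range(N + 1):
        if stop[i]:
            b[i] = r[i]
        else:
            A[i, i - 1] -= 0.5
            A[i, i + 1] -= 0.5
            b[i] = -c[i]
    return np.linalg.solve(A, b)

# Policy iteration: evaluate the current rule, then stop exactly where
# the immediate reward beats the one-step continuation value.
stop = np.zeros(N + 1, dtype=bool)
stop[0] = stop[N] = True            # always stop at the boundary
for _ in range(100):
    v = evaluate(stop)
    q = -c[1:N] + 0.5 * (v[:N - 1] + v[2:])  # continuation value, interior
    new_stop = stop.copy()
    new_stop[1:N] = r[1:N] >= q
    if np.array_equal(new_stop, stop):
        break                       # stopping rule is stable: optimal
    stop = new_stop

print("continuation region:", states[~stop])
```

At convergence the computed return satisfies v = max(r, q) pointwise, so the boundary of the final stopping set is the free boundary of the discrete problem.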