In this paper we consider several variants of the standard successive approximation technique for Markov decision processes, and we show how these variants can be generated by stopping times.
Furthermore, it is demonstrated how this class of techniques can be extended to a class of value-oriented techniques. This latter class contains, as extreme elements, several variants of Howard's policy iteration method.
For all methods presented, extrapolations are given in the form of MacQueen's upper and lower bounds.
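As a concrete illustration of the standard technique, the sketch below runs successive approximation (value iteration) on a discounted MDP and computes MacQueen's upper and lower bounds on the optimal value vector at each step, stopping once the bounds are sufficiently tight. The 2-state, 2-action transition data `P`, rewards `r`, and discount factor are hypothetical, chosen only for illustration; the paper's own variants and value-oriented extensions are not reproduced here.

```python
import numpy as np

def value_iteration_macqueen(P, r, beta, tol=1e-8, max_iter=10_000):
    """Successive approximation with MacQueen's extrapolation bounds.

    P    : (A, S, S) array, P[a, s, s'] = transition probabilities.
    r    : (A, S) array, r[a, s] = one-step reward.
    beta : discount factor in (0, 1).

    Returns componentwise lower and upper bounds on the optimal value v*.
    """
    v = np.zeros(P.shape[-1])
    lower = upper = v
    for _ in range(max_iter):
        # One successive-approximation step: v_{n+1} = max_a (r_a + beta * P_a v_n).
        v_new = (r + beta * (P @ v)).max(axis=0)
        diff = v_new - v
        # MacQueen's bounds: v* lies componentwise between
        # v_{n+1} + beta/(1-beta) * min(diff) and v_{n+1} + beta/(1-beta) * max(diff).
        lower = v_new + beta / (1.0 - beta) * diff.min()
        upper = v_new + beta / (1.0 - beta) * diff.max()
        v = v_new
        if (upper - lower).max() < tol:
            break
    return lower, upper

# Hypothetical 2-state, 2-action example (rows of each P[a] sum to 1).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
r = np.array([[1.0, 0.0], [0.5, 2.0]])
lo, hi = value_iteration_macqueen(P, r, beta=0.9)
```

The bounds bracket the optimal value vector from the first iteration on, so they serve both as a stopping criterion and as an extrapolation: any vector between `lo` and `hi` is within the final tolerance of the optimum.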