In this paper the possibility of using aggregation in the action space is investigated for some Markov decision processes of inventory control type. For the standard (s,S) inventory control model the policy improvement procedure can be executed very efficiently, so aggregation in the state space is of little use. However, in situations where decisions have an aftereffect and the old decision therefore has to be incorporated into the state, it may be rewarding to aggregate actions. Several variants of aggregation and disaggregation are formulated and analyzed, and numerical evidence is presented.
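For reference, the standard (s,S) control rule mentioned above can be sketched as follows. This is a minimal illustration, not the paper's method: the function names, the zero-lead-time assumption, and the backlogging convention are ours.

```python
def s_S_policy(inventory, s, S):
    """Order up to level S whenever inventory falls to or below the reorder point s."""
    return S - inventory if inventory <= s else 0

def simulate(s, S, demand, initial=0):
    """Simulate (s,S)-controlled inventory over a demand sequence.

    Orders arrive immediately (zero lead time); unmet demand is
    backlogged, so inventory may go negative."""
    inventory = initial
    history = []
    for d in demand:
        inventory += s_S_policy(inventory, s, S)  # review and order
        inventory -= d                            # demand is realized
        history.append(inventory)
    return history

# Example: reorder point s=2, order-up-to level S=10
print(simulate(2, 10, demand=[3, 4, 1, 6, 2]))
```

Note that the order quantity is fully determined by the inventory position, which is what makes policy improvement for this model cheap and state aggregation unattractive, as the abstract observes.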