Comparing Markov chains : combining aggregation and precedence relations applied to sets of states

A. Busic, I.M.H. Vliegen, A. Scheller-Wolf

Research output: Book/Report › Report (Academic)


Abstract

Numerical methods for solving Markov chains are in general inefficient if the state space of the chain is very large (or infinite) and lacks a simple repeating structure. One alternative to solving such chains is to construct models that are simple to analyze and that provide bounds for a reward function of interest. We present a new bounding method for Markov chains inspired by Markov reward theory; our method constructs bounds by redirecting selected sets of transitions, facilitating an intuitive interpretation of the modifications to the original system. We show that our method is compatible with strong aggregation of Markov chains; thus we can obtain bounds for the initial chain by analyzing a much smaller chain. We illustrate our method on a problem of order fill rates for an inventory system of service tools.
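The strong aggregation the abstract refers to is the standard notion of (strong) lumpability: when the aggregated transition probabilities do not depend on which state of a block the chain occupies, the chain can be analyzed on the partition rather than on the full state space. The following is a minimal sketch of that aggregation step only, not the report's bounding construction; the partition, the toy transition matrix, and the block reward vector are illustrative assumptions.

```python
import numpy as np

# Sketch of strong aggregation (lumping) of a finite Markov chain.
# Assumes the partition satisfies strong lumpability: for every block B,
# the probability of jumping into B is the same for all states of a block.

def aggregate(P, partition):
    """Lump transition matrix P over the given list of state blocks."""
    k = len(partition)
    Q = np.zeros((k, k))
    for a, block_a in enumerate(partition):
        rep = block_a[0]  # under strong lumpability any representative works
        for b, block_b in enumerate(partition):
            Q[a, b] = P[rep, block_b].sum()
    return Q

def stationary(P):
    """Stationary distribution of an irreducible chain via the eigenproblem."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

# Toy 4-state chain whose blocks {0,1} and {2,3} are strongly lumpable.
P = np.array([
    [0.5, 0.2, 0.2, 0.1],
    [0.3, 0.4, 0.1, 0.2],
    [0.1, 0.1, 0.5, 0.3],
    [0.0, 0.2, 0.4, 0.4],
])
partition = [[0, 1], [2, 3]]
Q = aggregate(P, partition)

# Expected stationary reward computed on the aggregated chain; in the
# report's setting the aggregated/modified chain is built so that this
# quantity bounds the reward of the original chain.
reward = np.array([0.0, 1.0])  # assumed reward per aggregated block
print(stationary(Q) @ reward)
```

Because the aggregated chain has one state per block, the reward computation above runs on a 2x2 matrix instead of the original 4x4 one; the same reduction is what makes the approach attractive for very large state spaces.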
Original language: English
Place of publication: Eindhoven
Publisher: Eurandom
Number of pages: 34
Publication status: Published - 2009

Publication series

Name: Report Eurandom
Volume: 2009013
ISSN (Print): 1389-2355

