Standard value function approaches to finding policies for Partially
Observable Markov Decision Processes (POMDPs) are generally considered
intractable for large models. The intractability of these algorithms stems
largely from computing an exact, optimal policy over the
entire belief space. However, in real-world POMDP problems, com