Distributed Model Shaping for Scaling to Decentralized POMDPs with Hundreds of Agents

Prasanna Velagapudi, Pradeep Varakantham, Katia Sycara, Paul Scerri
Liz Sonenberg, Peter Stone, Kagan Tumer, Pinar Yolum (eds.)
10th International Joint Conference on Autonomous Agents & Multi-Agent Systems (AAMAS 2011), pages 955–962
IFAAMAS, Taipei, Taiwan
May 2011

The use of distributed POMDPs for cooperative teams has been severely limited by the incredibly large joint policy-space that results from combining the policy-spaces of the individual agents. However, much of the computational cost of exploring the entire joint policy space can be avoided by observing that in many domains important interactions between agents occur in a relatively small set of scenarios, previously defined as coordination locales (CLs). Moreover, even when numerous interactions might occur, given a set of individual policies there are relatively few actual interactions. Exploiting this observation and building on an existing model shaping algorithm, this paper presents D-TREMOR, an algorithm in which cooperative agents iteratively generate individual policies, identify and communicate possible interactions between their policies, shape their models based on this information and generate new policies. D-TREMOR has three properties that jointly distinguish it from previous DEC-POMDP work:
(1) it is completely distributed;
(2) it is scalable (allowing 100 agents to compute a “good” joint policy in under 6 hours) and
(3) it has low communication overhead. 
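
To make the generate/communicate/shape/re-solve cycle concrete, the following Python fragment sketches one agent's loop under simplifying assumptions. Every name in it (LocalModel, solve_local_pomdp, detect_coordination_locales, shape_model, the exchange callable) is an illustrative placeholder and not the authors' implementation or API; the "solver" and "shaping" here are deliberately trivial stand-ins.

from dataclasses import dataclass, field

@dataclass
class LocalModel:
    rewards: dict = field(default_factory=dict)   # (state, action) -> reward

def solve_local_pomdp(model):
    # Toy stand-in for a single-agent POMDP solver: pick the
    # highest-reward action for each state.
    policy = {}
    for (s, a), r in model.rewards.items():
        if s not in policy or r > model.rewards[(s, policy[s])]:
            policy[s] = a
    return policy

def detect_coordination_locales(model, policy):
    # Toy CL detector: the (state, action) pairs the current policy uses.
    return set(policy.items())

def shape_model(model, others_cls, penalty=1.0):
    # Toy shaping: discount locales that peers are likely to occupy.
    shaped = LocalModel(dict(model.rewards))
    for sa in others_cls:
        if sa in shaped.rewards:
            shaped.rewards[sa] -= penalty
    return shaped

def d_tremor_agent(model, exchange, max_iters=10):
    """One agent's loop: solve, exchange CLs, shape the model, re-solve."""
    policy = solve_local_pomdp(model)
    for _ in range(max_iters):
        my_cls = detect_coordination_locales(model, policy)
        others_cls = exchange(my_cls)        # send own CLs, receive peers'
        new_policy = solve_local_pomdp(shape_model(model, others_cls))
        if new_policy == policy:             # fixed point: stop iterating
            break
        policy = new_policy
    return policy

In an actual run, exchange would be the agent's message-passing layer and solve_local_pomdp a real single-agent POMDP solver; the sketch only shows how CL exchange, model shaping and re-planning interleave without any centralized computation.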

D-TREMOR complements these traits with the following key contributions, which ensure improved scalability and solution quality:
(a) techniques to ensure convergence;
(b) faster approaches to detect and evaluate CLs;
(c) heuristics to capture dependencies between CLs; and
(d) novel shaping heuristics to aggregate effects of CLs. 
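
As a purely illustrative reading of item (d), one simple way to think about aggregation is as an expectation over the CLs that affect the same local state-action pair. The snippet below is an assumption made for the example: the probabilities, reward deltas and the linear expected-value combination are invented here and are not the paper's actual heuristics.

def aggregate_cl_effects(base_reward, cl_effects):
    """cl_effects: list of (interaction_probability, reward_delta) pairs
    for CLs that touch the same local (state, action)."""
    shaped = base_reward
    for p, delta in cl_effects:
        shaped += p * delta              # expected effect of each CL
    return shaped

# A likely collision (costly) and a rare beneficial hand-off at one locale.
print(aggregate_cl_effects(10.0, [(0.6, -5.0), (0.1, 2.0)]))  # prints 7.2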

While the resulting policies are not globally optimal, empirical results show that the individual policies effectively manage uncertainty and that the joint policy outperforms policies generated by independent solvers.

Keywords: DEC-POMDP, Uncertainty, Multi-agent systems