Decision making is an important skill of autonomous agents, but, in many real-world systems, this task is complicated by uncertainty about the effects of actions and by limited sensing capabilities. In particular, we will be concerned with planning problems that optimize how an agent should act given a model of its environment and its task. As agents often do not exist in isolation, attention will be given to the problem of decision making under uncertainty with multiple, interacting agents. Key issues here are how agents should coordinate and whether, what, how, and when agents should communicate with each other.
In this tutorial, we will build on the Markov decision process (MDP) and its extensions, such as the multiagent MDP and the partially observable MDP, to formalize such settings. A particular focus will be on groups of agents that share common resources and need to coordinate their use. The models we will cover allow a team of agents to coordinate under a variety of assumptions about what and when agents communicate.
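For reference, the single-agent MDP that these extensions build on can be stated as follows (a standard formulation; the tutorial's own notation may differ):

```latex
An MDP is a tuple $\langle S, A, T, R \rangle$, where $S$ is a finite set of
states, $A$ is a finite set of actions, $T(s' \mid s, a)$ gives the probability
of transitioning to state $s'$ when action $a$ is taken in state $s$, and
$R(s, a)$ is the immediate reward. With discount factor $\gamma \in [0, 1)$,
the optimal value function satisfies the Bellman optimality equation:
\[
  V^*(s) \;=\; \max_{a \in A} \Big[ R(s, a)
    \;+\; \gamma \sum_{s' \in S} T(s' \mid s, a)\, V^*(s') \Big].
\]
```

The multiagent and partially observable extensions mentioned above generalize this tuple, e.g., by replacing the single action with a joint action over agents, or by adding an observation model when states are not directly observed.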