Seminar Series

Learning to Shape Interactions: Models and Algorithms

We are motivated by the problem of building interactively intelligent robots. One attribute of such an autonomous system is the ability to make predictions about the actions and intentions of other agents in a dynamic environment, and to adapt its own decisions accordingly. This kind of ability is especially important when robots are called upon to work closely with human users and operators. I will begin my talk by briefly describing some robotic systems we have built that exhibit this ability, including mobile robots that can navigate in crowded spaces and humanoid robots that can cooperate with human co-workers. Underpinning such systems are a variety of algorithmic tools for behaviour prediction, categorisation and decision-making. I will present some recent results from my group's work in this area.

First, in the domain of spatial navigation, I will discuss a framework for planning in the presence of other agents, based on counterfactual reasoning about models of others' behaviour. The robotic system also integrates a lightweight motion model and distributed visual tracking to enable fast and scalable motion planning.

I will then outline a model for ad hoc multi-agent interaction without prior coordination. By conceptualising the interaction as a stochastic Bayesian game, the choice problem is formulated in terms of types in an incomplete-information game, yielding a learning algorithm that combines the benefits of Harsanyi's notion of types and Bellman's notion of optimality in sequential decision-making. The theoretical arguments will be supported by preliminary results from experiments involving human-machine interaction, such as in the Prisoner's Dilemma, where we show a higher rate of coordination than alternative multi-agent learning algorithms.
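To make the type-based formulation concrete, one standard way to write it down is the following (the notation is a generic gloss on Harsanyi-style type reasoning, not necessarily the exact formulation used in the talk). The agent maintains a Bayesian posterior over a hypothesised set of opponent types \Theta given the interaction history H^t, and selects the action that maximises expected return under that posterior:

\[
  P(\theta \mid H^t) \;\propto\; P(\theta) \prod_{\tau < t} \pi_\theta\!\left(a_{-i}^{\tau} \mid H^{\tau}\right),
  \qquad
  a_i^{*} \in \arg\max_{a_i} \sum_{\theta \in \Theta} P(\theta \mid H^t) \sum_{a_{-i}} \pi_\theta\!\left(a_{-i} \mid H^t\right) Q\!\left(s_t, (a_i, a_{-i})\right),
\]

where \pi_\theta is the action distribution of type \theta and the joint-action values Q are defined by a Bellman-style recursion over future states. This is where the two ingredients meet: Harsanyi's types supply the expectation over other agents, and Bellman's optimality supplies the sequential structure of Q.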
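As a runnable illustration, the sketch below plays the repeated Prisoner's Dilemma while maintaining a posterior over a small hand-picked set of opponent types (tit-for-tat, always-cooperate, always-defect). The type set, the payoff matrix, the uniform prior, and the myopic one-step lookahead are all illustrative assumptions made here for brevity, not the speaker's actual algorithm.

# A self-contained sketch of type-based action selection in the repeated
# Prisoner's Dilemma. Type set, payoffs, and the myopic value estimate are
# illustrative assumptions only.

C, D = "C", "D"

# Row player's payoffs in the standard Prisoner's Dilemma.
PAYOFF = {(C, C): 3, (C, D): 0, (D, C): 5, (D, D): 1}

def dist(p_cooperate):
    return {C: p_cooperate, D: 1.0 - p_cooperate}

# Hypothesised opponent types: each maps the history (a list of
# (our_action, their_action) pairs) to a distribution over actions.
def tit_for_tat(history):
    # Cooperates first, then copies our previous move.
    return dist(1.0 if not history or history[-1][0] == C else 0.0)

TYPES = {
    "tit_for_tat": tit_for_tat,
    "always_cooperate": lambda history: dist(1.0),
    "always_defect": lambda history: dist(0.0),
}

def posterior(history):
    # Bayesian posterior over types given observed opponent actions,
    # starting from a uniform prior (Harsanyi's types).
    weights = {name: 1.0 for name in TYPES}
    for t, (_, their_action) in enumerate(history):
        past = history[:t]
        for name, policy in TYPES.items():
            weights[name] *= policy(past)[their_action]
    total = sum(weights.values()) or 1.0  # guard: every type ruled out
    return {name: w / total for name, w in weights.items()}

def choose_action(history):
    # Greedy one-step expected payoff under the type posterior. A full
    # treatment would use a Bellman backup over future rounds instead.
    post = posterior(history)
    def expected_payoff(ours):
        return sum(
            p * sum(q * PAYOFF[(ours, theirs)]
                    for theirs, q in TYPES[name](history).items())
            for name, p in post.items())
    return max((C, D), key=expected_payoff)

# Example: ten rounds against an (initially unknown) tit-for-tat opponent.
history = []
for _ in range(10):
    ours = choose_action(history)
    theirs = max(tit_for_tat(history), key=tit_for_tat(history).get)
    history.append((ours, theirs))
print(history)
print(posterior(history))

Note that with a purely myopic lookahead this sketch settles into mutual defection against tit-for-tat, even though the posterior correctly identifies the opponent's type; planning over future rounds, the Bellman part of the combination, is what makes sustained cooperation rational.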