In computer science, an agent can be thought of as a computational entity that repeatedly perceives its environment and takes actions so as to optimize long-term reward. We consider intelligence to be the ability of an agent to achieve goals in a wide range of environments (Legg & Hutter). Viewed in evolutionary and ecological terms, the richest environments for a given agent are themselves evolving collections of agents, whether biological organisms or companies competing within a market.

In this lecture, Thore will discuss the important role multi-agent learning has to play in artificial intelligence research and the challenges it presents, drawing on two example projects from multi-agent learning work at DeepMind. First, Thore will show how advances in deep reinforcement learning can be used to study the age-old question of how cooperation arises among self-interested agents. By defining Sequential Social Dilemmas, this work goes beyond simple matrix games such as the famous game-theoretic example of the Prisoner's Dilemma, and can model aspects of social dilemmas that matrix games cannot capture, such as temporal dynamics and coordination problems. Second, Thore will discuss the AlphaGo project, in which DeepMind used the multi-agent technique of learning from self-play to create the first computer program to beat a top professional player at the full-size game of Go, a feat thought to be at least a decade away by Go and AI experts alike.
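To make the matrix-game baseline concrete, here is a minimal sketch of the Prisoner's Dilemma, using one conventional choice of payoffs (the specific numbers are an assumption; any values with temptation > reward > punishment > sucker give the same structure):

```python
C, D = 0, 1  # action indices: cooperate, defect

# payoff[(a1, a2)] = (reward to player 1, reward to player 2)
# Assumed conventional values: mutual cooperation 3, temptation 5,
# sucker's payoff 0, mutual defection 1.
payoff = {
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}

def best_response(opponent_action):
    """Player 1's payoff-maximizing reply to a fixed opponent action."""
    return max((C, D), key=lambda a: payoff[(a, opponent_action)][0])

# Defection is the best reply whatever the opponent does, so mutual
# defection is the unique equilibrium, even though mutual cooperation
# would pay both players more -- the dilemma in a nutshell.
print(best_response(C), best_response(D))  # -> 1 1
```

A Sequential Social Dilemma keeps this incentive structure but replaces the single simultaneous move with a temporally extended environment, which is why it can express dynamics and coordination problems that this one-shot matrix cannot.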